Sample records for scan conversion algorithm

  1. Novel grid-based optical Braille conversion: from scanning to wording

    NASA Astrophysics Data System (ADS)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

    Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to provide the final output. The algorithms employed in this article can be easily modified to be implemented on other visual pattern recognition systems and text extraction applications. This technique has several advantages, including: simplicity of the algorithm, high speed of execution, ability to help visually impaired and blind people to work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille to understand hard-copy Braille manuscripts.
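
    A minimal sketch of the grid-fitting idea described above, assuming a binarized dot map and the standard 2x3 Braille cell layout; the tiny lookup table and function names are illustrative, not from the paper:

    ```python
    import numpy as np

    # Hypothetical demo table: 6-bit dot pattern (dots 1-6, column-major) -> ASCII.
    BRAILLE_TO_ASCII = {0b000001: 'a', 0b000011: 'b', 0b001001: 'c'}

    def decode_cell(cell: np.ndarray) -> str:
        """cell: 3x2 boolean array of detected dots (True = raised dot)."""
        bits = 0
        for col in range(2):
            for row in range(3):
                if cell[row, col]:
                    bits |= 1 << (col * 3 + row)
        return BRAILLE_TO_ASCII.get(bits, '?')

    def decode_page(dots: np.ndarray) -> str:
        """dots: HxW boolean grid of dot detections (H divisible by 3, W by 2)."""
        lines = []
        for r in range(0, dots.shape[0], 3):
            row = [decode_cell(dots[r:r + 3, c:c + 2]) for c in range(0, dots.shape[1], 2)]
            lines.append(''.join(row))
        return '\n'.join(lines)

    page = np.zeros((3, 6), dtype=bool)
    page[0, 0] = True             # dot 1 in the first cell -> 'a'
    print(decode_page(page))      # "a??"
    ```

    A dictionary check, as the abstract describes, would then run over the decoded words to reject '?' artifacts and misrecognized cells.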

  2. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    PubMed

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20- or derived 17-segment models was confirmed to be 5% myocardium abnormal, corresponding to a summed stress score greater than 3. Of note, the 17-segment model demonstrated a trend toward fewer mildly abnormal scans and more normal and severely abnormal scans. An algorithm for conversion of 20-segment perfusion scores to 17-segment scores has been developed that is highly concordant with expert visual analysis by the 17-segment model and provides nearly identical prognostic information. This conversion model may provide a mechanism for comparison of studies analyzed by the 17-segment system with previous studies analyzed by the 20-segment approach.
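
    The normalization step mentioned above (converting summed scores to percent myocardium abnormal) is simple enough to state as code; a hedged sketch, assuming each segment is scored 0-4 so the maximum summed score is 4 times the segment count:

    ```python
    def percent_myocardium_abnormal(summed_score: int, n_segments: int) -> float:
        """Convert a summed perfusion score to percent myocardium abnormal."""
        return 100.0 * summed_score / (4 * n_segments)

    # The reported 5% cutoff corresponds to a summed stress score > 3:
    print(percent_myocardium_abnormal(4, 20))   # 5.0  (20-segment model)
    print(percent_myocardium_abnormal(4, 17))   # ~5.9 (17-segment model)
    ```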

  3. Converting optical scanning holograms of real objects to binary Fourier holograms using an iterative direct binary search algorithm.

    PubMed

    Leportier, Thibault; Park, Min Chul; Kim, You Seok; Kim, Taegeun

    2015-02-09

    In this paper, we present a three-dimensional holographic imaging system. The proposed approach records a complex hologram of a real object using optical scanning holography, converts the complex form to binary data, and then reconstructs the recorded hologram using a spatial light modulator (SLM). The conversion from the recorded hologram to a binary hologram is achieved using a direct binary search algorithm. We present experimental results that verify the efficacy of our approach. To the best of our knowledge, this is the first time that a hologram of a real object has been reconstructed using a binary SLM.
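
    A direct binary search is, at its core, a greedy pixel-flip loop; the sketch below is a generic toy version (an inverse FFT stands in for the actual optical propagation model), not the authors' implementation:

    ```python
    import numpy as np

    def dbs_binarize(target: np.ndarray, n_sweeps: int = 3) -> np.ndarray:
        """Greedily binarize a hologram so its reconstruction approximates `target`."""
        holo = (np.random.rand(*target.shape) > 0.5).astype(float)
        error = lambda h: np.sum(np.abs(np.fft.ifft2(h) - target) ** 2)
        best = error(holo)
        for _ in range(n_sweeps):
            for idx in np.ndindex(holo.shape):
                holo[idx] = 1.0 - holo[idx]       # trial flip
                trial = error(holo)
                if trial < best:
                    best = trial                  # flip reduced the error: keep it
                else:
                    holo[idx] = 1.0 - holo[idx]   # revert
        return holo
    ```

    Practical implementations update the reconstruction error incrementally after each flip rather than recomputing the full transform, which is what makes the search tractable.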

  4. The BRL-CAD Package: An Overview

    DTIC Science & Technology

    2013-04-01

    many different display devices to be supported. The types of primitives supported include: arbitrary boxes of up to eight vertices, ellipsoids...file size. Many algorithms simply run until all of the data is gone, and some don't even care about scan lines at all. 5.2. Format Conversion Several

  5. Denoising of polychromatic CT images based on their own noise properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr

    Purpose: Because of high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since the CT scan introduces radiation exposure to patients, however, dose reduction has recently been recognized as an important issue in CT imaging. However, low-dose CT increases noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space-variant noise statistics, which are uniquely determined according to the system geometry and scanned object, and (ii) subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high-performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. They then estimate the space-variant noise variance distribution on the image domain. Since many high-performance denoising algorithms are available for Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near-Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance have a linear relationship, and then produce a Gaussian image via a variance stabilizing transform. The authors then apply a block-matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain a final denoised image. To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced so that the denoised image has homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm delivers considerably better performance than the existing denoising algorithms.
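
    The Gaussianization step can be illustrated with a standard variance-stabilizing transform; the Anscombe pair below is a stand-in for the paper's mean-variance linearization, and the Gaussian filter stands in for the block-matching 4D denoiser:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter   # stand-in Gaussian-noise denoiser

    def anscombe(x):
        # Stabilizes Poisson-like noise to approximately unit variance.
        return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0, 0.0))

    def inverse_anscombe(y):
        return (y / 2.0) ** 2 - 3.0 / 8.0

    def denoise(img):
        stabilized = anscombe(img)                        # noise ~ N(0, 1) afterwards
        smoothed = gaussian_filter(stabilized, sigma=1.0) # any Gaussian denoiser fits here
        return inverse_anscombe(smoothed)
    ```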

  6. Clustering of color map pixels: an interactive approach

    NASA Astrophysics Data System (ADS)

    Moon, Yiu Sang; Luk, Franklin T.; Yuen, K. N.; Yeung, Hoi Wo

    2003-12-01

    The demand for digital maps continues to rise as mobile electronic devices become more popular. Instead of creating the entire map from scratch, we may convert a scanned paper map into a digital one. Color clustering is the very first step of the conversion process. Currently, most existing clustering algorithms are fully automatic. They are fast and efficient but may not work well in map conversion because of the numerous ambiguous issues associated with printed maps. Here we introduce two interactive approaches for color clustering on the map: color clustering with pre-calculated index colors (PCIC) and color clustering with pre-calculated color ranges (PCCR). We also introduce a memory model that can enhance and integrate different image processing techniques for fine-tuning the clustering results. Problems and examples of the algorithms are discussed in the paper.
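
    The PCIC step reduces to a nearest-color assignment against a pre-calculated palette; a minimal sketch, with an illustrative palette rather than one taken from the paper:

    ```python
    import numpy as np

    def cluster_with_index_colors(image: np.ndarray, palette: np.ndarray) -> np.ndarray:
        """image: HxWx3 uint8; palette: Kx3; returns an HxW map of palette indices."""
        pixels = image.reshape(-1, 3).astype(float)
        # squared Euclidean distance of every pixel to every palette color
        d2 = ((pixels[:, None, :] - palette[None, :, :].astype(float)) ** 2).sum(axis=2)
        return d2.argmin(axis=1).reshape(image.shape[:2])

    palette = np.array([[255, 255, 255], [0, 0, 0], [0, 90, 200]])  # paper, ink, water
    ```

    The interactive part of the approach lies in letting the user supply and refine that palette; the assignment step itself stays the same.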

  7. Corpus-based Customization for an Ontology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2010-09-14

    CCAT scans a corpus of text for terms, and computes lexical similarity between corpus terms and taxonomy terms. Based on a set of metrics and a learning algorithm, the system inserts corpus terms into the taxonomy. Conversely, terms from the taxonomy are disambiguated based on the text in the corpus. Unused terms are discarded, and infrequently used senses of terms are collapsed to make the taxonomy more manageable.

  8. Resampling algorithm for the Spatial Infrared Imaging Telescope (SPIRIT III) Fourier transform spectrometer

    NASA Astrophysics Data System (ADS)

    Sargent, Steven D.; Greenman, Mark E.; Hansen, Scott M.

    1998-11-01

    The Spatial Infrared Imaging Telescope (SPIRIT III) is the primary sensor aboard the Midcourse Space Experiment (MSX), which was launched 24 April 1996. SPIRIT III included a Fourier transform spectrometer that collected terrestrial and celestial background phenomenology data for the Ballistic Missile Defense Organization (BMDO). This spectrometer used a helium-neon reference laser to measure the optical path difference (OPD) in the spectrometer and to command the analog-to-digital conversion of the infrared detector signals, thereby ensuring the data were sampled at precise increments of OPD. Spectrometer data must be sampled at accurate increments of OPD to optimize the spectral resolution and spectral position of the transformed spectra. Unfortunately, a failure in the power supply preregulator at the MSX spacecraft/SPIRIT III interface early in the mission forced the spectrometer to be operated without the reference laser until a failure investigation was completed. During this time data were collected in a backup mode that used an electronic clock to sample the data. These data were sampled evenly in time, and because the scan velocity varied, at nonuniform increments of OPD. The scan velocity profile depended on scan direction and scan length, and varied over time, greatly degrading the spectral resolution and spectral and radiometric accuracy of the measurements. The Convert software used to process the SPIRIT III data was modified to resample the clock-sampled data at even increments of OPD, using scan velocity profiles determined from ground and on-orbit data, greatly improving the quality of the clock-sampled data. This paper presents the resampling algorithm, the characterization of the scan velocity profiles, and the results of applying the resampling algorithm to on-orbit data.
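
    The resampling described above can be sketched in a few lines: integrate the velocity profile to get the OPD at each clock sample, then interpolate onto a uniform OPD grid (linear interpolation and a monotonically advancing scan are assumed here; the record does not specify the Convert software's interpolator):

    ```python
    import numpy as np

    def resample_to_uniform_opd(samples, velocity, dt, n_out):
        """samples: interferogram sampled evenly in time;
        velocity: scan velocity at each sample time; dt: sampling interval."""
        opd = np.cumsum(velocity) * dt          # OPD reached at each time sample
        opd -= opd[0]
        uniform = np.linspace(0.0, opd[-1], n_out)
        return uniform, np.interp(uniform, opd, samples)
    ```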

  9. Seasat low-rate data system

    NASA Technical Reports Server (NTRS)

    Brown, J. W.; Cleven, G. C.; Klose, J. C.; Lame, D. B.; Yamarone, C. A.

    1979-01-01

    The Seasat low-rate data system, an end-to-end data-processing and data-distribution system for the four low-rate sensors (radar altimeter, Seasat-A scatterometer system, scanning multichannel microwave radiometer, and visible and infrared radiometer) carried aboard the satellite, is discussed. The function of the distributed, nonreal-time, magnetic-tape system is to apply necessary calibrations, corrections, and conversions to yield geophysically meaningful products from raw telemetry data. The algorithms developed for processing data from the different sensors are described, together with the data catalogs compiled.

  10. A CT and MRI scan to MCNP input conversion program.

    PubMed

    Van Riper, Kenneth A

    2005-01-01

    We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities to an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
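
    The intensity-to-material mapping at the heart of such a converter can be sketched as below; the intensity ranges and material numbers are hypothetical, not the program's defaults:

    ```python
    import numpy as np

    RANGES = [((0, 300), 1),       # air         -> MCNP material 1 (illustrative)
              ((300, 1200), 2),    # soft tissue -> material 2
              ((1200, 4096), 3)]   # bone        -> material 3

    def material_map(slice_: np.ndarray) -> np.ndarray:
        """Assign a material number to every pixel of one scan slice."""
        mat = np.zeros_like(slice_, dtype=int)
        for (lo, hi), m in RANGES:
            mat[(slice_ >= lo) & (slice_ < hi)] = m
        return mat

    def voxel_fractions(mat: np.ndarray, block: int = 8):
        """Material fractions over a block x block voxel grid laid on one slice."""
        for r in range(0, mat.shape[0] - block + 1, block):
            for c in range(0, mat.shape[1] - block + 1, block):
                vox = mat[r:r + block, c:c + block]
                ids, counts = np.unique(vox, return_counts=True)
                yield (r, c), dict(zip(ids.tolist(), (counts / counts.sum()).tolist()))
    ```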

  11. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
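
    The ingredients named here (soft thresholding, momentum, adaptive restart) can be shown in a generic FISTA-style solver for min_x 0.5*||Ax - y||^2 + lam*||x||_1; this sketch uses a single scalar Lipschitz constant, which is precisely what BARISTA replaces with B1-based majorizing matrices:

    ```python
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista_restart(A, y, lam, n_iter=200):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = z = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(n_iter):
            x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)   # majorize-minimize step
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            if np.dot(z - x_new, x_new - x) > 0:  # adaptive (gradient) restart test
                t_new, z = 1.0, x_new.copy()      # reset momentum
            else:
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x
    ```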

  12. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  13. The utility of repeat sestamibi scans in patients with primary hyperparathyroidism after an initial negative scan.

    PubMed

    Krishnamurthy, Vikram D; Sound, Sara; Okoh, Alexis K; Yazici, Pinar; Yigitbas, Hakan; Neumann, Donald; Doshi, Krupa; Berber, Eren

    2017-06-01

    We analyzed the utility of repeated sestamibi scans in patients with primary hyperparathyroidism and its effects on operative referral. We carried out a retrospective review of patients with primary hyperparathyroidism who underwent repeated sestamibi scans exclusively within our health system between 1996 and 2015. Patient demographic, presentation, laboratory, imaging, operative, and pathologic data were reviewed. Univariate analysis with JMP Pro v12 was used to identify factors associated with conversion from an initial negative to a subsequent positive scan. After exclusion criteria (including reoperations), we identified 49 patients, in whom 59% (n = 29) of subsequent scans remained negative and 41% (n = 20) converted to positive. Factors associated with conversion from an initial negative to a subsequent positive scan included classic presentation and second scans with iodine subtraction (P = .04). Nonsurgeons were less likely to order an iodine-subtraction scan (P < .05). Fewer patients with negative imaging were referred to surgery (33% vs 100%, P = .005), and median time to operation after the first negative scan was 25 months (range 1.4-119). Surgeon-performed ultrasonography had greater sensitivity and positive predictive value than repeated sestamibi scans. Negative sestamibi scans decreased and delayed operative referral. Consequently, we identified several process improvement initiatives, including education regarding superior institutional imaging. Combining all findings, we created an algorithm for evaluating patients with primary hyperparathyroidism after initially negative sestamibi scans, which incorporates surgeon-performed ultrasonography. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. SU-F-I-24: Feasibility of Magnetic Susceptibility to Relative Electron Density Conversion Method for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, K; Kadoya, N; Chiba, M

    2016-06-15

    Purpose: The aim of this study is to develop radiation treatment planning using magnetic susceptibility obtained from quantitative susceptibility mapping (QSM) via MR imaging. This study demonstrates the feasibility of a method for generating a substitute for a CT image from an MRI. Methods: The head of a healthy volunteer was scanned using a CT scanner and a 3.0 T MRI scanner. The CT imaging was performed with a slice thickness of 2.5 mm at 80 and 120 kV (dual-energy scan). These CT images were converted to relative electron density (rED) using the CT-rED conversion table generated by a previous dual-energy CT scan. The CT-rED conversion table was generated by converting the energy-subtracted CT number to rED via a single linear relationship. One T2 star-weighted 3D gradient echo-based sequence with four different echo times was acquired using the MRI scanner. These T2 star-weighted images were used to estimate the phase data. To estimate the local field map, a Laplacian unwrapping of the phase and a background field removal algorithm were implemented to process the phase data. To generate a magnetic susceptibility map from the local field map, we used the morphology enabled dipole inversion method. The rED map was resampled to the same resolution as the magnetic susceptibility map, and the magnetic susceptibility-rED conversion table was obtained via voxel-by-voxel mapping between the magnetic susceptibility and rED maps. Results: A correlation between magnetic susceptibility and rED was not observed with our method. Conclusion: Our results show that a correlation between magnetic susceptibility and rED is not observed. As the next step, we assume that each voxel of the magnetic susceptibility map comprises two materials, such as water (0 ppm) and bone (-2.2 ppm) or water and marrow (0.81 ppm). The elements of each voxel were estimated from the ratio of the two materials.

  15. Text, photo, and line extraction in scanned documents

    NASA Astrophysics Data System (ADS)

    Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan

    2012-07-01

    We propose a page layout analysis algorithm to classify a scanned document into different regions such as text, photo, or strong lines. The proposed scheme consists of five modules. The first module performs several image preprocessing techniques such as image scaling, filtering, color space conversion, and gamma correction to enhance the scanned image quality and reduce the computation time in later stages. Text detection is applied in the second module wherein wavelet transform and run-length encoding are employed to generate and validate text regions, respectively. The third module uses a Markov random field based block-wise segmentation that employs a basis vector projection technique with maximum a posteriori probability optimization to detect photo regions. In the fourth module, methods for edge detection, edge linking, line-segment fitting, and Hough transform are utilized to detect strong edges and lines. In the last module, the resultant text, photo, and edge maps are combined to generate a page layout map using K-Means clustering. The proposed algorithm has been tested on several hundred documents that contain simple and complex page layout structures and contents such as articles, magazines, business cards, dictionaries, and newsletters, and compared against state-of-the-art page-segmentation techniques with benchmark performance. The results indicate that our methodology achieves an average of ~89% classification accuracy in text, photo, and background regions.

  16. Impact of non-specific normal databases on perfusion quantification of low-dose myocardial SPECT studies.

    PubMed

    Scabbio, Camilla; Zoccarato, Orazio; Malaspina, Simona; Lucignani, Giovanni; Del Sole, Angelo; Lecchi, Michela

    2017-10-17

    To evaluate the impact of non-specific normal databases on the percent summed rest score (SR%) and stress score (SS%) from simulated low-dose SPECT studies obtained by shortening the acquisition time/projection. Forty normal-weight and 40 overweight/obese patients underwent myocardial studies with a conventional gamma-camera (BrightView, Philips) using three different acquisition times/projection: 30, 15, and 8 s (100%-counts, 50%-counts, and 25%-counts scans, respectively), reconstructed using the iterative algorithm with resolution recovery (IRR) Astonish (Philips). Three sets of normal databases were used: (1) full-counts IRR; (2) half-counts IRR; and (3) full-counts traditional reconstruction algorithm database (TRAD). The impact of these databases and the acquired count statistics on the SR% and SS% was assessed by ANOVA analysis and the Tukey test (P < 0.05). Significantly higher SR% and SS% values (> 40%) were found for the full-counts TRAD databases with respect to the IRR databases. For overweight/obese patients, significantly higher SS% values for 25%-counts scans (+19%) were confirmed compared to those of 50%-counts scans, regardless of whether the half-counts or the full-counts IRR databases were used. Astonish requires the adoption of its own specific normal databases in order to prevent substantial overestimation of both stress and rest perfusion scores. Conversely, the count statistics of the normal databases seem not to influence the quantification scores.

  17. ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms

    DOT National Transportation Integrated Search

    2001-04-26

    The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...

  18. Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm

    DTIC Science & Technology

    2004-06-01

    Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm, by Canh Ly, ARL-TR-3135, June 2004. Sensors and Electron Devices Directorate, ARL.

  19. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential process that separates an image into regions with similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport, and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function is computed from the gray value and variance of the image's pixels; pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are set to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq, comparing its output with that of the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.
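
    A pixel-merging sketch of the underlying Fast Scanning pass, with a fixed threshold in place of the paper's adaptive one (full cluster-merge bookkeeping is omitted for brevity; the union-find sketch after the next record shows how merges are recorded):

    ```python
    import numpy as np

    def fast_scan(img: np.ndarray, thr: float) -> np.ndarray:
        """Cluster pixels by raster scan, merging with upper/left neighbors."""
        labels = np.zeros(img.shape, dtype=int)
        next_label = 1
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                v = float(img[r, c])
                up = labels[r - 1, c] if r > 0 and abs(v - img[r - 1, c]) <= thr else 0
                left = labels[r, c - 1] if c > 0 and abs(v - img[r, c - 1]) <= thr else 0
                if up or left:
                    labels[r, c] = min(l for l in (up, left) if l)
                else:
                    labels[r, c] = next_label   # start a new cluster
                    next_label += 1
        return labels
    ```

    The enhancement proposed in the paper replaces `thr` with a function of the local gray value and variance, so homogeneous and textured regions are merged under different tolerances.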

  20. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory accesses directly affects the time consumption of labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597
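
    For contrast with the block-based method, here is the classic pixel-based two-pass labeling with union-find that such algorithms accelerate; this is the baseline, not the authors' block-based scan mask:

    ```python
    import numpy as np

    def label(binary: np.ndarray) -> np.ndarray:
        parent = [0]                            # union-find over provisional labels
        def find(a):
            while parent[a] != a:
                a = parent[a]
            return a
        labels = np.zeros(binary.shape, dtype=int)
        for r in range(binary.shape[0]):
            for c in range(binary.shape[1]):
                if not binary[r, c]:
                    continue
                up = labels[r - 1, c] if r else 0
                left = labels[r, c - 1] if c else 0
                if up and left:
                    labels[r, c] = find(up)
                    parent[find(left)] = find(up)   # record label equivalence
                elif up or left:
                    labels[r, c] = up or left
                else:
                    parent.append(len(parent))      # new provisional label
                    labels[r, c] = len(parent) - 1
        for r in range(binary.shape[0]):            # second pass: resolve labels
            for c in range(binary.shape[1]):
                if labels[r, c]:
                    labels[r, c] = find(labels[r, c])
        return labels
    ```

    The block-based algorithm cuts this work down by labeling small pixel blocks at a time and using decision trees to skip neighborhood tests whose outcome is already implied.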

  21. Characterizing the Nature of Scan Results Discussions: Insights Into Why Patients Misunderstand Their Prognosis

    PubMed Central

    Singh, Sarguni; Cortez, Dagoberto; Maynard, Douglas; Cleary, James F.; DuBenske, Lori

    2017-01-01

    Introduction: Patients with incurable cancer have poor prognostic awareness. We present a detailed analysis of the dialogue between oncologists and patients in conversations with prognostic implications. Methods: A total of 128 audio-recorded encounters from a large multisite trial were obtained, and 64 involved scan results. We used conversation analysis, a qualitative method for studying human interaction, to analyze typical patterns and conversational devices. Results: Four components consistently occurred in sequential order: symptom-talk, scan-talk, treatment-talk, and logistic-talk. Six of the encounters (19%) were identified as good news, 15 (45%) as stable news, and 12 (36%) as bad news. The visit duration varied by the type of news: good, 15 minutes (07:00-29:00); stable, 17 minutes (07:00-41:00); and bad, 20 minutes (07:00-28:00). Conversational devices were common, appearing in half of recordings. Treatment-talk occupied 50% of bad-news encounters, 31% of good-news encounters, and 19% of stable-news encounters. Scan-talk occupied less than 10% of all conversations. There were only four instances of frank prognosis discussion. Conclusion: Oncologists and patients are complicit in constructing the typical encounter. Oncologists spend little time discussing scan results and the prognostic implications in favor of treatment-related talk. Conversational devices routinely help transition from scan-talk to detailed discussions about treatment options. We observed an opportunity to create prognosis-talk after scan-talk with a new conversational device, the question “Would you like to talk about what this means?” as the oncologist seeks permission to disclose prognostic information while ceding control to the patient. PMID:28095172

  22. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data as well as visualization of these fused data and a real-time update capability over time for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data adds strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a speed improvement of 4.45-fold for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesion). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.

  23. Optical Scanning for Retrospective Conversion of Information.

    ERIC Educational Resources Information Center

    Hein, Morten

    1986-01-01

    This discussion of the use of optical scanning and computer formatting for retrospective conversion focuses on a series of applications known as Optical Scanning for Creation of Information Databases (OSCID). Prior research in this area and the usefulness of OSCID for creating low-priced machine-readable data representing older materials are…

  24. Development of a control algorithm for the ultrasound scanning robot (NCCUSR) using ultrasound image and force feedback.

    PubMed

    Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi

    2017-06-01

    Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If an ultrasound scanning robot control algorithm were a combination of the two, it could benefit from the advantages of each. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed, and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot, and then the hybrid position-force control algorithm is combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that images acquired using the proposed control method can yield a rating score equivalent to that of images acquired directly by clinicians. The proposed control method can be applied to control the ultrasound scanning robot. However, more work must be completed to verify the proposed control method in order for it to become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.

  25. Angular Superresolution for a Scanning Antenna with Simulated Complex Scatterer-Type Targets

    DTIC Science & Technology

    2002-05-01

    Approved for public release; distribution unlimited. The Scan-MUSIC (MUltiple SIgnal Classification), or SMUSIC, algorithm was developed by the Millimeter...with the use of a single rotatable sensor scanning in an angular region of interest. This algorithm has been adapted and extended from the MUSIC ...simulation.

  26. Assistant for Analyzing Tropical-Rain-Mapping Radar Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document is presented that describes an approach for a Tropical Rain Mapping Radar Data System (TDS). TDS is composed of software and hardware elements incorporating a two-frequency spaceborne radar system for measuring tropical precipitation. The TDS would be used primarily in generating data products for scientific investigations. The most novel part of the TDS would be expert-system software to aid in the selection of algorithms for converting raw radar-return data into such primary observables as rain rate, path-integrated rain rate, and surface backscatter. The expert-system approach would address the issue that selection of algorithms for processing the data requires a significant amount of preprocessing, non-intuitive reasoning, and heuristic application, making it infeasible, in many cases, to select the proper algorithm in real time. In the TDS, tentative selections would be made to enable conversions in real time. The expert system would remove straightforwardly convertible data from further consideration, and would examine ambiguous data, performing analysis in depth to determine which algorithms to select. Conversions performed by these algorithms, presumed to be correct, would be compared with the corresponding real-time conversions. Incorrect real-time conversions would be updated using the correct conversions.

  27. Scanning technology selection impacts acceptability and usefulness of image-rich content.

    PubMed

    Alpi, Kristine M; Brown, James C; Neel, Jennifer A; Grindem, Carol B; Linder, Keith E; Harper, James B

    2016-01-01

    Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions as provided by interlibrary loan or document delivery were viewed as acceptable or useful by radiologists or pathologists. Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With original PDF controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability (n=405), 41% were said to be unacceptable, 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. Hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities.

  28. A masked least-squares smoothing procedure for artifact reduction in scanning-EMG recordings.

    PubMed

    Corera, Íñigo; Eciolaza, Adrián; Rubio, Oliver; Malanda, Armando; Rodríguez-Falces, Javier; Navallas, Javier

    2018-01-11

    Scanning-EMG is an electrophysiological technique in which the electrical activity of the motor unit is recorded at multiple points along a corridor crossing the motor unit territory. Correct analysis of the scanning-EMG signal requires prior elimination of interference from nearby motor units. Although the traditional processing based on median filtering is effective in removing such interference, it distorts the physiological waveform of the scanning-EMG signal. In this study, we describe a new scanning-EMG signal processing algorithm that preserves the physiological signal waveform while effectively removing interference from other motor units. To obtain a cleaned-up version of the scanning signal, the masked least-squares smoothing (MLSS) algorithm recalculates and replaces each sample value of the signal using a least-squares smoothing in the spatial dimension, taking into account the information of only those samples that are not contaminated with activity of other motor units. The performance of the new algorithm is studied with simulated scanning-EMG signals, compared with that of the median algorithm, and tested with real scanning signals. Results show that the MLSS algorithm distorts the waveform of the scanning-EMG signal much less than the median algorithm (approximately 3.5 dB gain), being at the same time very effective at removing interference components. Graphical Abstract The raw scanning-EMG signal (left figure) is processed by the MLSS algorithm in order to remove the artifact interference. Firstly, artifacts are detected from the raw signal, obtaining a validity mask (central figure) that determines the samples that have been contaminated by artifacts. Secondly, a least-squares smoothing procedure in the spatial dimension is applied to the raw signal using the not contaminated samples according to the validity mask. The resulting MLSS-processed scanning-EMG signal (right figure) is clean of artifact interference.
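
    The masked refit is straightforward to sketch in one dimension: fit a local least-squares polynomial using only samples the validity mask marks as clean, and replace the center sample with the fitted value (window size and polynomial order here are illustrative, not the paper's):

    ```python
    import numpy as np

    def mlss_1d(signal, valid, half=5, order=2):
        """Masked least-squares smoothing of a 1D signal.
        valid: boolean mask, True where the sample is artifact-free."""
        out = signal.astype(float).copy()
        for i in range(len(signal)):
            lo, hi = max(0, i - half), min(len(signal), i + half + 1)
            x = np.arange(lo, hi)[valid[lo:hi]]
            if x.size <= order:
                continue                     # too few clean samples; leave as-is
            coeffs = np.polyfit(x - i, signal[x], order)   # fit clean samples only
            out[i] = coeffs[-1]              # polynomial evaluated at the center
        return out
    ```

    In the scanning-EMG setting the fit runs along the spatial (corridor) dimension, so a contaminated recording position is reconstructed from its clean spatial neighbors.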

  29. Complexity of line-seru conversion for different scheduling rules and two improved exact algorithms for the multi-objective optimization.

    PubMed

    Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei

    2016-01-01

    Productivity can be greatly improved by converting a traditional assembly line to a seru system, especially in a business environment with short product life cycles, uncertain product types, and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on seru formation with a given scheduling rule in seru load. We select ten scheduling rules usually used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for ten different scheduling rules from the theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity, respectively. Compared with enumeration based on non-dominated sorting to solve the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.

  30. Optical Coherence Tomography (OCT) Device Independent Intraretinal Layer Segmentation

    PubMed Central

    Ehnes, Alexander; Wenner, Yaroslava; Friedburg, Christoph; Preising, Markus N.; Bowl, Wadim; Sekundo, Walter; zu Bexten, Erdmuthe Meyer; Stieger, Knut; Lorenz, Birgit

    2014-01-01

    Purpose To develop and test an algorithm to segment intraretinal layers irrespectively of the actual Optical Coherence Tomography (OCT) device used. Methods The developed algorithm is based on graph theory optimization. The algorithm's performance was evaluated against that of three expert graders for unsigned boundary position difference and thickness measurement of a retinal layer group in 50 and 41 B-scans, respectively. Reproducibility of the algorithm was tested in 30 C-scans of 10 healthy subjects each with the Spectralis and the Stratus OCT. Comparability between different devices was evaluated in 84 C-scans (volume or radial scans) obtained from 21 healthy subjects, two scans per subject with the Spectralis OCT, and one scan per subject each with the Stratus OCT and the RTVue-100 OCT. Each C-scan was segmented and the mean thickness for each retinal layer in sections of the early treatment of diabetic retinopathy study (ETDRS) grid was measured. Results The algorithm was able to segment up to 11 intraretinal layers. Measurements with the algorithm were within the 95% confidence interval of a single grader and the difference was smaller than the interindividual difference between the expert graders themselves. ETDRS-grid-related layer thicknesses agreed closely across the three OCT devices. The algorithm correctly segmented a C-scan of a patient with X-linked retinitis pigmentosa. Conclusions The segmentation software provides device-independent, reliable, and reproducible analysis of intraretinal layers, similar to what is obtained from expert graders. Translational Relevance Potential application of the software includes routine clinical practice and multicenter clinical trials. PMID:24820053

  31. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

    We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared with that of most existing reconstruction algorithms in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework, is also reported.
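
    As a point of reference for the CSG filter, ordinary Savitzky-Golay smoothing along the slice axis of an evenly spaced B-scan stack takes one call in SciPy; the CSG filter generalizes this to 3D windows and adds the cyclic indicator so smoothing and inter-slice interpolation happen in a single least-squares fit:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    volume = np.random.rand(64, 128, 128)   # (slices, rows, cols) toy B-scan stack
    smoothed = savgol_filter(volume, window_length=7, polyorder=2, axis=0)
    ```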

  32. Multiscale registration algorithm for alignment of meshes

    NASA Astrophysics Data System (ADS)

    Vadde, Srikanth; Kamarthi, Sagar V.; Gupta, Surendra M.

    2004-03-01

    Taking a multi-resolution approach, this research work proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. This algorithm first encases each scan in the pair with an array of cubes of equal and fixed size. For each scan in the pair, a surrogate scan is created from the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find the surrogate corresponding points. If the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, then those two points are accepted as a pair of surrogate corresponding points. The rotation and translation values between the surrogate scan pair are determined by using a set of surrogate corresponding points. Using the same rotation and translation values, the original scan pairs are aligned. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm is terminated. Otherwise, the above process is continued with cubes of smaller and smaller sizes until the algorithm terminates. However, at each finer resolution the search space for finding the surrogate corresponding points is restricted to the regions in the neighborhood of the surrogate points that were found at the preceding coarser level. The surrogate corresponding points, as the resolution becomes finer and finer, converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.

  33. A Novel Real-Time Reference Key Frame Scan Matching Method.

    PubMed

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-05-07

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by a simultaneous localization and mapping (SLAM) approach using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprised of feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typically the case in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computation time, which indicates the potential use of the new algorithm in real-time systems.

  34. Scanning technology selection impacts acceptability and usefulness of image-rich content

    PubMed Central

    Alpi, Kristine M.; Brown, James C.; Neel, Jennifer A.; Grindem, Carol B.; Linder, Keith E.; Harper, James B.

    2016-01-01

    Objective Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions as provided by interlibrary loan or document delivery were viewed as acceptable or useful by radiologists or pathologists. Methods Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With original PDF controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Results Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability (n=405), 41% were said to be unacceptable, 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. Hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). Conclusions PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities. PMID:26807048

  35. Full cycle rapid scan EPR deconvolution algorithm.

    PubMed

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order to not be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor of two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. It is important for practical use that faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes its sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line-shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.

  36. Full cycle rapid scan EPR deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order to not be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor of two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. It is important for practical use that faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes its sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line-shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan.

  17. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics.

    PubMed

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-21

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is designed to solve the blurring of the MAT-MI acoustic source image that is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L (Shepp-Logan) model used in computed tomography, a 3D phantom model of electrical conductivity was set up. Both sphere-scanning and cylinder-scanning geometries were adopted in the computer simulation. Then, using finite element analysis, the distributions of the eddy current, the acoustic source, and the acoustic pressure were obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix and the corresponding reconstruction algorithm were worked out. The acoustic source and conductivity images were reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer were made to evaluate the algorithms. Finally, an experiment was performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm match the source better than those using the previous one: the correlation coefficient is 98.49% for the sphere scanning geometry and 94.96% for the cylinder scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows correlation coefficients of 90.2% in the sphere scanning geometry and 86.35% in the cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can markedly improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the position of the transducer and the scanning geometry on imaging, and it may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
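
    The deblurring step reduces to inverting the transducer model matrix. A minimal sketch of that linear-algebra step, using a synthetic stand-in matrix and a truncated-SVD pseudo-inverse (the matrix, noise level, and truncation threshold are assumptions, not values from the paper):

        # Sketch: invert a (synthetic) transducer model matrix H with a truncated
        # SVD pseudo-inverse to recover the acoustic source from blurred data.
        import numpy as np

        rng = np.random.default_rng(0)
        H = rng.normal(size=(200, 100))        # stand-in transducer model matrix
        x_true = rng.normal(size=100)          # stand-in acoustic source
        y = H @ x_true + 0.01 * rng.normal(size=200)   # blurred, noisy measurement

        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        keep = s > 0.05 * s[0]                 # drop small singular values (regularize)
        H_pinv = Vt.T[:, keep] @ np.diag(1.0 / s[keep]) @ U.T[keep, :]
        x_rec = H_pinv @ y
        print(np.corrcoef(x_true, x_rec)[0, 1])   # correlation with the true source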

  18. Comparing the incomparable? A systematic review of competing techniques for converting descriptive measures of health status into QALY-weights.

    PubMed

    Mortimer, Duncan; Segal, Leonie

    2008-01-01

    Algorithms for converting descriptive measures of health status into quality-adjusted life year (QALY) weights are now widely available, and their application in economic evaluation is increasingly commonplace. The objective of this study is to describe and compare existing conversion algorithms and to highlight issues bearing on the derivation and interpretation of the QALY-weights so obtained. Systematic review of algorithms for converting descriptive measures of health status into QALY-weights. The review identified a substantial body of literature comprising 46 derivation studies and 16 studies that provided evidence or commentary on the validity of conversion algorithms. Conversion algorithms were derived using 1 of 4 techniques: 1) transfer to utility regression, 2) response mapping, 3) effect size translation, and 4) "revaluing" outcome measures using preference-based scaling techniques. Although these techniques differ in their methodological/theoretical tradition, data requirements, and ease of derivation and application, the available evidence suggests that the sensitivity and validity of derived QALY-weights may be more dependent on the coverage and sensitivity of measures and the disease area/patient group under evaluation than on the technique used in derivation. Despite the recent proliferation of conversion algorithms, a number of questions bearing on the derivation and interpretation of derived QALY-weights remain unresolved. These unresolved issues suggest directions for future research in this area. In the meantime, analysts seeking guidance in selecting derived QALY-weights should consider the validity and feasibility of each conversion algorithm in the disease area and patient group under evaluation rather than restricting their choice to weights from a particular derivation technique.
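
    For technique (1), transfer-to-utility regression, the idea is simply to regress observed utility weights on descriptive health-status scores and reuse the fitted map on new patients. A minimal sketch with synthetic data (the instrument scores, coefficients, and 0-1 clipping are illustrative assumptions):

        # Sketch of transfer-to-utility regression on synthetic data.
        import numpy as np

        rng = np.random.default_rng(1)
        scores = rng.uniform(0, 100, size=(300, 3))        # descriptive instrument
        true_beta = np.array([0.003, 0.002, 0.001])
        utility = 0.4 + scores @ true_beta + 0.02 * rng.normal(size=300)

        X = np.column_stack([np.ones(300), scores])        # add an intercept column
        beta, *_ = np.linalg.lstsq(X, utility, rcond=None)

        new_patient = np.array([1.0, 70.0, 55.0, 80.0])    # intercept + 3 scores
        print(np.clip(new_patient @ beta, 0.0, 1.0))       # predicted QALY-weight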

  19. PRF Ambiguity Determination for Radarsat ScanSAR System

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1998-01-01

    PRF ambiguity is a potential problem for a spaceborne SAR operated at high frequencies. For a strip-mode SAR, there are several approaches to solving this problem. This paper, however, addresses PRF ambiguity determination algorithms suitable for a burst-mode SAR system such as the Radarsat ScanSAR. The candidate algorithms include the wavelength diversity algorithm, the range-look cross-correlation algorithm, and the multi-PRF algorithm.
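
    A rough sketch of the multi-PRF idea (a generic version, not the paper's exact algorithm): the Doppler centroid is observed only modulo each PRF, so the true value can be recovered by searching for the candidate consistent with all PRFs. The PRF values and search range below are hypothetical:

        # Generic multi-PRF ambiguity resolution by consistency search.
        import numpy as np

        def resolve_doppler(aliased, prfs, search_hz=20000.0):
            """Return the candidate Doppler that best explains all aliased readings."""
            candidates = np.arange(-search_hz, search_hz, 1.0)
            residual = np.zeros_like(candidates)
            for f_obs, prf in zip(aliased, prfs):
                # wrapped distance between candidate (mod PRF) and the observed alias
                d = np.abs((candidates - f_obs + prf / 2) % prf - prf / 2)
                residual += d ** 2
            return candidates[np.argmin(residual)]

        prfs = [1270.0, 1390.0, 1530.0]                     # hypothetical PRFs
        true_fd = 4321.0
        aliased = [((true_fd + p / 2) % p) - p / 2 for p in prfs]
        print(resolve_doppler(aliased, prfs))               # ~4321 Hz recovered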

  20. Scan-Line Methods in Spatial Data Systems

    DTIC Science & Technology

    1990-09-04

    algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression ...goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding, in which each scan line can be ...independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of
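
    Run-length coding of an individual scan line, as the snippet mentions, is easy to sketch; because each line is compressed independently, the lines can be processed in parallel (toy Python illustration):

        # Run-length coding of one scan line (independent per line, so parallelizable).
        def rle_encode(line):
            """Encode one scan line as (value, run_length) pairs."""
            runs = []
            for pixel in line:
                if runs and runs[-1][0] == pixel:
                    runs[-1][1] += 1
                else:
                    runs.append([pixel, 1])
            return runs

        def rle_decode(runs):
            return [value for value, count in runs for _ in range(count)]

        line = [0, 0, 0, 1, 1, 0, 0, 0, 0]
        print(rle_encode(line))                        # [[0, 3], [1, 2], [0, 4]]
        print(rle_decode(rle_encode(line)) == line)    # True: lossless round trip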

  1. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm ("Fast" and "EMPIRE10"). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
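
    A hedged sketch of the regression step: |ΔD| is modeled as a linear function of d_E, D, and SD_dose. The synthetic data below are generated using the coefficients reported in the abstract, purely to illustrate the model form and its fitting:

        # Fit |dD| ~ d_E + D + SD_dose by least squares on synthetic data.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 1400
        d_E = rng.exponential(5.0, n)            # mm, spatial registration error
        D = rng.uniform(0.0, 60.0, n)            # Gy, planned dose at the point
        SD_dose = rng.uniform(0.0, 2.0, n)       # Gy, dose SD in a neighborhood
        abs_dD = 0.42 * d_E + 0.05 * D + 1.4 * SD_dose + rng.normal(0, 1.0, n)

        X = np.column_stack([np.ones(n), d_E, D, SD_dose])
        coef, *_ = np.linalg.lstsq(X, abs_dD, rcond=None)
        print(coef)   # recovers roughly [0, 0.42, 0.05, 1.4]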

  2. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the process of converting an interlaced scan to a progressive one. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring by recovering high-frequency components through the learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed. In the proposed modular neural network, two modules are combined at the output node. One is for low-frequency features of the local area of the input image, and the other is for high-frequency features. With this structure, each modular neural network can learn different patterns while compensating for the drawbacks of its counterpart. Therefore it can adapt effectively to the various patterns within each region. In simulation, the proposed algorithm shows better performance than conventional deinterlacing methods and a single neural network.
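
    A toy illustration of the routing idea only (a crude gradient-based edge/smooth split and two hand-written "modules" standing in for the trained neural networks; the threshold is an arbitrary assumption):

        # Toy region-routed deinterlacer: classify each missing pixel's
        # neighbourhood as edge or smooth, then use a different interpolator.
        import numpy as np

        def deinterlace(field, threshold=20.0):
            """field: the even lines of a frame; returns a full frame (toy)."""
            h, w = field.shape
            frame = np.zeros((2 * h, w))
            frame[0::2] = field                     # keep the known lines
            for y in range(h - 1):
                above, below = field[y], field[y + 1]
                grad = np.abs(above - below)        # crude edge/smooth split
                smooth = 0.5 * (above + below)      # "smooth-region module"
                edge = above                        # "edge module": repeat line
                frame[2 * y + 1] = np.where(grad > threshold, edge, smooth)
            frame[-1] = field[-1]                   # last missing line: repeat
            return frame

        print(deinterlace(np.tile(np.arange(8.0), (4, 1))).shape)   # (8, 8)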

  3. Short-Scan Fan-Beam Algorithms for CT

    NASA Astrophysics Data System (ADS)

    Naparstek, Abraham

    1980-06-01

    Several short-scan reconstruction algorithms of the convolution type for fan-beam projections are presented and discussed. Their derivation from new, exact integral representation formulas is outlined, and the performance of some of these algorithms is demonstrated with the aid of simulation results.

  4. A Novel Real-Time Reference Key Frame Scan Matching Method

    PubMed Central

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-01-01

    Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping (SLAM) approach, using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point (ICP) algorithm when linear features are lacking, as is typical in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, which indicates its potential for use in real-time systems. PMID:28481285
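
    For reference, a minimal point-to-point ICP iteration of the standard kind the paper falls back on (not the RKF variant itself), using a KD-tree for nearest-neighbour association and the SVD-based rigid alignment; the test scene is synthetic:

        # One standard 2D ICP iteration: associate points, solve rigid alignment.
        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(src, dst):
            idx = cKDTree(dst).query(src)[1]               # nearest-neighbour match
            p, q = src - src.mean(0), dst[idx] - dst[idx].mean(0)
            U, _, Vt = np.linalg.svd(p.T @ q)
            R = (U @ Vt).T
            if np.linalg.det(R) < 0:                       # guard against reflections
                Vt[-1] *= -1
                R = (U @ Vt).T
            t = dst[idx].mean(0) - src.mean(0) @ R.T
            return src @ R.T + t

        rng = np.random.default_rng(3)
        dst = rng.uniform(0, 10, (200, 2))
        ang = 0.1
        R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
        src = (dst - 5) @ R_true.T + 5 + 0.3               # rotated, shifted copy
        for _ in range(20):
            src = icp_step(src, dst)
        print(np.abs(src - dst).mean())                    # small residual after convergence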

  5. Estimating Effective Dose of Radiation From Pediatric Cardiac CT Angiography Using a 64-MDCT Scanner: New Conversion Factors Relating Dose-Length Product to Effective Dose.

    PubMed

    Trattner, Sigal; Chelliah, Anjali; Prinsen, Peter; Ruzal-Shapiro, Carrie B; Xu, Yanping; Jambawalikar, Sachin; Amurao, Maxwell; Einstein, Andrew J

    2017-03-01

    The purpose of this study is to determine the conversion factors that enable accurate estimation of the effective dose (ED) used for cardiac 64-MDCT angiography performed for children. Anthropomorphic phantoms representative of 1- and 10-year-old children, with 50 metal oxide semiconductor field-effect transistor dosimeters placed in organs, underwent scanning performed using a 64-MDCT scanner with different routine clinical cardiac scan modes and x-ray tube potentials. Organ doses were used to calculate the ED on the basis of weighting factors published in 1991 in International Commission on Radiological Protection (ICRP) publication 60 and in 2007 in ICRP publication 103. The EDs and the scanner-reported dose-length products were used to determine conversion factors for each scan mode. The effect of infant heart rate on the ED and the conversion factors was also assessed. The mean conversion factors calculated using the current definition of ED that appeared in ICRP publication 103 were as follows: 0.099 mSv·mGy⁻¹·cm⁻¹ for the 1-year-old phantom and 0.049 mSv·mGy⁻¹·cm⁻¹ for the 10-year-old phantom. These conversion factors were a mean of 37% higher than the corresponding conversion factors calculated using the older definition of ED that appeared in ICRP publication 60. Varying the heart rate did not influence the ED or the conversion factors. Conversion factors determined using the definition of ED in ICRP publication 103 and cardiac, rather than chest, scan coverage suggest that the radiation doses that children receive from cardiac CT performed using a contemporary 64-MDCT scanner are higher than the radiation doses previously reported when older chest conversion factors were used. Additional up-to-date pediatric cardiac CT conversion factors are required for use with other contemporary CT scanners and patients of different age ranges.
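
    Applying such factors is a one-line calculation, ED = k × DLP. A tiny worked example using the two factors reported above (the DLP value of 50 mGy·cm is hypothetical):

        # Effective dose from a dose-length product and a conversion factor.
        def effective_dose_msv(dlp_mgy_cm, k_msv_per_mgy_cm):
            """Convert a dose-length product (mGy*cm) to effective dose (mSv)."""
            return k_msv_per_mgy_cm * dlp_mgy_cm

        # Conversion factors reported in the abstract (ICRP 103 definition):
        k_1yr, k_10yr = 0.099, 0.049
        print(effective_dose_msv(50.0, k_1yr))    # 4.95 mSv, 1-year-old phantom
        print(effective_dose_msv(50.0, k_10yr))   # 2.45 mSv, 10-year-old phantom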

  6. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, which states that the angular super-resolution can be realized by solving the corresponding deconvolution problem with the maximum a posteriori (MAP) criterion. The algorithm considers that the noise is composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets under the assumption that the radar image of interest can be represented by the dominant scatters in the scene. Experimental results demonstrate that the proposed deconvolution algorithm has higher precision for angular super-resolution compared with the conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871

  7. Radiation from CT scans in paediatric trauma patients: Indications, effective dose, and impact on surgical decisions.

    PubMed

    Livingston, Michael H; Igric, Ana; Vogt, Kelly; Parry, Neil; Merritt, Neil H

    2014-01-01

    The purpose of this study was to determine the effective dose of radiation due to computed tomography (CT) scans in paediatric trauma patients at a level 1 Canadian paediatric trauma centre. We also explored the indications for these scans and the actions taken as a result of them. We performed a retrospective review of paediatric trauma patients presenting to our centre from January 1, 2007 to December 31, 2008. All CT scans performed during the initial trauma resuscitation, hospital stay, and 6 months afterwards were included. Effective dose was calculated using the reported dose-length product for each scan and conversion factors specific to body region and patient age. 157 paediatric trauma patients were identified during the 2-year study period. The mean Injury Severity Score was 22.5 (range 12-75). 133 patients received at least one CT scan. The mean number of scans per patient was 2.6 (range 0-16). Most scans resulted in no further action (56%) or additional imaging (32%). A decision to perform a procedure (2%), surgery (8%), or withdrawal of life support (2%) was less common. The average dose per patient was 13.5 mSv, which is 4.5 times the background radiation received by the general population. CT head was the most commonly performed type of scan and was the most likely to be repeated. CT body, defined as a scan of the chest, abdomen, and/or pelvis, was associated with the highest effective dose. CT is a significant source of radiation in paediatric trauma patients. Clinicians should carefully consider the indications for each scan, especially when performing non-resuscitation scans. There is a need for evidence-based treatment algorithms to assist clinicians in selecting appropriate imaging for patients with severe multisystem trauma. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. A new method for solving routing and wavelength assignment problems under inaccurate routing information in optical networks with conversion capability

    NASA Astrophysics Data System (ADS)

    Luo, Yanting; Zhang, Yongjun; Gu, Wanyi

    2009-11-01

    In large dynamic networks it is extremely difficult to maintain accurate routing information on all network nodes. Existing studies have illustrated the impact of imprecise state information on the performance of dynamic routing and wavelength assignment (RWA) algorithms. An algorithm called Bypass Based Optical Routing (BBOR), proposed by Xavier Masip-Bruin et al., can reduce the effects of inaccurate routing information in networks operating under the wavelength-continuity constraint. They later extended the BBOR mechanism (for convenience called the EBBOR mechanism below) to networks with sparse and limited wavelength conversion. However, EBBOR considers the characteristics of wavelength conversion only in the step of computing the bypass-paths, so its performance may decline as the degree of wavelength translation increases (this concept is explained again in the introduction). We demonstrate this issue through theoretical analysis and introduce a novel algorithm that, compared with the EBBOR algorithm, modifies both the lightpath selection and the bypass-path computation. Simulations show that the Modified EBBOR (MEBBOR) algorithm improves the blocking performance significantly in optical networks with conversion capability.

  9. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce radiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive one so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method that converts the non-positive signal to a positive signal mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
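
    A minimal sketch of the second step only (window size and floor value are assumptions): replace each non-positive raw measurement with the mean of the positive values in its neighbourhood, so that the sinogram can be log-transformed:

        # Local-mean replacement of non-positive raw measurements before the log.
        import numpy as np

        def make_positive(raw, window=5, floor=1e-6):
            """Replace raw <= 0 with the mean of nearby positive samples."""
            out = raw.astype(float).copy()
            pad = window // 2
            padded = np.pad(out, pad, mode='edge')
            for i in np.where(out <= 0)[0]:
                local = padded[i:i + window]
                out[i] = local[local > 0].mean() if (local > 0).any() else floor
            return out

        raw = np.array([9.0, 7.0, 0.0, -2.0, 8.0, 6.0])
        print(np.log(make_positive(raw)))    # safe to log-transform now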

  10. The development of line-scan image recognition algorithms for the detection of frass on mature tomatoes

    USDA-ARS?s Scientific Manuscript database

    In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...

  11. RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error.

    PubMed

    Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize

    2018-06-01

    The sub-pixel arrangement of the RGBG panel differs from that of an image in RGB format, so an algorithm that converts RGB to RGBG is urgently needed to display an RGB image on an RGBG panel. However, in the published papers that study this conversion, the information loss is still large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first used to distinguish image details with serious color fringing artifacts from image details that are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, a weighting factor of 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize the square error is proposed. The experiment shows that color fringing artifacts are slightly reduced by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those of images processed by the direct assignment and weighting factor algorithms, respectively. The proposed algorithm is implemented on a field programmable gate array to enable image display on the RGBG panel.
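
    A hedged sketch of the general RGB-to-RGBG idea (not the paper's exact EDMSE rules): an RGBG panel keeps every green sample but shares one red and one blue subpixel between two neighbouring RGB pixels, blended with a weighting factor w:

        # Toy RGB -> RGBG conversion for one image row with a weighting factor.
        import numpy as np

        def rgb_to_rgbg_row(row, w=0.5):
            """row: (n, 3) RGB pixels, n even. Returns per-pair (R, G1, B, G2)."""
            p0, p1 = row[0::2], row[1::2]
            R = w * p0[:, 0] + (1 - w) * p1[:, 0]   # shared red subpixel
            B = w * p0[:, 2] + (1 - w) * p1[:, 2]   # shared blue subpixel
            return np.stack([R, p0[:, 1], B, p1[:, 1]], axis=1)

        row = np.array([[200, 10, 0], [100, 20, 50], [0, 30, 80], [40, 40, 90]],
                       dtype=float)
        print(rgb_to_rgbg_row(row))   # w=0.5 is the anti-fringing choice above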

  12. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, V; Kohli, K

    2015-06-15

    Purpose: A new commercially available metal artifact reduction (MAR) software in computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without MAR software. In addition, an in-house made phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrate similar results for image quality. However, noise was slightly higher for the MAR algorithm. Evaluation of the CT number at various locations of the in-house made phantom was also performed. The baseline HU, obtained from the scan without the metal insert, was compared to scans with the stainless steel insert at 3 different locations. The HU difference between the baseline scan and the metal scan was improved when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, with the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  13. WE-D-18A-04: How Iterative Reconstruction Algorithms Affect the MTFs of Variable-Contrast Targets in CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodge, C.T.; Rong, J.; Dodge, C.W.

    2014-06-15

    Purpose: To determine how filtered back-projection (FBP), adaptive statistical (ASiR), and model based (MBIR) iterative reconstruction algorithms affect the measured modulation transfer functions (MTFs) of variable-contrast targets over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP401 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 24 mGy CTDIvol levels with typical patient scan parameters: 120kVp, 0.8s, 40mm beam width, large SFOV, 2.5mm thickness, 0.984 pitch. The images were reconstructed using GE's Standard kernel with FBP; 20%, 40% and 70% ASiR; and MBIR. A task-based MTF (MTFtask) was computed for six cylindrical targets: 2 low-contrast (Polystyrene, LDPE), 2 medium-contrast (Delrin, PMP), and 2 high-contrast (Teflon, air). MTFtask was used to compare the performance of reconstruction algorithms with decreasing CTDIvol from 24mGy, which is currently used in the clinic. Results: For the air target and 75% dose savings (6 mGy), MBIR MTFtask at 5 lp/cm measured 0.24, compared to 0.20 for 70% ASiR and 0.11 for FBP. Overall, for both high-contrast targets, MBIR MTFtask improved with increasing CTDIvol and consistently outperformed ASiR and FBP near the system's Nyquist frequency. Conversely, for Polystyrene at 6 mGy, MBIR (0.10) and 70% ASiR (0.07) MTFtask was lower than for FBP (0.18). For medium and low-contrast targets, FBP remains the best overall algorithm for improved resolution at low CTDIvol (1–6 mGy) levels, whereas MBIR is comparable at higher dose levels (12–24 mGy). Conclusion: MBIR improved the MTF of small, high-contrast targets compared to FBP and ASiR at doses of 50%–12.5% of those currently used in the clinic. However, for imaging low- and medium-contrast targets, FBP performed the best across all dose levels. For assessing MTF from different reconstruction algorithms, task-based MTF measurements are necessary.

  14. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.

  15. Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery

    NASA Astrophysics Data System (ADS)

    Pozdin, Maksym A.; Skrinjar, Oskar

    2005-04-01

    This paper presents an automated algorithm for extraction of the Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e. finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.
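
    One plausible first step, sketched with synthetic data (not the paper's full pipeline; the volume, void radius, and threshold are assumptions): find the dark spherical voids left by the electrodes by thresholding and labelling connected components:

        # Detect dark spherical voids by thresholding + connected components.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(4)
        vol = rng.uniform(0.5, 1.0, (40, 40, 40))       # synthetic bright tissue
        zz, yy, xx = np.ogrid[:40, :40, :40]
        for c in [(10, 10, 10), (20, 25, 15), (30, 12, 28)]:
            vol[(zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 < 9] = 0.05

        mask = vol < 0.2                                 # dark-void threshold
        labels, n = ndimage.label(mask)
        centers = ndimage.center_of_mass(mask, labels, list(range(1, n + 1)))
        print(n, np.round(centers, 1))                   # 3 voids near the seeds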

  16. Improvement of depth resolution in depth-resolved wavenumber-scanning interferometry using wavenumber-domain least-squares algorithm: comparison and experiment.

    PubMed

    Bai, Yulei; Jia, Quanjie; Zhang, Yun; Huang, Qiquan; Yang, Qiyu; Ye, Shuangli; He, Zhaoshui; Zhou, Yanzhou; Xie, Shengli

    2016-05-01

    It is important to improve the depth resolution in depth-resolved wavenumber-scanning interferometry (DRWSI) owing to the limited range of wavenumber scanning. In this work, a new nonlinear iterative least-squares algorithm called the wavenumber-domain least-squares algorithm (WLSA) is proposed for evaluating the phase of DRWSI. The simulated and experimental results of the Fourier transform (FT), complex-number least-squares algorithm (CNLSA), eigenvalue-decomposition and least-squares algorithm (EDLSA), and WLSA were compared and analyzed. According to the results, the WLSA is less dependent on the initial values, and the depth resolution is improved approximately from δz to δz/6. Thus, the WLSA exhibits a better performance than the FT, CNLSA, and EDLSA.
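
    A hedged sketch of the general wavenumber-domain least-squares idea (the signal model, numbers, and coarse-to-fine initialization below are illustrative, not the WLSA itself): fit the interference signal over the limited wavenumber range directly, which can estimate depth more finely than the Fourier-transform bin width:

        # Fit a + b*cos(2*k*z + phi) over a limited wavenumber scan.
        import numpy as np
        from scipy.optimize import least_squares

        k = np.linspace(7.0, 7.5, 200)                  # limited wavenumber range
        z_true = 52.3                                   # depth, in 1/k units
        signal = 1.0 + 0.8 * np.cos(2 * k * z_true + 0.4)
        signal += 0.02 * np.random.default_rng(5).normal(size=k.size)

        def residual(p):
            a, b, z, phi = p
            return a + b * np.cos(2 * k * z + phi) - signal

        # Coarse correlation search first, to avoid the many local minima in z:
        zs = np.linspace(40, 60, 4001)
        corr = [np.abs(np.sum(signal * np.exp(-2j * k * z))) for z in zs]
        z0 = zs[np.argmax(corr)]

        fit = least_squares(residual, x0=[1.0, 0.5, z0, 0.0])
        print(fit.x[2])   # close to 52.3; FT resolution here is only ~pi/0.5 = 6.3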

  17. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model, which can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  18. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-01

    It is well known that projections acquired over an angular range slightly over 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), short-scan reconstructions may have different appearances and properties from full-scan (360°) reconstructions. Nevertheless, short-scan configurations have been used, owing to the potentially reduced imaging time and dose, in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms, and that the optimization is object- and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate slight differences between FBP and TV-based iterative algorithms in the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. These studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  19. Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy.

    PubMed

    Bian, Junguo; Sharp, Gregory C; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges

    2016-05-07

    It is well known that projections acquired over an angular range slightly over 180° (a so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), short-scan reconstructions may have different appearances and properties from full-scan (360°) reconstructions. Nevertheless, short-scan configurations have been used, owing to the potentially reduced imaging time and dose, in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT), which requires only a small field of view. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms, and that the optimization is object- and task-dependent. The optimal number of views decreases with the total exposure level for both FBP and TV-based algorithms. The results also indicate slight differences between FBP and TV-based iterative algorithms in the image quality trade-off: FBP seems to favor a larger number of views, while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. These studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.

  20. Scanning electron microscope fine tuning using four-bar piezoelectric actuated mechanism

    NASA Astrophysics Data System (ADS)

    Hatamleh, Khaled S.; Khasawneh, Qais A.; Al-Ghasem, Adnan; Jaradat, Mohammad A.; Sawaqed, Laith; Al-Shabi, Mohammad

    2018-01-01

    Scanning electron microscopes are extensively used to acquire accurate micro/nano-scale images. Several strategies have been proposed to fine-tune those microscopes in the past few years. This work presents a new fine-tuning strategy for a scanning electron microscope sample table using four-bar piezoelectric actuated mechanisms. The paper presents an algorithm to find all possible inverse kinematics solutions of the proposed mechanism, and a second algorithm to search for the optimal inverse kinematic solution. Both algorithms are used together in a simulation study to fine-tune the scanning electron microscope sample table along a pre-specified circular or linear path of motion. Results of the study show that the proposed algorithms were able to reduce the power required to drive the piezoelectric actuated mechanism by 97.5% for all simulated paths of motion when compared to the general non-optimized solution.

  1. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  2. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  3. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm

    PubMed Central

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M.; Noureldin, Aboelmagd

    2015-01-01

    This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) alternately under different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP) based scan matching method. The algorithm can work in and transition between the two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy during the whole trajectory. PMID:26389906

  4. Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data

    DOE PAGES

    Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.

    2016-04-06

    An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.
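
    A toy illustration of the lagged-coordinate idea (a synthetic 1D scan; the thresholds are arbitrary assumptions): plotting each sample against a delayed copy of itself makes a coherent plume cluster near the diagonal at high values, while uncorrelated noise scatters diffusely near the origin:

        # Lag-1 coordinates of a synthetic lidar scan: cluster = plume candidate.
        import numpy as np

        rng = np.random.default_rng(6)
        scan = rng.normal(0.0, 0.2, 500)                # background noise
        scan[200:260] += 3.0 + 0.5 * np.sin(np.linspace(0, 3, 60))   # plume

        x, y = scan[:-1], scan[1:]                      # lagged coordinate system
        near_diag = (np.abs(x - y) < 0.5) & (x > 1.0)   # dense, high-value cluster
        plume_idx = np.where(near_diag)[0]
        print(plume_idx.min(), plume_idx.max())         # ~ the 200-260 plume span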

  5. INS/GPS/LiDAR Integrated Navigation System for Urban and Indoor Environments Using Hybrid Scan Matching Algorithm.

    PubMed

    Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd

    2015-09-15

    This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) alternately under different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP) based scan matching method. The algorithm can work in and transition between the two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) for both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy during the whole trajectory.

  6. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range.

    PubMed

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-02-03

    For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms for this kind of problem. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm.
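
    A brute-force grid check of what the BS algorithm computes analytically (this sampling approach is only for illustration, not the BS algorithm itself): a sensor's limited Voronoi cell is its sensing disk clipped by the bisector half-planes toward its neighbours:

        # Grid approximation of sensor 0's limited Voronoi cell.
        import numpy as np

        sensors = np.array([[0.0, 0.0], [1.2, 0.1], [0.3, 1.1], [-0.9, 0.8]])
        r = 1.0                                          # limited sensing range

        xs = np.linspace(-1.5, 1.5, 301)
        X, Y = np.meshgrid(xs, xs)
        pts = np.stack([X.ravel(), Y.ravel()], axis=1)
        d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
        # Inside the disk of sensor 0 AND closer to sensor 0 than to any neighbour:
        cell0 = (d[:, 0] <= r) & (np.argmin(d, axis=1) == 0)
        area = cell0.mean() * (3.0 * 3.0)                # cell fraction * box area
        print(round(area, 3))                            # area of the limited cell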

  7. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range

    PubMed Central

    Feng, Zuren; Ren, Zhigang

    2018-01-01

    For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms for this kind of problem. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm. PMID:29401649

  8. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the less-diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with the line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
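
    The detection rule itself is a simple per-pixel band ratio. A minimal sketch with synthetic images (the intensities and the 0.9 cutoff are hypothetical, not the paper's values):

        # Two-waveband ratio rule: flag pixels where I(666 nm)/I(688 nm) is high.
        import numpy as np

        rng = np.random.default_rng(7)
        i666 = rng.uniform(40, 60, (50, 50))            # synthetic leaf image
        i688 = rng.uniform(80, 100, (50, 50))
        i666[20:25, 20:25] = 95.0                       # synthetic fecal spot
        ratio = i666 / np.maximum(i688, 1e-6)
        fecal_mask = ratio > 0.9                        # hypothetical threshold
        print(fecal_mask.sum())                         # ~25 flagged pixels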

  9. Computer-based radiological longitudinal evaluation of meningiomas following stereotactic radiosurgery.

    PubMed

    Shimol, Eli Ben; Joskowicz, Leo; Eliahou, Ruth; Shoshan, Yigal

    2018-02-01

    Stereotactic radiosurgery (SRS) is a common treatment for intracranial meningiomas. SRS is planned on a pre-therapy gadolinium-enhanced T1-weighted MRI scan (Gd-T1w MRI) in which the meningioma contours have been delineated. Post-SRS therapy serial Gd-T1w MRI scans are then acquired for longitudinal treatment evaluation. Accurate tumor volume change quantification is required for treatment efficacy evaluation and for treatment continuation. We present a new algorithm for the automatic segmentation and volumetric assessment of meningioma in post-therapy Gd-T1w MRI scans. The inputs are the pre- and post-therapy Gd-T1w MRI scans and the meningioma delineation in the pre-therapy scan. The output is the meningioma delineations and volumes in the post-therapy scan. The algorithm uses the pre-therapy scan and its meningioma delineation to initialize an extended Chan-Vese active contour method and as a strong patient-specific intensity and shape prior for the post-therapy scan meningioma segmentation. The algorithm is automatic, obviates the need for independent tumor localization and segmentation initialization, and incorporates the same tumor delineation criteria in both the pre- and post-therapy scans. Our experimental results on retrospective pre- and post-therapy scans with a total of 32 meningiomas with volumes in the range of 0.4-26.5 cm³ yield a Dice coefficient of [Formula: see text]% with respect to ground-truth delineations in post-therapy scans created by two clinicians. These results indicate a high correspondence to the ground-truth delineations. Our algorithm yields more reliable and accurate tumor volume change measurements than other stand-alone segmentation methods. It may be a useful tool for quantitative meningioma prognosis evaluation after SRS.

  10. A fast image simulation algorithm for scanning transmission electron microscopy.

    PubMed

    Ophus, Colin

    2017-01-01

    Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.

  11. A fast image simulation algorithm for scanning transmission electron microscopy

    DOE PAGES

    Ophus, Colin

    2017-05-10

    Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f⁴ compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.

  12. Image reconstruction algorithm for optically stimulated luminescence 2D dosimetry using laser-scanned Al2O3:C and Al2O3:C,Mg films

    NASA Astrophysics Data System (ADS)

    Ahmed, M. F.; Schnell, E.; Ahmad, S.; Yukihara, E. G.

    2016-10-01

    The objective of this work was to develop an image reconstruction algorithm for 2D dosimetry using Al2O3:C and Al2O3:C,Mg optically stimulated luminescence (OSL) films imaged using a laser scanning system. The algorithm takes into account parameters associated with detector properties and the readout system. Pieces of Al2O3:C films (~8 mm × 8 mm × 125 µm) were irradiated and used to simulate dose distributions with extreme dose gradients (zero and non-zero dose regions). The OSLD film pieces were scanned using a custom-built laser-scanning OSL reader and the data obtained were used to develop and demonstrate a dose reconstruction algorithm. The algorithm includes corrections for: (a) galvo hysteresis, (b) photomultiplier tube (PMT) linearity, (c) phosphorescence, (d) 'pixel bleeding' caused by the 35 ms luminescence lifetime of F-centers in Al2O3, (e) geometrical distortion inherent to the galvo scanning system, and (f) position dependence of the light collection efficiency. The algorithm was also applied to 6.0 cm × 6.0 cm × 125 µm or 10.0 cm × 10.0 cm × 125 µm Al2O3:C and Al2O3:C,Mg films exposed to megavoltage x-rays (6 MV) and ¹²C beams (430 MeV u⁻¹). The results obtained using pieces of irradiated films show the ability of the image reconstruction algorithm to correct for pixel bleeding even in the presence of extremely sharp dose gradients. Corrections for geometric distortion and position dependence of light collection efficiency were shown to minimize characteristic limitations of this system design. We also exemplify the application of the algorithm to the more clinically relevant 6 MV x-ray beam and a ¹²C pencil beam, demonstrating the potential for small field dosimetry. The image reconstruction algorithm described here provides the foundation for laser-scanned OSL applied to 2D dosimetry.
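
    A hedged sketch of a pixel-bleeding correction under an assumed model (the per-pixel dwell time is hypothetical): if each pixel's luminescence decays exponentially into later samples along the scan line, the measurement obeys m[i] = s[i] + a·m[i-1] with a = exp(-Δt/τ), and the true line is recovered exactly by s[i] = m[i] - a·m[i-1]:

        # Exponential pixel-bleeding model and its exact one-step inverse.
        import numpy as np

        tau, dt = 35e-3, 5e-3                   # F-center lifetime; assumed dwell time
        a = np.exp(-dt / tau)

        def bleed(s):                            # forward model: exponential tail
            m = np.zeros_like(s)
            for i in range(len(s)):
                m[i] = s[i] + (a * m[i - 1] if i else 0.0)
            return m

        def unbleed(m):                          # exact inverse of the model above
            return m - a * np.concatenate([[0.0], m[:-1]])

        s = np.array([0.0, 0.0, 10.0, 10.0, 0.0, 0.0, 0.0])
        print(bleed(s))                          # tail trailing the bright pixels
        print(unbleed(bleed(s)))                 # recovers s, even at sharp gradients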

  13. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    NASA Astrophysics Data System (ADS)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    The coaxial laser inside-wire cladding method is very promising, as it offers high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height and cross-section area) and the process parameters (laser power, scanning velocity and wire feeding speed). Over the selected parameter ranges, the values predicted by the models are compared with the experimentally measured results; minor errors exist, but both reflect the same regularities. The models show that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-section area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from known process parameters; conversely, the process parameters can be calculated from targeted geometry characteristics. The models are also suitable for the multi-layer forming process. Using optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.
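
    The kind of regression model described can be sketched as an ordinary least-squares fit; the sample data, the model form (quadratic in scanning velocity to capture the rise-then-fall of the width) and the resulting coefficients below are illustrative, not the paper's.

        import numpy as np

        # Hypothetical records: laser power P (W), scanning velocity v (mm/s),
        # wire feeding speed vf (mm/s), measured layer width W (mm).
        P  = np.array([800.0, 900, 1000, 1100, 1200, 1000, 1000, 1000])
        v  = np.array([  4.0,   4,    4,    4,    4,    2,    6,    8])
        vf = np.array([ 10.0,  10,   10,   10,   10,   10,   10,   10])
        W  = np.array([ 1.9,  2.1,  2.3,  2.5,  2.7,  2.6,  2.2,  1.8])

        # W ~ b0 + b1*P + b2*vf + b3*v + b4*v**2
        X = np.column_stack([np.ones_like(P), P, vf, v, v ** 2])
        coef, *_ = np.linalg.lstsq(X, W, rcond=None)

        def predict_width(p, feed, vel):
            """Predict layer width from process parameters (same model form)."""
            return float(coef @ np.array([1.0, p, feed, vel, vel ** 2]))

        print(predict_width(1000, 10, 5))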

  14. [Reconstruction of Vehicle-human Crash Accident and Injury Analysis Based on 3D Laser Scanning, Multi-rigid-body Reconstruction and Optimized Genetic Algorithm].

    PubMed

    Sun, J; Wang, T; Li, Z D; Shao, Y; Zhang, Z Y; Feng, H; Zou, D H; Chen, Y J

    2017-12-01

    To reconstruct a vehicle-bicycle-cyclist crash accident and analyse the injuries using 3D laser scanning technology, multi-rigid-body dynamics and an optimized genetic algorithm, and to provide a biomechanical basis for the forensic identification of the cause of death. The vehicle was measured by 3D laser scanning technology. Multi-rigid-body models of the cyclist, bicycle and vehicle were developed based on the measurements. The value ranges of the optimization variables were set. A multi-objective genetic algorithm and the nondominated sorting genetic algorithm were used to find the optimal solutions, which were compared to the recording of the surveillance video around the accident scene. The reconstruction result of laser scanning of the vehicle was satisfactory. In the optimal solutions found by the genetic algorithm, the dynamic behaviours of the dummy, bicycle and vehicle corresponded to those recorded by the surveillance video. The injury parameters of the dummy were consistent with the condition and position of the real injuries on the cyclist in the accident. The motion status before the accident, the crash damage process and the mechanical analysis of the victim's injuries can be reconstructed using 3D laser scanning technology, multi-rigid-body dynamics and an optimized genetic algorithm, which have application value in the identification of injury manner and the analysis of the cause of death in traffic accidents.

  15. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.

    PubMed

    Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2010-11-08

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-scans provide physicians with additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied to scans from both normal healthy subjects and glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostically relevant parameters from OCT B-scans for glaucoma diagnosis.
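
    The gradient-plus-smoothing energy at the core of the method can be minimized, in simplified 1D form, by dynamic programming over A-scan columns; the sketch below is a minimal stand-in for the authors' full energy functional and optimizer.

        import numpy as np

        def trace_boundary(gradient, smooth=0.5, max_jump=2):
            """Trace one layer boundary through a B-scan.

            gradient: 2D array (depth x A-scans) of edge strength. Minimizes
            E = sum(-gradient[y[x], x]) + smooth * sum((y[x] - y[x-1])**2)
            by dynamic programming, allowing jumps of up to max_jump pixels
            between neighbouring A-scans.
            """
            depth, width = gradient.shape
            cost = np.full((depth, width), np.inf)
            back = np.zeros((depth, width), dtype=int)
            cost[:, 0] = -gradient[:, 0]
            for x in range(1, width):
                for y in range(depth):
                    lo, hi = max(0, y - max_jump), min(depth, y + max_jump + 1)
                    prev = cost[lo:hi, x - 1] + smooth * (np.arange(lo, hi) - y) ** 2
                    k = int(np.argmin(prev))
                    cost[y, x] = -gradient[y, x] + prev[k]
                    back[y, x] = lo + k
            y = int(np.argmin(cost[:, -1]))
            path = [y]
            for x in range(width - 1, 0, -1):
                y = back[y, x]
                path.append(y)
            return np.array(path[::-1])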

  16. Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients

    PubMed Central

    Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2010-01-01

    Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-scans provide physicians with additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied to scans from both normal healthy subjects and glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostically relevant parameters from OCT B-scans for glaucoma diagnosis. PMID:21258556

  17. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongzhang, R.; Xiao, B.; Lardner, T.

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band; a defect is taken to be present where all frequency components are of uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture was applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances the SNR by 20 dB for both samples and, consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
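
    A common polarity-thresholding reading of the sign-uniformity test is sketched below: the A-scan is reconstructed in several ascending sub-bands by FFT masking, samples where all sub-band reconstructions agree in sign are kept, and the rest are zeroed before averaging. The band edges and the exact test used by the authors are assumptions.

        import numpy as np

        def polarity_profile(a_scan, bands, fs):
            """Defect-probability profile from sub-band sign coherence."""
            n = len(a_scan)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            spectrum = np.fft.rfft(a_scan)
            recons = []
            for f_lo, f_hi in bands:          # ascending series of bands (Hz)
                mask = (freqs >= f_lo) & (freqs <= f_hi)
                recons.append(np.fft.irfft(spectrum * mask, n))
            recons = np.array(recons)
            uniform = np.all(recons > 0, axis=0) | np.all(recons < 0, axis=0)
            return recons.mean(axis=0) * uniform   # zero where signs disagree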

  18. Exact BPF and FBP algorithms for nonstandard saddle curves.

    PubMed

    Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge

    2005-11-01

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. In particular, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both algorithms exhibit similar noise characteristics.

  19. Total body irradiation, toward optimal individual delivery: dose evaluation with metal oxide field effect transistors, thermoluminescence detectors, and a treatment planning system.

    PubMed

    Bloemen-van Gurp, Esther J; Mijnheer, Ben J; Verschueren, Tom A M; Lambin, Philippe

    2007-11-15

    To predict the three-dimensional dose distribution of our total body irradiation technique using a commercial treatment planning system (TPS). In vivo dosimetry, using metal oxide field effect transistors (MOSFETs) and thermoluminescence detectors (TLDs), was used to verify the calculated dose distributions. A total body computed tomography scan was performed and loaded into our TPS, and a three-dimensional dose distribution was generated. In vivo dosimetry was performed at five locations on the patient. Entrance and exit dose values were converted to midline doses using conversion factors previously determined with phantom measurements. The TPS-predicted dose values were compared with the MOSFET and TLD in vivo dose values. The MOSFET and TLD dose values agreed within 3.0% and the MOSFET and TPS data within 0.5%. The convolution algorithm of the TPS, which is routinely applied in the clinic, overestimated the dose in the lung region. Using a superposition algorithm reduced the calculated lung dose by approximately 3%. The dose inhomogeneity, as predicted by the TPS, can be reduced using a simple intensity-modulated radiotherapy technique. The use of a TPS to calculate the dose distributions in individual patients during total body irradiation is strongly recommended. Using a TPS gives good insight into the over- and underdosage in a patient and the influence of patient positioning on dose homogeneity. MOSFETs are suitable for in vivo dosimetry purposes during total body irradiation when appropriate conversion factors are used. The MOSFET, TLD, and TPS results agreed within acceptable margins.

  20. A reconstruction algorithm for helical CT imaging on PI-planes.

    PubMed

    Liang, Hongzhu; Zhang, Cishen; Yan, Ming

    2006-01-01

    In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large cone angle scanning, it is proposed to reconstruct the object point by point on unique customized tilted PI-planes which are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress the artifacts in cone-angle scanning. Computer simulations show that the proposed algorithm can provide improved imaging performance compared with existing approximate cone-beam reconstruction algorithms.

  1. WE-G-18A-08: Axial Cone Beam DBPF Reconstruction with Three-Dimensional Weighting and Butterfly Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, S; Wang, W; Tang, X

    2014-06-15

    Purpose: With the major benefit of dealing with data truncation for ROI reconstruction, the algorithm of differentiated backprojection followed by Hilbert filtering (DBPF) is originally derived for image reconstruction from parallel- or fan-beam data. To extend its application to axial CB scans, we proposed the integration of the DBPF algorithm with 3-D weighting. In this work, we further propose the incorporation of Butterfly filtering into the 3-D weighted axial CB-DBPF algorithm and conduct an evaluation to verify its performance. Methods: Given an axial scan, tomographic images are reconstructed by the DBPF algorithm with 3-D weighting, in which streak artifacts exist along the direction of Hilbert filtering. Recognizing this orientation-specific behavior, a pair of orthogonal Butterfly filters is applied on the images reconstructed with horizontal and vertical Hilbert filtering, correspondingly. In addition, the Butterfly filtering can also be utilized for streak artifact suppression in scenarios wherein only partial scan data with an angular range as small as 270° are available. Results: Preliminary data show that, with the correspondingly applied Butterfly filtering, the streak artifacts existing in the images reconstructed by the 3-D weighted DBPF algorithm can be suppressed to an unnoticeable level. Moreover, the Butterfly filtering also works in the partial scan scenarios, though the 3-D weighting scheme may have to be dropped because insufficient projection data are available. Conclusion: As an algorithmic step, the incorporation of Butterfly filtering enables the DBPF algorithm for CB image reconstruction from data acquired along either a full or partial axial scan.

  2. Experiences on developing digital down conversion algorithms using Xilinx system generator

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Yuan, Yuan; Zhao, Lizhi

    2013-07-01

    The Digital Down Conversion (DDC) algorithm is a classical signal processing method which is widely used in radar and communication systems. In this paper, the DDC function is implemented with the Xilinx System Generator tool on an FPGA. System Generator is an FPGA design tool provided by Xilinx Inc. and MathWorks Inc. It is very convenient for programmers to manipulate the design and debug the function, especially for complex algorithms. The development process of the DDC function based on System Generator shows that System Generator is a very fast and efficient tool for FPGA design.
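
    The underlying signal flow of a basic DDC (numerically controlled oscillator, low-pass filter, decimator) can be sketched in a few lines; the tap count, window and rates below are illustrative, and this NumPy version only mirrors what a System Generator block diagram would implement in hardware.

        import numpy as np

        def ddc(samples, fs, f_center, decim, num_taps=129):
            """Digital down conversion: mix to baseband, low-pass, decimate."""
            n = np.arange(len(samples))
            # NCO: shift the band of interest down to 0 Hz.
            baseband = samples * np.exp(-2j * np.pi * f_center / fs * n)
            # Windowed-sinc low-pass with cutoff at the decimated Nyquist rate.
            cutoff = 0.5 / decim
            t = np.arange(num_taps) - (num_taps - 1) / 2
            taps = 2 * cutoff * np.sinc(2 * cutoff * t) * np.hamming(num_taps)
            filtered = np.convolve(baseband, taps, mode="same")
            return filtered[::decim]

        # Example: a 2 MHz tone sampled at 10 MHz, decimated by 8.
        fs = 10e6
        t = np.arange(4096) / fs
        out = ddc(np.cos(2 * np.pi * 2e6 * t), fs, f_center=2e6, decim=8)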

  3. A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.

    PubMed

    Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-04-07

    Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol. The overlapping protocol provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially match the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process was continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume was produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
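
    The core of the daisy-chaining step, selecting the cine image whose overlapping slice best matches the current reference slice by NCC, can be sketched as follows (function names are illustrative):

        import numpy as np

        def ncc(a, b):
            """Normalized cross correlation of two equally sized slices."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))

        def pick_matching_phase(ref_overlap_slice, candidate_slices):
            """Index of the adjacent couch position's cine image whose
            overlapping slice maximizes NCC with the reference slice."""
            scores = [ncc(ref_overlap_slice, c) for c in candidate_slices]
            return int(np.argmax(scores))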

  4. TU-F-BRF-03: Effect of Radiation Therapy Planning Scan Registration On the Dose in Lung Cancer Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, A; Contee, C; White, B

    Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pre-therapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66 Gy (range: 1.2–7.2 Gy). |ΔD| was significantly reduced by 0.53 Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39 Gy/mm) and SD-dose (2.23 Gy/Gy). Conclusion: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and the dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.

  5. Automated coronary artery calcification detection on low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Cham, Matthew D.; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    Coronary artery calcification (CAC) measurement from low-dose CT images can be used to assess the risk of coronary artery disease. A fully automatic algorithm to detect and measure CAC from low-dose non-contrast, non-ECG-gated chest CT scans is presented. Based on the automatically detected CAC, the Agatston score (AS), mass score and volume score were computed. These were compared with scores obtained manually from standard-dose ECG-gated scans and low-dose un-gated scans of the same patient. The automatic algorithm segments the heart region based on other pre-segmented organs to provide a coronary region mask. The mitral valve and aortic valve calcification is identified and excluded. All remaining voxels greater than 180 HU within the mask region are considered CAC candidates. The heart segmentation algorithm was evaluated on 400 non-contrast cases with both low-dose and regular dose CT scans. By visual inspection, 371 (92.8%) of the segmentations were acceptable. The automated CAC detection algorithm was evaluated on 41 low-dose non-contrast CT scans. Manual markings were performed on both low-dose and standard-dose scans for these cases. Using linear regression, the correlation of the automatic AS with the standard-dose manual scores was 0.86; with the low-dose manual scores the correlation was 0.91. Standard risk categories were also computed. The automated method risk category agreed with manual markings of gated scans for 24 cases while 15 cases were 1 category off. For low-dose scans, the automatic method agreed with 33 cases while 7 cases were 1 category off.
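
    For reference, the standard Agatston scoring that such detected voxels feed into can be sketched as below; note that the 180 HU figure above is a candidate threshold inside the coronary mask, while the classical Agatston weighting uses 130 HU bins.

        import numpy as np

        def density_weight(max_hu):
            """Classical Agatston density factor from a lesion's peak HU."""
            if max_hu >= 400:
                return 4
            if max_hu >= 300:
                return 3
            if max_hu >= 200:
                return 2
            if max_hu >= 130:
                return 1
            return 0

        def agatston_score(lesions, pixel_area_mm2):
            """Sum of lesion area (mm^2) times density weight over lesions.

            lesions: list of HU arrays, one per connected calcification.
            """
            score = 0.0
            for hu in lesions:
                hu = np.asarray(hu)
                score += hu.size * pixel_area_mm2 * density_weight(float(hu.max()))
            return score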

  6. Exact BPF and FBP algorithms for nonstandard saddle curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Hengyong; Zhao Shiying; Ye Yangbo

    2005-11-15

    A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. In particular, a nonstandard saddle curve attracts attention, as this construct allows the continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. Then, a rebinning procedure is utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both algorithms exhibit similar noise characteristics.

  7. The improved Apriori algorithm based on matrix pruning and weight analysis

    NASA Astrophysics Data System (ADS)

    Lang, Zhenhong

    2018-04-01

    This paper draws on matrix compression and weight analysis algorithms and proposes an improved matrix pruning and weight analysis Apriori algorithm. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the 1s in the rows and columns of the matrix, the infrequent item sets are pruned and a new candidate item set is formed. Then the item weights, the transaction weights, and the weighted support of the items are calculated, and thus the frequent item sets are obtained. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
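
    The single-scan boolean-matrix idea can be sketched as follows (the paper's weighting step is omitted here): column sums prune infrequent items, and AND-ing columns counts candidate pairs.

        import numpy as np
        from itertools import combinations

        def frequent_itemsets(transactions, n_items, min_support):
            """Matrix-based Apriori sketch: one database scan builds a boolean
            transaction matrix; column sums prune infrequent 1-itemsets and
            column ANDs count candidate 2-itemsets."""
            m = np.zeros((len(transactions), n_items), dtype=bool)
            for r, items in enumerate(transactions):
                m[r, list(items)] = True
            keep = np.flatnonzero(m.sum(axis=0) >= min_support)
            frequent = {(int(i),): int(m[:, i].sum()) for i in keep}
            for a, b in combinations(keep, 2):
                count = int((m[:, a] & m[:, b]).sum())
                if count >= min_support:
                    frequent[(int(a), int(b))] = count
            return frequent

        print(frequent_itemsets([{0, 1, 2}, {0, 2}, {1, 2}, {0, 1}], 3, 2))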

  8. Application of scanning laser Doppler vibrometry for delamination detection in composite structures

    NASA Astrophysics Data System (ADS)

    Kudela, Pawel; Wandowski, Tomasz; Malinowski, Pawel; Ostachowicz, Wieslaw

    2017-12-01

    In this paper, the application of scanning laser Doppler vibrometry for delamination detection in composite structures is presented. Delamination detection was based on a guided wave propagation method. Results from both numerical and experimental research are presented. In the numerical research, the Spectral Element Method (SEM) was utilized, with a mesh composed of 3D spectral elements; the SEM model also included a piezoelectric transducer. In the experimental research, guided waves were excited using the piezoelectric transducer, whereas the sensing was conducted using a scanning laser Doppler vibrometer (SLDV). Analysis of guided wave propagation and its interaction with delamination was based on a full wavefield approach. Attention was focused on interactions of guided waves with delamination manifested by A0 mode reflection, A0 mode entrapment, and S0/A0 mode conversion. Delamination was simulated by a teflon insert placed between plies of the composite material. Results of interaction with symmetrically and nonsymmetrically placed delaminations (with respect to the composite sample thickness) are presented, and different delamination sizes were investigated. Damage detection was based on a new signal processing algorithm proposed by the authors, in which the weighted RMS is applied selectively: the summation in the RMS formula is performed only over specially selected time instants. Results for simple composite panels, a panel with a honeycomb core, and a real stiffened composite panel from an aircraft are presented.
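
    The selectively weighted RMS can be written down directly; the weighting scheme and the time-instant selection below are placeholders for the authors' choices.

        import numpy as np

        def weighted_rms_map(wavefield, t_select, weights=None):
            """Damage map from a full wavefield (time x measurement points).

            Sums only over the selected time instants, optionally weighting
            each sample: WRMS[p] = sqrt(sum_k w[k]*u[t_k, p]**2 / sum_k w[k]).
            """
            u = np.asarray(wavefield)[np.asarray(t_select), :]
            w = np.ones(len(t_select)) if weights is None else np.asarray(weights, float)
            return np.sqrt((w[:, None] * u ** 2).sum(axis=0) / w.sum())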

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, K; O’Dwyer, R; Bradford, T

    Purpose: To reduce differences in features calculated from MRI brain scans acquired at different field strengths with or without gadolinium contrast. Methods: Brain scans were processed for 111 epilepsy patients to extract hippocampus and thalamus features. Scans were acquired on 1.5 T scanners with gadolinium contrast (group A), 1.5 T scanners without Gd (group B), and 3.0 T scanners without Gd (group C). A total of 72 features were extracted. Features were extracted from the original scans and from scans where the image pixel values were rescaled to the mean of the hippocampi and thalami values. For each data set, cluster analysis was performed on the raw feature set and on feature sets with normalization (conversion to Z scores). Two methods of normalization were used: the first over all values of a given feature, and the second normalizing within the patient group membership. The clustering software was configured to produce 3 clusters. Group fractions in each cluster were calculated. Results: For features calculated from both the non-rescaled and rescaled data, cluster membership was identical for both the non-normalized and normalized data sets. Cluster 1 was comprised entirely of group A data, Cluster 2 contained data from all three groups, and Cluster 3 contained data from only groups 1 and 2. For the categorically normalized data sets there was a more uniform distribution of group data in the three clusters. A less pronounced effect was seen in the rescaled image data features. Conclusion: Image rescaling and feature renormalization can have a significant effect on the results of clustering analysis. These effects are also likely to influence the results of supervised machine learning algorithms. It may be possible to partly remove the influence of scanner field strength and the presence of gadolinium-based contrast in feature extraction for radiomics applications.
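
    The two normalization modes compared above (global z-scoring versus z-scoring within each acquisition group) amount to the following sketch; variable names are illustrative.

        import numpy as np

        def zscore(values):
            """Normalize a feature over all subjects (global z-score)."""
            v = np.asarray(values, float)
            return (v - v.mean()) / v.std()

        def zscore_by_group(values, groups):
            """Normalize a feature within each acquisition group (A, B, C),
            removing scanner/contrast offsets before clustering."""
            v = np.asarray(values, float)
            g = np.asarray(groups)
            out = np.empty_like(v)
            for label in np.unique(g):
                idx = g == label
                out[idx] = (v[idx] - v[idx].mean()) / v[idx].std()
            return out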

  10. Human abdomen recognition using camera and force sensor in medical robot system for automatic ultrasound scan.

    PubMed

    Bin Mustafa, Ammar Safwan; Ishii, Takashi; Matsunaga, Yoshiki; Nakadate, Ryu; Ishii, Hiroyuki; Ogawa, Kouji; Saito, Akiko; Sugawara, Motoaki; Niki, Kiyomi; Takanishi, Atsuo

    2013-01-01

    Physicians use ultrasound scans to obtain real-time images of internal organs, because such scans are safe and inexpensive. However, people in remote areas face difficulties in getting scanned owing to the aging society and the shortage of physicians. Hence, it is important to develop an autonomous robotic system to perform remote ultrasound scans. Previously, we developed a robotic system for automatic ultrasound scanning focused on the human liver. In order to make it a completely autonomous system, we present in this paper a way to autonomously localize the epigastric region as the starting position for the automatic ultrasound scan. An image processing algorithm marks the umbilicus and mammary papillae on a digital photograph of the patient's abdomen. The location of the epigastric region is then estimated using the distances between these landmarks. A supporting algorithm distinguishes the rib position from the epigastrium using the relationship between force and displacement. We implemented these algorithms with the automatic scanning system in an apparatus: a Mitsubishi Electric MELFA RV-1 six-axis manipulator. Tests on 14 healthy male subjects showed that the apparatus located the epigastric region with a success rate of 94%. The results suggest that image recognition was effective in localizing a human body part.
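
    A landmark-based estimate of this kind can be sketched geometrically; the rule and the 0.6 interpolation factor below are entirely hypothetical stand-ins for the paper's distance-based estimation.

        import numpy as np

        def estimate_epigastrium(umbilicus, papilla_left, papilla_right, frac=0.6):
            """Hypothetical landmark rule: take the midpoint between the
            mammary papillae and move a fraction of the way toward the
            umbilicus along the midline (frac is an assumed parameter)."""
            midpapilla = (np.asarray(papilla_left, float) +
                          np.asarray(papilla_right, float)) / 2.0
            return midpapilla + frac * (np.asarray(umbilicus, float) - midpapilla)

        # Example with pixel coordinates from an abdominal photograph.
        print(estimate_epigastrium([320, 400], [260, 180], [380, 180]))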

  11. Scan converting video tape recorder

    NASA Technical Reports Server (NTRS)

    Holt, N. I. (Inventor)

    1971-01-01

    A video tape recorder is disclosed of sufficient bandwidth to record monochrome television signals or standard NTSC field sequential color at current European and American standards. The system includes scan conversion means for instantaneous playback at scanning standards different from those at which the recording is being made.

  12. Asteroid detection using a single multi-wavelength CCD scan

    NASA Astrophysics Data System (ADS)

    Melton, Jonathan

    2016-09-01

    Asteroid detection is a topic of great interest due to the possibility of diverting possibly dangerous asteroids or mining potentially lucrative ones. Currently, asteroid detection is generally performed by taking multiple images of the same patch of sky separated by 10-15 minutes, then subtracting the images to find movement. However, this is time consuming because of the need to revisit the same area multiple times per night. This paper describes an algorithm that can detect asteroids using a single CCD camera scan, thus cutting down on the time and cost of an asteroid survey. The algorithm is based on the fact that some telescopes scan the sky at multiple wavelengths with a small time separation between the wavelength components. As a result, an object moving with sufficient speed will appear in different places in different wavelength components of the same image. Using image processing techniques we detect the centroids of points of light in the first component and compare these positions to the centroids in the other components using a nearest neighbor algorithm. The algorithm was used on a test set of 49 images obtained from the Sloan telescope in New Mexico and found 100% of known asteroids with only 3 false positives. This algorithm has the advantage of decreasing the amount of time required to perform an asteroid scan, thus allowing more sky to be scanned in the same amount of time or freeing a telescope for other pursuits.
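
    The nearest-neighbour matching step can be sketched as below: a point source whose centroid displaces by a plausible amount between wavelength components is flagged as a moving-object candidate. The displacement bounds are assumptions.

        import numpy as np

        def moving_candidates(band1_xy, band2_xy, min_px=1.0, max_px=20.0):
            """Match centroids across two wavelength components taken a short
            time apart; a nearest-neighbour displacement within the bounds
            flags a possible asteroid.

            band1_xy, band2_xy: arrays of shape (N, 2) of centroid positions.
            """
            band1_xy = np.asarray(band1_xy, float)
            band2_xy = np.asarray(band2_xy, float)
            hits = []
            for p in band1_xy:
                d = np.linalg.norm(band2_xy - p, axis=1)
                j = int(np.argmin(d))
                if min_px <= d[j] <= max_px:
                    hits.append((tuple(p), tuple(band2_xy[j]), float(d[j])))
            return hits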

  13. 3D Maps from Multiple MRI Illustrate Changing Atrophy Patterns as Subjects Progress from MCI to AD

    PubMed Central

    Whitwell, Jennifer L; Przybelski, Scott; Weigand, Stephen D; Knopman, David S; Boeve, Bradley F; Petersen, Ronald C; Jack, Clifford R

    2009-01-01

    Summary: Mild cognitive impairment (MCI), particularly the amnestic subtype (aMCI), is considered a transitional stage between normal aging and a diagnosis of clinically probable Alzheimer's disease (AD). The aMCI construct is particularly useful as it provides an opportunity to assess a clinical stage which in most subjects represents prodromal AD. The aim of this study was to assess the progression of cerebral atrophy over multiple serial MRI during the period from aMCI to conversion to AD. Thirty-three subjects were selected who fulfilled clinical criteria for aMCI and had three serial MRI scans: the first scan approximately three years before conversion to AD, the second scan approximately one year before conversion, and the third scan at the time of conversion from aMCI to AD. A group of 33 healthy controls were age- and gender-matched to the study cohort. Voxel-based morphometry (VBM) was used to assess patterns of grey matter atrophy in the aMCI subjects at each time-point compared to the control group. Customized templates and prior probability maps were used to avoid normalization and segmentation bias. The pattern of grey matter loss in the aMCI subject scans three years before conversion was focused primarily on the medial temporal lobes, including the amygdala, anterior hippocampus and entorhinal cortex, with some additional involvement of the fusiform gyrus, compared to controls. The extent and magnitude of the cerebral atrophy had progressed further by the time the subjects were one year before conversion. At this point atrophy in the temporal lobes had spread to include the middle temporal gyrus, and extended into more posterior regions of the temporal lobe to include the entire extent of the hippocampus. The parietal lobe also started to become involved. By the time the subjects had converted to a clinical diagnosis of AD the pattern of grey matter atrophy had become still more widespread, with more severe involvement of the medial temporal lobes and the temporoparietal association cortices and, for the first time, substantial involvement of the frontal lobes. This pattern of progression fits well with the Braak and Braak neurofibrillary pathological staging scheme in AD. It suggests that the earliest changes occur in the anterior medial temporal lobe and fusiform gyrus, and that these changes occur at least three years before conversion to AD. These results also suggest that 3-dimensional patterns of grey matter atrophy may help to predict the time to conversion in subjects with aMCI. PMID:17533169

  14. Scanning wind-vector scatterometers with two pencil beams

    NASA Technical Reports Server (NTRS)

    Kirimoto, T.; Moore, R. K.

    1984-01-01

    A scanning pencil-beam scatterometer for ocean wind-vector determination has potential advantages over the fan-beam systems used and proposed heretofore. The pencil beam permits use of lower transmitter power, and at the same time allows concurrent use of the reflector by a radiometer to correct for atmospheric attenuation and by other radiometers for other purposes. The use of dual beams based on the same scanning reflector permits four looks at each cell on the surface, thereby improving accuracy and allowing alias removal. Simulation results are described for a spaceborne dual-beam scanning scatterometer with 1 W of radiated power at an orbital altitude of 900 km. Two novel algorithms for removing the aliases in the wind vector are described, in addition to an adaptation of the conventional maximum likelihood algorithm. The new algorithms are more effective at alias removal than the conventional one. Measurement errors for the wind speed, assuming perfect alias removal, were found to be less than 10%.

  15. An Algorithm to Identify and Localize Suitable Dock Locations from 3-D LiDAR Scans

    DTIC Science & Technology

    2013-05-10

    Graves, Mitchell Robert. The report presents an algorithm to identify and localize suitable dock locations from 3-D Light Detection and Ranging (LiDAR) scans. A LiDAR sensor collects range images from a rotating array of vertically aligned lasers. Keywords: algorithm, dock locations, point clouds, LiDAR.

  16. Penalized likelihood and multi-objective spatial scans for the detection and inference of irregular clusters

    PubMed Central

    2010-01-01

    Background: Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistic and the geometric penalty function. Results & Discussion: We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of the attainment function is used. In this paper we compare different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions: We show that, compared to the other single-objective algorithms, multi-objective scans present better performance regarding power, sensitivity and positive predictive value. The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
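
    Identifying the disconnection nodes that the cohesion function penalizes is a classic articulation-point computation; the sketch below uses networkx on a hypothetical region adjacency and is not the paper's penalty function itself.

        import networkx as nx

        def disconnection_nodes(cluster_regions, adjacency):
            """Regions whose removal disconnects the candidate cluster.

            cluster_regions: region ids forming the candidate cluster;
            adjacency: dict mapping region id -> neighbouring region ids.
            """
            members = set(cluster_regions)
            g = nx.Graph()
            g.add_nodes_from(members)
            for r in members:
                for s in adjacency.get(r, ()):
                    if s in members:
                        g.add_edge(r, s)
            return set(nx.articulation_points(g))

        # Chain 1-2-3 with region 4 hanging off 2: removing 2 disconnects it.
        adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2]}
        print(disconnection_nodes([1, 2, 3, 4], adj))  # {2}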

  17. Novel medical image enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
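
    An alpha-trimmed mean filter of the kind named above can be sketched directly (window size and trimming fraction are illustrative); sharpening then typically adds back a scaled difference between the original and the filtered image.

        import numpy as np

        def alpha_trimmed_mean(img, size=3, alpha=0.25):
            """At each pixel, sort the size x size neighbourhood, drop the
            alpha fraction of lowest and highest values, average the rest."""
            pad = size // 2
            padded = np.pad(np.asarray(img, float), pad, mode="reflect")
            trim = int(alpha * size * size)
            out = np.empty(img.shape, float)
            for y in range(img.shape[0]):
                for x in range(img.shape[1]):
                    w = np.sort(padded[y:y + size, x:x + size].ravel())
                    out[y, x] = w[trim:w.size - trim].mean()
            return out

        def unsharp(img, k=1.0):
            """Simple sharpening: original plus k times the detail residue."""
            img = np.asarray(img, float)
            return img + k * (img - alpha_trimmed_mean(img))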

  18. Influence of Contrast Media on Bone Mineral Density (BMD) Measurements from Routine Contrast-Enhanced MDCT Datasets using a Phantom-less BMD Measurement Tool.

    PubMed

    Toelly, Andrea; Bardach, Constanze; Weber, Michael; Gong, Rui; Lai, Yanbo; Wang, Pei; Guo, Yulin; Kirschke, Jan; Baum, Thomas; Gruber, Michael

    2017-06-01

    Aim: To evaluate the differences in phantom-less bone mineral density (BMD) measurements in contrast-enhanced routine MDCT scans at different contrast phases, and to develop an algorithm for calculating a reliable BMD value. Materials and Methods: 112 postmenopausal women from the age of 40 to 77 years (mean age: 57.31 years; SD 9.61) who underwent a clinically indicated MDCT scan, consisting of an unenhanced, an arterial, and a venous phase, were included. A retrospective analysis of the BMD values of the Th12 to L4 vertebrae in each phase was performed using a commercially available phantom-less measurement tool. Results: The mean BMD value in the unenhanced MDCT scans was 79.76 mg/cm³ (SD 31.20), in the arterial phase it was 85.09 mg/cm³ (SD 31.61), and in the venous phase it was 86.18 mg/cm³ (SD 31.30). A significant difference (p < 0.001) was found between BMD values on unenhanced and contrast-enhanced MDCT scans. There was no significant difference between BMD values in the arterial and venous phases (p = 0.228). The following conversion formulas were calculated using linear regression: unenhanced BMD = -2.287 + 0.964 * [arterial BMD value] and unenhanced BMD = -4.517 + 0.978 * [venous BMD value]. The intrarater agreement of BMD measurements was calculated with an intraclass correlation (ICC) of 0.984 and the interobserver reliability with an ICC of 0.991. Conclusion: Phantom-less BMD measurements in contrast-enhanced MDCT scans result in increased mean BMD values, but, with the formulas applied in our study, a reliable BMD value can be calculated. However, the mean BMD values did not differ significantly between the arterial and venous phases. Key points: BMD can be assessed on routine CT scans using a phantom-less tool. Intravenous contrast agent significantly elevates BMD values measured on routine CT scans. BMD values measured in the arterial and venous phases did not differ significantly. Conversion formulas were defined for the calculation of a reliable BMD. The phantom-less tool showed good reliability and is a promising method.
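
    The two reported regression formulas translate directly into code (the coefficients are taken verbatim from the abstract):

        def bmd_unenhanced_from_arterial(arterial_bmd):
            """Estimate unenhanced BMD (mg/cm^3) from an arterial-phase value."""
            return -2.287 + 0.964 * arterial_bmd

        def bmd_unenhanced_from_venous(venous_bmd):
            """Estimate unenhanced BMD (mg/cm^3) from a venous-phase value."""
            return -4.517 + 0.978 * venous_bmd

        # The study's mean arterial value maps back near the unenhanced mean:
        print(round(bmd_unenhanced_from_arterial(85.09), 2))  # ~79.74 vs 79.76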

  19. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems

    NASA Astrophysics Data System (ADS)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.

    2018-01-01

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is  <  ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
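
    The three contrasts being compared can be written down in a few lines for a pair of repeated complex B-scans; this is the textbook inter-repeat formulation, not necessarily the exact estimators used in the paper.

        import numpy as np

        def octa_signals(rep1, rep2):
            """Inter-repeat OCTA contrasts from two complex OCT B-scans.

            Returns the amplitude-difference, phase-difference and
            complex-difference flow signals for the repeated acquisitions.
            """
            amp = np.abs(np.abs(rep1) - np.abs(rep2))
            phase = np.abs(np.angle(rep1 * np.conj(rep2)))
            cplx = np.abs(rep1 - rep2)
            return amp, phase, cplx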

  20. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    PubMed

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is  <  ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  1. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beam-splitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance with a nominal real-time delay of less than one second between illumination and display.

  2. Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.

    PubMed

    Tang, Shaojie; Tang, Xiangyang

    2016-09-01

    The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical reconstruction from cone-beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane, determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. The solution is an integration of three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer simulated Forbild head and thoracic phantoms that are rigorous in inspecting the reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts existing in the images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm located at off-central planes. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. The proposed 3-D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications.

  3. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    NASA Astrophysics Data System (ADS)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithm researchers, and many resources are invested in devising more efficient sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting is considered a fundamental problem in the study of algorithms for several reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results are promising.

  4. Registration of 3D spectral OCT volumes combining ICP with a graph-based approach

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan

    2012-02-01

    The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.

  5. Voice Conversion Using Pitch Shifting Algorithm by Time Stretching with PSOLA and Re-Sampling

    NASA Astrophysics Data System (ADS)

    Mousa, Allam

    2010-01-01

    Voice changing has many applications in industry and commerce. This paper emphasizes voice conversion using a pitch shifting method which depends on detecting the pitch of the signal (fundamental frequency) using Simplified Inverse Filter Tracking (SIFT), changing it according to the target pitch period using time stretching with the Pitch Synchronous Overlap-Add (PSOLA) algorithm, and then resampling the signal in order to keep the same play rate. The same study was performed to examine the effect of voice conversion when Arabic speech signals are considered. Treatment of certain Arabic voiced vowels and the conversion between male and female speech showed some expansion or compression in the resulting speech. A comparison in terms of pitch shifting is presented; analysis was performed both for a single frame and for a full segmentation of speech.
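
    The stretch-then-resample idea can be sketched with a naive overlap-add stretcher standing in for true PSOLA (fixed hops rather than pitch-synchronous frame placement; the frame and hop sizes are illustrative):

        import numpy as np

        def time_stretch_ola(x, rate, frame=1024, hop=256):
            """Naive overlap-add time stretch by `rate` (>1 lengthens)."""
            out = np.zeros(int(len(x) * rate) + frame)
            norm = np.zeros_like(out)
            win = np.hanning(frame)
            for i in range((len(x) - frame) // hop):
                src, dst = i * hop, int(i * hop * rate)
                out[dst:dst + frame] += x[src:src + frame] * win
                norm[dst:dst + frame] += win
            return out / np.maximum(norm, 1e-8)

        def pitch_shift(x, semitones):
            """Shift pitch without changing duration: stretch, then resample."""
            rate = 2.0 ** (semitones / 12.0)
            stretched = time_stretch_ola(np.asarray(x, float), rate)
            idx = np.arange(len(x)) * rate   # resample back to original length
            return np.interp(idx, np.arange(len(stretched)), stretched)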

  6. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan, utilizing the redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarity of each pixel with other pixels is calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as those from conventional two-full-scan DECT.
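
    The exponential similarity weights can be sketched as below; the Gaussian-style form and the sigma value are assumptions, and the dense pairwise matrix is only practical for small patches (the paper's implementation details are not reproduced here).

        import numpy as np

        def similarity_weights(patch, sigma=30.0):
            """Pairwise similarity from pixel value differences,
            w_ij = exp(-(p_i - p_j)**2 / (2 * sigma**2)).

            patch: small 2D image region; returns an (N, N) weight matrix
            for its N pixels, usable to form similarity-weighted averages.
            """
            p = np.asarray(patch, float).ravel()
            diff = p[:, None] - p[None, :]
            return np.exp(-diff ** 2 / (2.0 * sigma ** 2))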

  7. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often times, however, it is impractical to achieve accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult to solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution—thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle−Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295
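
    The convex feasibility idea itself, finding a point in the intersection of convex sets, can be illustrated with simple alternating projections (POCS); the accelerated Chambolle-Pock algorithms adapted in the paper converge faster, but the toy below shows the type of fixed point being sought.

        import numpy as np

        def pocs(project_ops, x0, iters=200):
            """Toy convex feasibility via alternating projections.

            project_ops: list of functions, each projecting onto one convex
            set; repeatedly cycling through them converges to a point in the
            intersection when the sets intersect.
            """
            x = np.asarray(x0, float)
            for _ in range(iters):
                for proj in project_ops:
                    x = proj(x)
            return x

        # Example: intersect the box [0, 1]^2 with the line x + y = 1.
        box = lambda x: np.clip(x, 0.0, 1.0)
        line = lambda x: x - (x.sum() - 1.0) / 2.0
        print(pocs([box, line], np.array([3.0, -2.0])))  # a point on both sets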

  8. Scanning laser polarimetry using variable corneal compensation in the detection of glaucoma with localized visual field defects.

    PubMed

    Kook, Michael S; Cho, Hyun-soo; Seong, Mincheol; Choi, Jaewan

    2005-11-01

    To evaluate the ability of scanning laser polarimetry parameters and a novel deviation map algorithm to discriminate between healthy and early glaucomatous eyes with localized visual field (VF) defects confined to one hemifield. Prospective case-control study. Seventy glaucomatous eyes with localized VF defects and 66 normal controls. A Humphrey field analyzer 24-2 full-threshold test and scanning laser polarimetry with variable corneal compensation were used. We assessed the sensitivity and specificity of scanning laser polarimetry parameters, sensitivity and cutoff values for scanning laser polarimetry deviation map algorithms at different specificity values (80%, 90%, and 95%) in the detection of glaucoma, and correlations between the algorithms of scanning laser polarimetry and of the pattern deviation derived from Humphrey field analyzer testing. There were significant differences between the glaucoma group and normal subjects in the mean parametric values of the temporal, superior, nasal, inferior, temporal (TSNIT) average, superior average, inferior average, and TSNIT standard deviation (SD) (P<0.05). The sensitivity and specificity of each scanning laser polarimetry variable were as follows: TSNIT, 44.3% (95% confidence interval [CI], 39.8%-49.8%) and 100% (95.4%-100%); superior average, 30% (25.5%-34.5%) and 97% (93.5%-100%); inferior average, 45.7% (42.2%-49.2%) and 100% (95.8%-100%); and TSNIT SD, 30% (25.9%-34.1%) and 97% (93.2%-100%), respectively (when abnormal was defined as P<0.05). Based on nerve fiber indicator cutoff values of ≥30 and ≥51 to indicate glaucoma, sensitivities were 54.3% (50.1%-58.5%) and 10% (6.4%-13.6%), and specificities were 97% (93.2%-100%) and 100% (95.8%-100%), respectively. The range of areas under the receiver operating characteristic curves using the scanning laser polarimetry deviation map algorithm was 0.790 to 0.879. Overall sensitivities combining each probability scale and severity score at 80%, 90%, and 95% specificities were 90.0% (95% CI, 86.4%-93.6%), 71.4% (67.4%-75.4%), and 60.0% (56.2%-63.8%), respectively. There was a statistically significant correlation between the scanning laser polarimetry severity score and the VF severity score (R2 = 0.360, P<0.001). Scanning laser polarimetry parameters may not be sufficiently sensitive to detect glaucomatous patients with localized VF damage. Our algorithm using the scanning laser polarimetry deviation map may enhance the understanding of scanning laser polarimetry printouts in terms of the locality, deviation size, and severity of localized retinal nerve fiber layer defects in eyes with localized VF loss.

  9. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836

  10. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.

  11. Surface registration technique for close-range mapping applications

    NASA Astrophysics Data System (ADS)

    Habib, Ayman F.; Cheng, Rita W. T.

    2006-08-01

    Close-range mapping applications such as cultural heritage restoration, virtual reality modeling for the entertainment industry, and anatomical feature recognition for medical activities require 3D data that is usually acquired by high resolution close-range laser scanners. Since these datasets are typically captured from different viewpoints and/or at different times, accurate registration is a crucial procedure for 3D modeling of mapped objects. Several registration techniques are available that work directly with the raw laser points or with extracted features from the point cloud. Some examples include the commonly known Iterative Closest Point (ICP) algorithm and a recently proposed technique based on matching spin-images. This research focuses on developing a surface matching algorithm that is based on the Modified Iterated Hough Transform (MIHT) and ICP to register 3D data. The proposed algorithm works directly with the raw 3D laser points and does not assume point-to-point correspondence between two laser scans. The algorithm can simultaneously establish correspondence between two surfaces and estimates the transformation parameters relating them. Experiment with two partially overlapping laser scans of a small object is performed with the proposed algorithm and shows successful registration. A high quality of fit between the two scans is achieved and improvement is found when compared to the results obtained using the spin-image technique. The results demonstrate the feasibility of the proposed algorithm for registering 3D laser scanning data in close-range mapping applications to help with the generation of complete 3D models.

  12. Algorithmic commonalities in the parallel environment

    NASA Technical Reports Server (NTRS)

    Mcanulty, Michael A.; Wainer, Michael S.

    1987-01-01

    The ultimate aim of this project was to analyze procedures from substantially different application areas to discover what is either common or peculiar in the process of conversion to the Massively Parallel Processor (MPP). Three areas were identified: molecular dynamic simulation, production systems (rule systems), and various graphics and vision algorithms. To date, only selected graphics procedures have been investigated. They are the most readily available, and produce the most visible results. These include simple polygon patch rendering, raycasting against a constructive solid geometric model, and stochastic or fractal based textured surface algorithms. Only the simplest of conversion strategies, mapping a major loop to the array, has been investigated so far. It is not entirely satisfactory.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grzetic, S; Weldon, M; Noa, K

    Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for the GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays, from Q-fix and Diacor, are included in our analysis. Hounsfield Units (HU) generated with the WideView algorithm varied in the extended field (beyond 50 cm), and the scanned object's border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and a CIRS062 Electron Density Phantom were scanned on a GE RT590 CT-Simulator. The phantoms were positioned in multiple locations within the scan field of view so that some of the density plugs were outside the 50 cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both as averages over a volume and as profiles. Results: HU values are consistent between the two algorithms. Low-density material shows a slight increase in HU value and high-density material a slight decrease as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (it should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50 cm FOV are within 40 HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as it approaches the scanner edge. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV create inconsistent object borders. HU consistency is within GE specifications and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.

  14. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR)

    NASA Astrophysics Data System (ADS)

    Wang, Tonghe; Zhu, Lei

    2016-09-01

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan, for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on the second CT image since both DECT scans are performed on the same object. The second CT image is reconstructed from the reduced projections by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix used for CT reconstruction, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by an order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves on TVR in the reconstruction accuracy of a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50- and 20-view scans in spatial resolution, with a higher frequency at the 10% modulation-transfer-function value by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise standard deviation with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.

  15. Using parallel computing methods to improve log surface defect detection methods

    Treesearch

    R. Edward Thomas; Liya Thomas

    2013-01-01

    Determining the size and location of surface defects is crucial to evaluating the potential yield and value of hardwood logs. Recently a surface defect detection algorithm was developed using the Java language. This algorithm was developed around an earlier laser scanning system that had poor resolution along the length of the log (15 scan lines per foot). A newer...

  16. The impact on CT dose of the variability in tube current modulation technology: a theoretical investigation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Segars, W. Paul; Samei, Ehsan

    2014-08-01

    Body CT scans are routinely performed using tube-current-modulation (TCM) technology. There is notable variability across CT manufacturers in terms of how TCM technology is implemented. Some manufacturers aim to provide uniform image noise across body regions and patient sizes, whereas others aim to provide lower noise for smaller patients. The purpose of this study was to conduct a theoretical investigation to understand how manufacturer-dependent TCM schemes affect organ dose, and to develop a generic approach for assessing organ dose across TCM schemes. The adult reference female extended cardiac-torso (XCAT) phantom was used for this study. A ray-tracing method was developed to calculate the attenuation of the phantom for a given projection angle based on phantom anatomy, CT system geometry, x-ray energy spectrum, and bowtie filter filtration. The tube current (mA) for a given projection angle was then calculated as a log-linear function of the attenuation along that projection. The slope of this function, termed the modulation control strength α, was varied from 0 to 1 to emulate the variability in TCM technology. Using a validated Monte Carlo program, organ dose was simulated for five α values (α = 0, 0.25, 0.5, 0.75, and 1) in the absence and presence of a realistic system mA limit. Organ dose was further normalized by volume-weighted CT dose index (CTDIvol) to obtain conversion factors (h factors) that are relatively independent of system specifics and scan parameters. For both chest and abdomen-pelvis scans and for 24 radiosensitive organs, organ dose conversion factors varied with α, following second-order polynomial equations. This result suggests the need for α-specific organ dose conversion factors (i.e., conversion factors specific to the modulation scheme used). On the other hand, across the full range of α values, organ dose in a TCM scan could be derived from the conversion factors established for a fixed-mA scan (hFIXED). This was possible by multiplying hFIXED by a revised definition of CTDIvol that accounts for two factors: (a) the tube currents at the location of an organ and (b) the variation in organ volume along the longitudinal direction. This α-generic approach represents an approximation. The error associated with this approximation was evaluated using the α-specific organ dose (i.e., the organ dose obtained by using α-specific mA profiles as inputs into the Monte Carlo simulation) as the reference standard. When the mA profiles were constrained by a realistic system limit, this α-generic approach had errors of less than ~20% for the full range of α values. This was the case for 24 radiosensitive organs in both chest and abdomen-pelvis CT scans, with the exception of the thyroid in the chest scan and the bladder in the abdomen-pelvis scan. For these two organs, the errors were less than ~40%. The results of this theoretical study suggest that, knowing the mA modulation profile and the fixed-mA conversion factors, organ dose may be estimated for a TCM scan independent of the specific modulation scheme applied.
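
    A minimal sketch of the modulation model described above: ln(mA) is taken to vary linearly in the per-view attenuation with slope α, so α = 0 reproduces a fixed-mA scan and α = 1 the strongest modulation. The mean-centering, the normalization to a target average tube current, and the 800 mA cap are hypothetical choices, not values from the study.

    ```python
    import numpy as np

    def tcm_profile(attenuation, alpha, target_mA=200.0, mA_limit=800.0):
        """Generic TCM sketch: ln(mA) linear in the per-view attenuation with
        slope alpha (the modulation control strength). alpha = 0 gives a
        fixed-mA scan; alpha = 1 the strongest modulation. The target average
        tube current and the mA cap are illustrative assumptions.
        """
        attenuation = np.asarray(attenuation, dtype=float)
        mA = np.exp(alpha * (attenuation - attenuation.mean()))
        mA *= target_mA / mA.mean()             # hold the average tube current
        return np.minimum(mA, mA_limit)         # realistic system mA limit

    # example: sinusoidal attenuation over one gantry rotation
    views = np.linspace(0.0, 2.0 * np.pi, 360)
    profile = tcm_profile(4.0 + np.cos(2.0 * views), alpha=0.5)
    ```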

  17. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.

  18. Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering

    PubMed Central

    Tang, Shaojie; Tang, Xiangyang

    2016-01-01

    Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone beam (CB) scan data and axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended for image reconstruction from axial CB scan data, but induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms that are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts at off-central planes in the images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512

  19. TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: The conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in a dose increase as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the mean CT reconstruction errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density with similar quality compared with two full scans. Our future work includes more phantom studies to validate the performance of our method.
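
    As a sketch of the edge-weighting idea, the snippet below derives a weight map from the gradient magnitude of the first-scan FBP image and uses it to attenuate a total-variation-style penalty at strong edges, so the few-view reconstruction is not smoothed across structures. The quantile threshold and the 0.1 edge weight are hypothetical; the paper's exact objective is not reproduced here.

    ```python
    import numpy as np

    def edge_weights(first_scan, q=0.9):
        """Weight map from the full-scan FBP image: strong gradients (edges)
        receive a small weight so the regularizer does not smooth across
        them. The quantile q and the 0.1 edge weight are hypothetical.
        """
        gy, gx = np.gradient(np.asarray(first_scan, dtype=float))
        g = np.hypot(gx, gy)
        return np.where(g > np.quantile(g, q), 0.1, 1.0)

    def weighted_tv(image, w):
        """Edge-weighted anisotropic total variation of the few-view estimate."""
        dy = np.abs(np.diff(image, axis=0))     # vertical finite differences
        dx = np.abs(np.diff(image, axis=1))     # horizontal finite differences
        return float((w[:-1, :] * dy).sum() + (w[:, :-1] * dx).sum())
    ```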

  20. Novel Automated Approach to Predict the Outcome of Laser Peripheral Iridotomy for Primary Angle Closure Suspect Eyes Using Anterior Segment Optical Coherence Tomography.

    PubMed

    Koh, Victor; Swamidoss, Issac Niwas; Aquino, Maria Cecilia D; Chew, Paul T; Sng, Chelvin

    2018-04-27

    To develop an algorithm to predict the success of laser peripheral iridotomy (LPI) in primary angle closure suspect (PACS) eyes, using pre-treatment anterior segment optical coherence tomography (ASOCT) scans. A total of 116 eyes with PACS underwent LPI, and time-domain ASOCT scans (temporal and nasal cuts) were performed before and 1 month after LPI. All the post-treatment scans were classified into one of the following categories: (a) both angles open, (b) one of two angles open, and (c) both angles closed. After LPI, success is defined as one or more angles changing from closed to open. In the proposed method, the pre- and post-LPI ASOCT scans were registered at the corresponding angles based on similarities between the respective local descriptor features, and a random sample consensus technique was used to identify the largest consensus set of correspondences between the pre- and post-LPI ASOCT scans. Subsequently, features such as the correlation coefficient (CC) and structural similarity index (SSIM) were extracted and correlated with the success of LPI. We included 116 eyes, of which 91 (78.44%) fulfilled the criteria for success after LPI. Using the CC and SSIM index scores from this training set of ASOCT images, our algorithm showed that the success of LPI in eyes with narrow angles can be predicted with 89.7% accuracy, a specificity of 95.2%, and a sensitivity of 36.4% based on pre-LPI ASOCT scans only. Using pre-LPI ASOCT scans, our proposed algorithm showed good accuracy in predicting the success of LPI for PACS eyes. This fully automated algorithm could aid decision making in offering LPI as a prophylactic treatment for PACS.

  1. SU-C-206-07: A Practical Sparse View Ultra-Low Dose CT Acquisition Scheme for PET Attenuation Correction in the Extended Scan Field-Of-View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, J; Fan, J; Gopinatha Pillai, A

    Purpose: To further reduce CT dose, a practical sparse-view acquisition scheme is proposed to provide the same attenuation estimation for PET imaging in the extended scan field-of-view as a higher-dose scan. Methods: CT scans are often used for PET attenuation correction and can be acquired at very low CT radiation dose. Low dose techniques often employ low tube voltage/current accompanied by a smooth filter before backprojection to reduce CT image noise. These techniques can introduce bias in the conversion from HU to attenuation values, especially in the extended CT scan field-of-view (FOV). In this work, we propose an ultra-low dose CT technique for PET attenuation correction based on sparse-view acquisition. That is, instead of acquiring the full set of views, only a fraction of the views are acquired. We tested this technique on a 64-slice GE CT scanner using multiple phantoms. CT scan FOV truncation completion was performed based on the published water-cylinder extrapolation algorithm. A number of continuous views per rotation were tested: 984 (full), 246, 123, 82 and 62, corresponding to CT dose reductions of none, 4x, 8x, 12x and 16x. We also simulated sparse-view acquisition by skipping views from the fully-acquired view data. Results: FBP reconstruction with the Q.AC filter on reduced views in the full extended scan field-of-view possesses similar image quality to reconstruction on the acquired full view data. The results showed further potential for dose reduction compared to the full acquisition, without sacrificing any significant attenuation support to the PET. Conclusion: With the proposed sparse-view method, one can potentially achieve at least 2x more CT dose reduction compared to the current Ultra-Low Dose (ULD) PET/CT protocol. A pre-scan based dose modulation scheme can be combined with the above sparse-view approaches, which can even further reduce the CT scan dose during a PET/CT exam.

  2. Sub-pixel analysis to support graphic security after scanning at low resolution

    NASA Astrophysics Data System (ADS)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

    Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital often extends over a long time and proceeds in large steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 Act (Check Clearing for the 21st Century Act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use this situation of check scanning as our primary benchmark for graphic security features after scanning. We will first present a quick review of the most common graphic security features currently found on checks, with their specific purpose, qualities and disadvantages, and we demonstrate their poor survivability after scanning in the average scanning conditions expected from the Check-21 Act. We will then present a novel method of measurement of distances between and rotations of line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we will apply our method to fraud detection of documents after gray-scale scanning at 300 dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.

  3. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of deep learning. For training and testing of the algorithms, an in-house dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  4. Wavelength converter placement for different RWA algorithms in wavelength-routed all-optical networks

    NASA Astrophysics Data System (ADS)

    Chu, Xiaowen; Li, Bo; Chlamtac, Imrich

    2002-07-01

    Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that wavelength converter placement and RWA algorithms are closely related, in the sense that a well-designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic wavelength converter placement algorithm called Minimum Blocking Probability First (MBPF). Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called Weighted Maximum Segment Length (WMSL). The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks: the 14-node NSFNET, the 19-node EON and the 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but can also achieve almost the same performance as full wavelength conversion under the same RWA algorithm.

  5. Conversion of wastelands into state ownership for the needs of high-rise construction

    NASA Astrophysics Data System (ADS)

    Ganebnykh, Elena

    2018-03-01

    High-rise construction in big cities faces the problem of land shortage in downtown areas. Audit of economic complexes showed a large volume of wastelands. The conversion of wastelands into state and municipal ownership helps in part to solve the problem of the lack of space for high-rise construction in the urban area in the format of infill construction. The article investigates the problem of the conversion of wastelands into state and municipal ownership. The research revealed no clear algorithm for converting wastelands into state and municipal ownership. To form a unified system for identifying such plots, a universal algorithm was developed to identify and convert ownerless immovable property into state or municipal ownership.

  6. Generalized look-ahead number conversion from signed digit to complement representation with optical logic operations

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Li, Guoqiang

    2001-12-01

    In this paper a generalized look-ahead logic algorithm for number conversion from signed-digit to complement representation is developed. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using an electron-trapping device is employed, which is suitable for realizing complex logic functions in the form of sum-of-products. The proposed algorithm and architecture are compatible with a general-purpose optoelectronic computing system.
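
    For reference, a plain arithmetic conversion from a modified signed-digit (MSD) word, with digits in {-1, 0, +1}, to a two's-complement bit string is shown below. This is only a correctness check; the paper's contribution is performing the same conversion with carry-free look-ahead binary logic suited to parallel optical gates.

    ```python
    def msd_to_twos_complement(digits, width=8):
        """Convert an MSD word (digits in {-1, 0, +1}, most-significant first)
        to a two's-complement bit string. Plain arithmetic for checking
        results, not the paper's carry-free look-ahead logic.
        """
        value = 0
        for d in digits:
            value = 2 * value + d               # radix-2 accumulation
        return format(value & ((1 << width) - 1), f"0{width}b")

    # [1, 0, -1] encodes 4 - 1 = 3; [-1, 0, 1] encodes -3
    print(msd_to_twos_complement([1, 0, -1]))   # 00000011
    print(msd_to_twos_complement([-1, 0, 1]))   # 11111101
    ```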

  7. Validation of the "HAMP" mapping algorithm: a tool for long-term trauma research studies in the conversion of AIS 2005 to AIS 98.

    PubMed

    Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard

    2011-07-01

    There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm, the "Harborview AIS Mapping Program (HAMP)", to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP and manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
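
    The kappa statistic used above is Cohen's κ = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from the raters' marginal frequencies. A generic sketch, not tied to HAMP's record format:

    ```python
    import numpy as np

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: (p_o - p_e) / (1 - p_e), with p_o the observed
        agreement and p_e the chance agreement from marginal frequencies.
        """
        a, b = np.asarray(rater_a), np.asarray(rater_b)
        labels = np.union1d(a, b)
        p_o = np.mean(a == b)
        p_e = sum(np.mean(a == c) * np.mean(b == c) for c in labels)
        return (p_o - p_e) / (1.0 - p_e)
    ```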

  8. Investigation of BPF algorithm in cone-beam CT with 2D general trajectories.

    PubMed

    Zou, Jing; Gui, Jianbao; Rong, Junyan; Hu, Zhanli; Zhang, Qiyang; Xia, Dan

    2012-01-01

    A mathematical derivation was conducted to illustrate that exact 3D image reconstruction can be achieved for z-homogeneous phantoms from data acquired with 2D general trajectories using the backprojection filtration (BPF) algorithm. The conclusion was verified by computer simulation and by an experimental result with a circular scanning trajectory. Furthermore, the effect of the degree of non-uniformity along the z-axis of the phantoms on the accuracy of 3D reconstruction by the BPF algorithm was investigated by numerical simulation with a gradual phantom and a disk phantom. The preliminary results showed that the performance of the BPF algorithm improves with the z-axis homogeneity of the scanned object.

  9. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates while preserving features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply to each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic-scale images, an additional first step to remove strong line-by-line background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.
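
    A minimal sketch of the first (large-scale) step: the forward and backward traces of each scan line are registered by maximizing their cross-correlation and the aligned pair is averaged. The correlation search window and the simple averaging are assumptions; the "rubber band" model and the CIAFA solver are not reproduced here.

    ```python
    import numpy as np

    def align_scan_lines(forward, backward, max_shift=32):
        """Register the forward and backward trace of each scan line by
        cross-correlation, then average the aligned pair. Assumes max_shift
        is smaller than the line length.
        """
        out = np.empty(forward.shape, dtype=float)
        for i, (f, b) in enumerate(zip(forward, backward)):
            f0, b0 = f - f.mean(), b - b.mean()
            corr = np.correlate(f0, b0, mode="full")
            center = len(f) - 1                       # zero-lag index
            window = corr[center - max_shift : center + max_shift + 1]
            shift = int(np.argmax(window)) - max_shift
            out[i] = (f + np.roll(b, shift)) / 2.0    # average aligned traces
        return out
    ```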

  10. Improving angular resolution with Scan-MUSIC algorithm for real complex targets using 35-GHz millimeter-wave radar

    NASA Astrophysics Data System (ADS)

    Ly, Canh

    2004-08-01

    The Scan-MUSIC (SMUSIC) algorithm, developed by the U.S. Army Research Laboratory (ARL), improves angular resolution for target detection with the use of a single rotatable radar scanning the angular region of interest. This algorithm has been adapted and extended from the MUSIC algorithm used for linear sensor arrays. Previously, it was shown that the SMUSIC algorithm and a millimeter-wave radar can be used to resolve two closely spaced point targets that exhibit constructive interference, but not targets that exhibit destructive interference; the algorithm therefore had some limitations for point targets. In this paper, the SMUSIC algorithm is applied to the problem of resolving real complex scatterer-type targets, which is more useful and of greater practical interest, particularly for future Army radar systems. The paper presents results for the angular resolution of two targets, an M60 tank and an M113 Armored Personnel Carrier (APC), that were within the mainlobe of a Ka-band radar antenna. In particular, we applied the algorithm to resolve the centroids of the targets when they were placed within the beamwidth of the antenna. The coherent data collected using the stepped-frequency radar were converted to magnitude data for the SMUSIC calculation. Even though the signal returns differed significantly for different orientations and offsets of the two targets, we resolved the two target centroids when they were as close as about 1/3 of the antenna beamwidth.
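
    For background, the classical MUSIC pseudospectrum for a uniform linear array is sketched below; SMUSIC adapts this noise-subspace idea to a single scanning antenna, which the sketch does not attempt. The half-wavelength element spacing and the snapshot covariance estimate are the usual textbook assumptions.

    ```python
    import numpy as np

    def music_spectrum(snapshots, n_sources, angles_deg, d=0.5):
        """Classical MUSIC pseudospectrum for an n-element uniform linear
        array with spacing d (in wavelengths). snapshots is a complex array
        of shape (n_elements, n_snapshots).
        """
        n = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
        _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
        En = vecs[:, : n - n_sources]            # noise subspace
        k = np.arange(n)
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(2j * np.pi * d * k * np.sin(theta))  # steering vector
            spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.asarray(spectrum)
    ```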

  11. Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study

    PubMed Central

    Rudyanto, Rina D.; Kerkstra, Sjoerd; van Rikxoort, Eva M.; Fetita, Catalin; Brillet, Pierre-Yves; Lefevre, Christophe; Xue, Wenzhe; Zhu, Xiangjun; Liang, Jianming; Öksüz, İlkay; Ünay, Devrim; Kadipaşaoğlu, Kamuran; Estépar, Raúl San José; Ross, James C.; Washko, George R.; Prieto, Juan-Carlos; Hoyos, Marcela Hernández; Orkisz, Maciej; Meine, Hans; Hüllebrand, Markus; Stöcker, Christina; Mir, Fernando Lopez; Naranjo, Valery; Villanueva, Eliseo; Staring, Marius; Xiao, Changyan; Stoel, Berend C.; Fabijanska, Anna; Smistad, Erik; Elster, Anne C.; Lindseth, Frank; Foruzan, Amir Hossein; Kiros, Ryan; Popuri, Karteek; Cobzas, Dana; Jimenez-Carretero, Daniel; Santos, Andres; Ledesma-Carbayo, Maria J.; Helmberger, Michael; Urschler, Martin; Pienn, Michael; Bosboom, Dennis G.H.; Campo, Arantza; Prokop, Mathias; de Jong, Pim A.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate; van Ginneken, Bram

    2016-01-01

    The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases. PMID:25113321

  12. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
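
    A hedged sketch of the first of the three estimators, the straight-line fit: SDR input power is modeled as a linear function of the digital AGC reading and temperature over the narrow range where the response is approximately linear, with coefficients from ordinary least squares on characterization data. The variable names, regressor choice and units (dBm) are assumptions.

    ```python
    import numpy as np

    def fit_power_estimator(agc_digital, temperature, power_dbm):
        """Fit input power as a linear function of digital AGC and
        temperature by ordinary least squares on characterization data."""
        agc_digital = np.asarray(agc_digital, dtype=float)
        temperature = np.asarray(temperature, dtype=float)
        X = np.column_stack([np.ones_like(agc_digital), agc_digital, temperature])
        coef, *_ = np.linalg.lstsq(X, np.asarray(power_dbm, dtype=float), rcond=None)
        return coef                             # [intercept, AGC slope, T slope]

    def estimate_power(coef, agc_digital, temperature):
        return coef[0] + coef[1] * agc_digital + coef[2] * temperature
    ```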

  13. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  14. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    PubMed

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Variable-spot ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Lixiang; Qiu, Keqiang; Fu, Shaojun

    2016-03-01

    This paper introduces a new scheme of ion beam figuring (IBF), or rather variable-spot IBF, which is conducted at a constant scanning velocity with variable-spot ion beam collimated by a variable diaphragm. It aims at improving the reachability and adaptation of the figuring process within the limits of machine dynamics by varying the ion beam spot size instead of the scanning velocity. In contrast to the dwell time algorithm in the conventional IBF, the variable-spot IBF adopts a new algorithm, which consists of the scan path programming and the trajectory optimization using pattern search. In this algorithm, instead of the dwell time, a new concept, integral etching time, is proposed to interpret the process of variable-spot IBF. We conducted simulations to verify its feasibility and practicality. The simulation results indicate the variable-spot IBF is a promising alternative to the conventional approach.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  17. Scalable 3D image conversion and ergonomic evaluation

    NASA Astrophysics Data System (ADS)

    Kishi, Shinsuke; Kim, Sang Hyun; Shibata, Takashi; Kawai, Takashi; Häkkinen, Jukka; Takatalo, Jari; Nyman, Göte

    2008-02-01

    Digital 3D cinema has recently become popular and a number of high-quality 3D films have been produced. However, in contrast with advances in 3D display technology, it has been pointed out that there is a lack of suitable 3D content and content creators. Since 3D display methods and viewing environments vary widely, there is an expectation that high-quality content will be multi-purposed. On the other hand, there is increasing interest in the bio-medical effects of image content of various types and there are moves toward international standardization, so 3D content production needs to take into consideration safety and conformity with international guidelines. The aim of the authors' research is to contribute to the production and application of 3D content that is safe and comfortable to watch by developing a scalable 3D conversion technology. In this paper, the authors focus on the process of changing the screen size, examining a conversion algorithm and its effectiveness. The authors evaluated the visual load imposed during the viewing of various 3D content converted by the prototype algorithm, as compared with ideal conditions and with content expanded without conversion. Scheffé's paired comparison method was used for evaluation. To examine the effects of screen size reduction on viewers, changes in user impression and experience were elucidated using the IBQ methodology. The results of the evaluation are presented along with a discussion of the effectiveness and potential of the developed scalable 3D conversion algorithm and future research tasks.

  18. PINPIN a-Si:H based structures for X-ray image detection using the laser scanning technique

    NASA Astrophysics Data System (ADS)

    Fernandes, M.; Vygranenko, Y.; Vieira, M.

    2015-05-01

    Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large area array of amorphous silicon based optical sensors and switching thin film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low power laser beam and a sensing/switching PINPIN structure, thus resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge in a sequential way and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and the dependence on scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented.

  19. Evaluation of laser ablation crater relief by white light micro interferometer

    NASA Astrophysics Data System (ADS)

    Gurov, Igor; Volkov, Mikhail; Zhukova, Ekaterina; Ivanov, Nikita; Margaryants, Nikita; Potemkin, Andrey; Samokhvalov, Andrey; Shelygina, Svetlana

    2017-06-01

    A multi-view scanning method is suggested for assessing complicated surface relief with a white light interferometer. Peculiarities of the method are demonstrated on a special object in the form of a quadrangular pyramid cavity, which is formed during the measurement of the micro-hardness of materials using a hardness gauge. An algorithm for the joint processing of multi-view scanning results is developed that allows correct relief values to be recovered. Laser ablation craters were studied experimentally, and their relief was recovered using the developed method. It is shown that multi-view scanning reduces ambiguity when determining the local depth of the micro-relief of laser ablation craters. Results of experimental studies of the multi-view scanning method and the data processing algorithm are presented.

  20. Surface reconstruction from scattered data through pruning of unstructured grids

    NASA Technical Reports Server (NTRS)

    Maksymiuk, C. M.; Merriam, M. L.

    1991-01-01

    This paper describes an algorithm for reconstructing a surface from a randomly digitized object. Scan data (treated as a cloud of points) are first tessellated out to their convex hull using Delaunay triangulation. The line of sight between each surface point and the scanning device is traversed, and any tetrahedra pierced by it are removed. The remaining tetrahedra form an approximate solid model of the scanned object. Due to the inherently limited resolution of any scan, this algorithm requires two additional procedures to produce a smooth, polyhedral surface: one process removes long, narrow tetrahedra which span indentations in the surface between digitized points; the other smooths sharp edges. The results for a moderately resolved sample body and a highly resolved aircraft are displayed.
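
    A rough sketch of the carving step using scipy: the cloud is tessellated with Delaunay triangulation, each sensor-to-point segment is sampled, and every tetrahedron hit by a sample is deleted. Sampling the segment (rather than an exact segment/tetrahedron intersection test) and the sample count are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def carve_line_of_sight(points, sensor, n_samples=50):
        """Tessellate the point cloud to its convex hull, then delete every
        tetrahedron pierced by a sensor-to-point line of sight.
        """
        points = np.asarray(points, dtype=float)
        sensor = np.asarray(sensor, dtype=float)
        tri = Delaunay(points)
        pierced = set()
        t = np.linspace(0.02, 0.98, n_samples)[:, None]  # stay off the endpoints
        for p in points:
            samples = sensor + t * (p - sensor)          # points along the ray
            for s in tri.find_simplex(samples):
                if s >= 0:                               # -1 means outside hull
                    pierced.add(int(s))
        keep = [i for i in range(len(tri.simplices)) if i not in pierced]
        return tri.simplices[keep]    # remaining tetrahedra: approximate solid
    ```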

  1. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach which reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is related to genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneous identification of several linear cracks forming an array in an elastic medium by means of circular ultrasonic scanning.
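
    As a minimal sketch of the discrepancy-minimization idea (the paper's genetic-style operators are not reproduced here), a pure random global search over a box of crack parameters might look like this; all names are illustrative:

    ```python
    import numpy as np

    def random_global_search(discrepancy, low, high, n_iter=20000, seed=0):
        """Minimize a discrepancy functional by uniform random sampling of
        the parameter box [low, high]. A deliberately simple stand-in for
        the paper's genetic-style random search."""
        rng = np.random.default_rng(seed)
        low, high = np.asarray(low, float), np.asarray(high, float)
        best_x, best_f = None, np.inf
        for _ in range(n_iter):
            x = rng.uniform(low, high)
            f = discrepancy(x)
            if f < best_f:
                best_x, best_f = x, f
        return best_x, best_f

    # Usage on a toy functional with a known minimum at (1, 2):
    x_opt, f_opt = random_global_search(
        lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2, low=[-5, -5], high=[5, 5])
    ```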

  2. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans.

    PubMed

    Mendrik, Adriënne M; Vincken, Koen L; Kuijf, Hugo J; Breeuwer, Marcel; Bouvy, Willem H; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Persson, Mikael; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A; Vrooman, Henri A; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65-80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand.
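
    Of the three ranking metrics, the Dice overlap is the simplest to state; a minimal version for binary masks is sketched below (H95 and AVD additionally require surface-distance computations):

    ```python
    import numpy as np

    def dice_coefficient(seg, ref):
        """Dice overlap between a candidate segmentation and the reference
        standard, both binary masks; 1.0 means perfect agreement."""
        seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0
    ```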

  3. MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans

    PubMed Central

    Mendrik, Adriënne M.; Vincken, Koen L.; Kuijf, Hugo J.; Breeuwer, Marcel; Bouvy, Willem H.; de Bresser, Jeroen; Alansary, Amir; de Bruijne, Marleen; Carass, Aaron; El-Baz, Ayman; Jog, Amod; Katyal, Ranveer; Khan, Ali R.; van der Lijn, Fedde; Mahmood, Qaiser; Mukherjee, Ryan; van Opbroek, Annegreet; Paneri, Sahil; Pereira, Sérgio; Rajchl, Martin; Sarikaya, Duygu; Smedby, Örjan; Silva, Carlos A.; Vrooman, Henri A.; Vyas, Saurabh; Wang, Chunliang; Zhao, Liang; Biessels, Geert Jan; Viergever, Max A.

    2015-01-01

    Many methods have been proposed for tissue segmentation in brain MRI scans. The multitude of methods proposed complicates the choice of one method above others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) on 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and used as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked based on their overall performance to segment GM, WM, and CSF and evaluated using three evaluation metrics (Dice, H95, and AVD) and the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best performing method for the segmentation goal at hand. PMID:26759553

  4. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for orientation and are evaluated for the correctness of tie-point detection, computation time, and difficulty of implementation. The BRISK, FAST, MSER, SIFT, SURF, ASIFT, and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
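
    The key-point step translates directly to OpenCV. The sketch below uses ORB as a freely licensed stand-in for the detectors tested in the paper, and the panorama file names are placeholders:

    ```python
    import cv2

    # Detect and match key-points between two panoramic intensity images
    # rendered from TLS stations (file names are hypothetical).
    img1 = cv2.imread("station1_pano.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("station2_pano.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Cross-checked Hamming matching; surviving pairs become candidate tie
    # points whose 3D coordinates come from the depth map.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    ```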

  5. An Improved Perturb and Observe Algorithm for Photovoltaic Motion Carriers

    NASA Astrophysics Data System (ADS)

    Peng, Lele; Xu, Wei; Li, Liming; Zheng, Shubin

    2018-03-01

    An improved perturb-and-observe algorithm for photovoltaic motion carriers is proposed in this paper. The model of the proposed algorithm is derived using the Lambert W function and the tangent error method. The tracking performance of the proposed algorithm is tested in MATLAB simulations and in experiments on a photovoltaic system. The results demonstrate that the improved algorithm has fast tracking speed and high efficiency, increasing the energy conversion efficiency by nearly 8.2%.
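
    For reference, the baseline perturb-and-observe loop that the paper improves upon fits in a few lines; `measure_pv` and `set_voltage` are hypothetical hardware hooks, and the Lambert-W/tangent-error refinements of the proposed method are not reproduced here:

    ```python
    def perturb_and_observe(measure_pv, set_voltage, v_start, dv=0.1, steps=500):
        """Classic P&O maximum power point tracking: perturb the operating
        voltage, observe the power change, and keep the perturbation
        direction that increases power (reverse it otherwise)."""
        v, p_prev, direction = v_start, 0.0, 1.0
        for _ in range(steps):
            set_voltage(v)
            volt, curr = measure_pv()      # sampled panel voltage and current
            p = volt * curr
            if p < p_prev:                 # power dropped: reverse perturbation
                direction = -direction
            p_prev = p
            v += direction * dv
        return v
    ```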

  6. Code conversion from signed-digit to complement representation based on look-ahead optical logic operations

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Qian, Feng

    2001-11-01

    We present, for the first time to our knowledge, a generalized look-ahead logic algorithm for number conversion from signed-digit to complement representation. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using an electron-trapping device is employed, and experimental results are shown. This optical module is suitable for implementing complex logic functions in sum-of-products form. The algorithm and architecture are compatible with a general-purpose optoelectronic computing system.
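
    Numerically (ignoring the parallel look-ahead optical logic that is the paper's actual contribution), MSD-to-2's-complement conversion amounts to evaluating the signed digits and reducing modulo 2^n, as in this sketch:

    ```python
    def msd_to_twos_complement(digits, width):
        """Convert a modified signed-digit number (digits in {-1, 0, 1},
        most significant first) to a width-bit two's-complement string.
        Plain arithmetic, not the paper's look-ahead logic-array scheme."""
        value = 0
        for d in digits:
            value = 2 * value + d          # Horner evaluation in base 2
        return format(value % (1 << width), f"0{width}b")

    print(msd_to_twos_complement([1, -1, 0, 1], 8))  # 8 - 4 + 0 + 1 = 5 -> '00000101'
    ```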

  7. Implementation of trigonometric function using CORDIC algorithms

    NASA Astrophysics Data System (ADS)

    Mokhtar, A. S. N.; Ayub, M. I.; Ismail, N.; Daud, N. G. Nik

    2018-02-01

    In 1959, Jack E. Volder presented a new algorithm for the real-time solution of the equations arising in navigation systems. This algorithm was instrumental in replacing analog navigation systems with digital ones. The CORDIC (Coordinate Rotation Digital Computer) algorithm is used for the rapid calculation of elementary functions such as trigonometric functions, multiplication, division, and logarithms, and also for various conversions, such as rectangular-to-polar coordinate conversion and conversion between binary-coded formats. Today the CORDIC algorithm has many applications in communications, signal processing, 3D graphics, and other fields. This paper presents the implementation of trigonometric functions using the CORDIC algorithm in rotation mode for the circular coordinate system. The CORDIC technique is used to generate output angles in the range 0° to 90°, and an error analysis is performed. The results show that the average percentage error is about 0.042% for angles between 0° and 90°, but rises to about 45% for angles of 90° and above; the method is therefore accurate in the first quadrant. Mirror properties are used to obtain angles in the second, third, and fourth quadrants.
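
    A minimal floating-point model of rotation-mode CORDIC for the first quadrant (the regime where the reported error is lowest) is sketched below; fixed-point word lengths and the quadrant-mirroring step are omitted:

    ```python
    import math

    def cordic_sin_cos(theta, n=32):
        """Rotation-mode CORDIC for 0 <= theta <= pi/2: rotate the vector
        (K, 0) through micro-rotations of +/- atan(2^-i) until the residual
        angle reaches zero. Returns (cos(theta), sin(theta))."""
        angles = [math.atan(2.0 ** -i) for i in range(n)]
        k = 1.0
        for i in range(n):                 # accumulated scale factor K
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = k, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0.0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return x, y

    print(cordic_sin_cos(math.pi / 4))     # ~ (0.70710..., 0.70710...)
    ```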

  8. Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation

    PubMed Central

    Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong

    2013-01-01

    Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
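
    The lateral single-spot model described, a double Gaussian plus a modified Cauchy-Lorentz halo term, can be written as a radial profile like the sketch below; the exact parameterization and normalization are assumptions, not the paper's fitted form:

    ```python
    import numpy as np

    def spot_profile(r, w2, wc, sigma1, sigma2, gamma):
        """Radial single-spot dose profile: primary Gaussian + second
        Gaussian (weight w2) + Cauchy-Lorentz-like halo (weight wc).
        Each term is normalized to unit integral over the 2D plane."""
        g1 = (1.0 - w2 - wc) * np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
        g2 = w2 * np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
        halo = wc * gamma / (2 * np.pi * (r**2 + gamma**2) ** 1.5)
        return g1 + g2 + halo

    r = np.linspace(0.0, 50.0, 501)        # mm, illustrative grid
    d = spot_profile(r, w2=0.1, wc=0.02, sigma1=4.0, sigma2=8.0, gamma=15.0)
    ```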

  9. Multiphoton minimal inertia scanning for fast acquisition of neural activity signals

    NASA Astrophysics Data System (ADS)

    Schuck, Renaud; Go, Mary Ann; Garasto, Stefania; Reynolds, Stephanie; Dragotti, Pier Luigi; Schultz, Simon R.

    2018-04-01

    Objective. Multi-photon laser scanning microscopy provides a powerful tool for monitoring the spatiotemporal dynamics of neural circuit activity. It is, however, intrinsically a point scanning technique. Standard raster scanning enables imaging at subcellular resolution; however, acquisition rates are limited by the size of the field of view to be scanned. Recently developed scanning strategies such as travelling salesman scanning (TSS) have been developed to maximize cellular sampling rate by scanning only select regions in the field of view corresponding to locations of interest such as somata. However, such strategies are not optimized for the mechanical properties of galvanometric scanners. We thus aimed to develop a new scanning algorithm which produces minimal inertia trajectories, and compare its performance with existing scanning algorithms. Approach. We describe here the adaptive spiral scanning (SSA) algorithm, which fits a set of near-circular trajectories to the cellular distribution to avoid inertial drifts of galvanometer position. We compare its performance to raster scanning and TSS in terms of cellular sampling frequency and signal-to-noise ratio (SNR). Main Results. Using surrogate neuron spatial position data, we show that SSA acquisition rates are an order of magnitude higher than those for raster scanning and generally exceed those achieved by TSS for neural densities comparable with those found in the cortex. We show that this result also holds true for in vitro hippocampal mouse brain slices bath loaded with the synthetic calcium dye Cal-520 AM. The ability of TSS to ‘park’ the laser on each neuron along the scanning trajectory, however, enables higher SNR than SSA when all targets are precisely scanned. Raster scanning has the highest SNR but at a substantial cost in number of cells scanned. To understand the impact of sampling rate and SNR on functional calcium imaging, we used the Cramér-Rao Bound on evoked calcium traces recorded simultaneously with electrophysiology traces to calculate the lower bound estimate of the spike timing occurrence. Significance. The results show that TSS and SSA achieve comparable accuracy in spike time estimates compared to raster scanning, despite lower SNR. SSA is an easily implementable way for standard multi-photon laser scanning systems to gain temporal precision in the detection of action potentials while scanning hundreds of active cells.

  10. Change detection of medical images using dictionary learning techniques and principal component analysis.

    PubMed

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-07-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the ℓ1 and ℓ2 norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm, and we show the advantages of one norm over the other both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes in MR images and compare our results with those in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms previous methods. It detects clinical changes while ignoring changes due to the patient's position and other acquisition artifacts.

  11. USE OF POPULATION VIABILITY ANALYSIS AND RESERVE SELECTION ALGORITHMS IN REGIONAL CONSERVATION PLANS

    EPA Science Inventory

    Current reserve selection algorithms have difficulty evaluating connectivity and other factors
    necessary to conserve wide-ranging species in developing landscapes. Conversely, population viability analyses may incorporate detailed demographic data but often lack sufficient spa...

  12. Extending Three-Dimensional Weighted Cone Beam Filtered Backprojection (CB-FBP) Algorithm for Image Reconstruction in Volumetric CT at Low Helical Pitches

    PubMed Central

    Hsieh, Jiang; Nilsen, Roy A.; McOlash, Scott M.

    2006-01-01

    A three-dimensional (3D) weighted helical cone beam filtered backprojection (CB-FBP) algorithm (namely, the original 3D weighted helical CB-FBP algorithm) has already been proposed to reconstruct images from projection data acquired along a helical trajectory in angular ranges up to [0, 2π]. However, an overscan is usually employed in the clinic to reconstruct tomographic images with superior noise characteristics at the most challenging anatomic structures, such as head and spine, extremity imaging, and CT angiography as well. To obtain the best achievable noise characteristics or dose efficiency in a helical overscan, we extended the 3D weighted helical CB-FBP algorithm to handle helical pitches smaller than 1:1 (namely, the extended 3D weighted helical CB-FBP algorithm). By decomposing a helical overscan with an angular range of [0, 2π + Δβ] into a union of full scans corresponding to an angular range of [0, 2π], the extended 3D weighting function is a summation of all 3D weighting functions corresponding to each full scan. An experimental evaluation shows that the extended 3D weighted helical CB-FBP algorithm can improve the noise characteristics or dose efficiency of the 3D weighted helical CB-FBP algorithm at helical pitches smaller than 1:1, while its reconstruction accuracy and computational efficiency are maintained. It is believed that such an efficient CB reconstruction algorithm, providing superior noise characteristics or dose efficiency at low helical pitches, may find extensive application in CT medical imaging. PMID:23165031

  13. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates

    PubMed Central

    Zhang, Hao; Li, Xianqi; Park, Jewook; Li, An-Ping

    2017-01-01

    We present an image postprocessing framework for the scanning tunneling microscope (STM) to reduce the strong spurious oscillations and scan-line noise at fast scan rates while preserving features, allowing an order-of-magnitude increase in scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply an image registration method to each line to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model, which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow-scan data. For atomic-scale images, an additional first step that removes strong line-by-line background fluctuations and a fourth step that replaces the postprocessed image with its ranking map as the final atomic-resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast-scan data. PMID:29362664

  14. Arterial tree tracking from anatomical landmarks in magnetic resonance angiography scans

    NASA Astrophysics Data System (ADS)

    O'Neil, Alison; Beveridge, Erin; Houston, Graeme; McCormick, Lynne; Poole, Ian

    2014-03-01

    This paper reports on arterial tree tracking in fourteen Contrast Enhanced MRA volumetric scans, given the positions of a predefined set of vascular landmarks, by using the A* algorithm to find the optimal path for each vessel based on voxel intensity and a learnt vascular probability atlas. The algorithm is intended for use in conjunction with an automatic landmark detection step, to enable fully automatic arterial tree tracking. The scan is filtered to give two further images using the top-hat transform with 4mm and 8mm cubic structuring elements. Vessels are then tracked independently on the scan in which the vessel of interest is best enhanced, as determined from knowledge of typical vessel diameter and surrounding structures. A vascular probability atlas modelling expected vessel location and orientation is constructed by non-rigidly registering the training scans to the test scan using a 3D thin plate spline to match landmark correspondences, and employing kernel density estimation with the ground truth center line points to form a probability density distribution. Threshold estimation by histogram analysis is used to segment background from vessel intensities. The A* algorithm is run using a linear cost function constructed from the threshold and the vascular atlas prior. Tracking results are presented for all major arteries excluding those in the upper limbs. An improvement was observed when tracking was informed by contextual information, with particular benefit for peripheral vessels.
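
    The core search is standard A*. A toy 2D version over a cost map (with the intensity term and atlas prior assumed to be folded into `cost`) is sketched below; the real method operates on 3D volumes:

    ```python
    import heapq
    import numpy as np

    def a_star(cost, start, goal):
        """A* shortest path over a 2D cost map (low cost = vessel-like),
        4-connected, with a Euclidean heuristic. start/goal are (row, col)."""
        h = lambda p: float(np.hypot(p[0] - goal[0], p[1] - goal[1]))
        frontier = [(h(start), 0.0, start)]
        parent, g_best = {start: None}, {start: 0.0}
        while frontier:
            _, g, node = heapq.heappop(frontier)
            if node == goal:               # reconstruct the tracked path
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            y, x = node
            for nb in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if 0 <= nb[0] < cost.shape[0] and 0 <= nb[1] < cost.shape[1]:
                    ng = g + cost[nb]
                    if ng < g_best.get(nb, np.inf):
                        g_best[nb], parent[nb] = ng, node
                        heapq.heappush(frontier, (ng + h(nb), ng, nb))
        return None
    ```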

  15. A wavefront reconstruction method for 3-D cylindrical subsurface radar imaging.

    PubMed

    Flores-Tapia, Daniel; Thomas, Gabriel; Pistorius, Stephen

    2008-10-01

    In recent years, the use of radar technology has been proposed in a wide range of subsurface imaging applications. Traditionally, linear scan trajectories are used to acquire data in most subsurface radar applications. However, novel applications, such as breast microwave imaging and wood inspection, require the use of nonlinear scan trajectories in order to adjust to the geometry of the scanned area. This paper proposes a novel reconstruction algorithm for subsurface radar data acquired along cylindrical scan trajectories. The spectrum of the collected data is processed in order to locate the spatial origin of the target reflections and remove the spreading of the target reflections which results from the different signal travel times along the scan trajectory. The proposed algorithm was successfully tested using experimental data collected from phantoms that mimic high contrast subsurface radar scenarios, yielding promising results. Practical considerations such as spatial resolution and sampling constraints are discussed and illustrated as well.

  16. Using Information From Prior Satellite Scans to Improve Cloud Detection Near the Day-Night Terminator

    NASA Technical Reports Server (NTRS)

    Yost, Christopher R.; Minnis, Patrick; Trepte, Qing Z.; Palikonda, Rabindra; Ayers, Jeffrey K.; Spangenberg, Doulas A.

    2012-01-01

    With geostationary satellite data it is possible to have a continuous record of diurnal cycles of cloud properties for a large portion of the globe. Daytime cloud property retrieval algorithms are typically superior to nighttime algorithms because daytime methods utilize measurements of reflected solar radiation. However, reflected solar radiation is difficult to model accurately at high solar zenith angles, where the amount of incident radiation is small. Clear and cloudy scenes can exhibit very small differences in reflected radiation, and threshold-based cloud detection methods have more difficulty setting the proper thresholds for accurate cloud detection. Because top-of-atmosphere radiances are typically modeled more accurately outside the terminator region, information from previous scans can help guide cloud detection near the terminator. This paper presents an algorithm that uses cloud fraction and clear and cloudy infrared brightness temperatures from previous satellite scan times to improve the performance of a threshold-based cloud mask near the terminator. Comparisons of daytime, nighttime, and terminator cloud fraction derived from Geostationary Operational Environmental Satellite (GOES) radiance measurements show that the algorithm greatly reduces the number of false cloud detections and smooths the transition from the daytime to the nighttime cloud detection algorithm. Comparisons with Geoscience Laser Altimeter System (GLAS) data show that using this algorithm decreases the number of false detections by approximately 20 percentage points.

  17. A single scan skeletonization algorithm: application to medical imaging of trabecular bone

    NASA Astrophysics Data System (ADS)

    Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre

    2010-03-01

    Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points progressively while preserving the topology of the shape. Alternatively, the maxima of the local distance transform identify the skeleton and provide an equivalent way to calculate the medial axis. However, the skeleton obtained with this method is disconnected, so the points of the medial axis must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization with a single scan, thus allowing the processing of unbounded images. This method is applied to micro-scanner images of trabecular bone, with the aim of characterizing the bone micro-architecture in order to quantify bone integrity.

  18. Can genetic algorithms help virus writers reshape their creations and avoid detection?

    NASA Astrophysics Data System (ADS)

    Abu Doush, Iyad; Al-Saleh, Mohammed I.

    2017-11-01

    Different attack and defence techniques have evolved over time through the actions and reactions of the black-hat and white-hat communities. Encryption, polymorphism, metamorphism, and obfuscation are among the techniques used by attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation, and heuristics are used by the defence side. The antivirus (AV) is a vital security control used against a variety of threats. The AV mainly scans data against its database of virus signatures and reports a virus if a match is found. This paper seeks the minimal possible changes that can be made to a virus so that it appears normal when scanned by the AV. Brute-force search through all possible changes can be computationally expensive; instead, this paper applies a genetic algorithm to the problem. The proposed algorithm is tested on seven different malware instances. The results show that in all tested instances only a small change was enough to bypass the AV.

  19. Automated image-based colon cleansing for laxative-free CT colonography computer-aided polyp detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.

    2011-12-15

    Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged stool, accounting for heterogeneity and poor tagging, to be used in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust on fine details around folds, thin stool linings on the colonic wall, near polyps, and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without the stool subtraction module. Detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data, to an 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD, by approximately 15%.

  20. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high-resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. The time-activity curve was adopted to compare the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is thus useful for providing quality images under low count rates in dynamic studies with short scan times.

  1. MOLA II Laser Transmitter Calibration and Performance. 1.2

    NASA Technical Reports Server (NTRS)

    Afzal, Robert S.; Smith, David E. (Technical Monitor)

    1997-01-01

    The goal of the document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. This conversion is dependent on three variables, start detector counts, array heat sink temperature and start detector temperature. All these values are contained within the return packets. The conversion is applied to the GSFC Thermal Vacuum data as well as the in-space data to date and shows good correlation.
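
    As a sketch only: the abstract does not give the functional form of the conversion, so the linear temperature correction below, and every coefficient name, is purely illustrative of how a three-variable telemetry conversion might be coded:

    ```python
    def laser_energy_mj(start_counts, array_hs_temp, start_det_temp, c):
        """Hypothetical conversion of start-detector counts to laser pulse
        energy (mJ), with corrections for the array heat-sink and start-
        detector temperatures. c = (c0, c1, c2, c3) are made-up calibration
        coefficients, not MOLA II values."""
        c0, c1, c2, c3 = c
        return start_counts * (c0 + c1 * array_hs_temp + c2 * start_det_temp) + c3
    ```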

  2. Identification of damage in plates using full-field measurement with a continuously scanning laser Doppler vibrometer system

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ming; Xu, Y. F.; Zhu, W. D.

    2018-05-01

    An effective and reliable damage identification method for plates using a continuously scanning laser Doppler vibrometer (CSLDV) system is proposed. A new constant-speed scan algorithm is proposed to create a two-dimensional (2D) scan trajectory and automatically scan a whole plate surface. Full-field measurement of the plate can be achieved by applying the algorithm to the CSLDV system. Based on the new scan algorithm, the demodulation method is extended from one dimension for beams to two dimensions for plates to obtain a full-field operating deflection shape (ODS) of the plate from velocity response measured by the CSLDV system. The full-field ODS of an associated undamaged plate is obtained by using polynomials of proper orders to fit the corresponding full-field ODS from the demodulation method. A curvature damage index (CDI), using differences between curvatures of ODSs (CODSs) associated with ODSs obtained by the demodulation method and the polynomial fit, is proposed to identify damage. An auxiliary CDI, obtained by averaging CDIs at different excitation frequencies, is defined to further assist damage identification. An experiment on an aluminum plate with damage in the form of a 10.5% thickness reduction over a damage area of 0.86% of the whole scan area is conducted to investigate the proposed method. Six frequencies close to natural frequencies of the plate and one randomly selected frequency are used as sinusoidal excitation frequencies. Two 2D scan trajectories, i.e., a horizontally moving 2D scan trajectory and a vertically moving 2D scan trajectory, are used to obtain ODSs, CODSs, and CDIs of the plate. The damage is successfully identified near areas with consistently high CDI values at different excitation frequencies along the two 2D scan trajectories; the damage area is also identified by auxiliary CDIs.
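
    In spirit, the CDI compares curvatures (second spatial derivatives) of the measured and polynomial-fit ODSs; a minimal numerical version, with the Laplacian standing in for whatever curvature combination the paper actually uses, is sketched below:

    ```python
    import numpy as np

    def curvature_damage_index(ods_measured, ods_fit, dx, dy):
        """Squared difference between curvatures (here: Laplacians) of the
        demodulated ODS and its polynomial fit; large values flag damage.
        The paper's exact curvature definition may differ."""
        def laplacian(z):
            zxx = np.gradient(np.gradient(z, dx, axis=1), dx, axis=1)
            zyy = np.gradient(np.gradient(z, dy, axis=0), dy, axis=0)
            return zxx + zyy
        return (laplacian(ods_measured) - laplacian(ods_fit)) ** 2

    def auxiliary_cdi(cdis):
        """Auxiliary CDI: average the index over excitation frequencies."""
        return np.mean(cdis, axis=0)
    ```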

  3. An automatic approach for 3D registration of CT scans

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Saber, Eli; Dianat, Sohail; Vantaram, Sreenath Rao; Abhyankar, Vishwas

    2012-03-01

    CT (computed tomography) is a widely employed imaging modality in the medical field. Normally, a volume of CT scans is prescribed by a doctor when a specific region of the body (typically neck to groin) is suspected of being abnormal, and doctors are required to make professional diagnoses based upon the obtained datasets. In this paper, we propose an automatic registration algorithm that helps healthcare personnel automatically align corresponding scans from 'Study' to 'Atlas'. The proposed algorithm is capable of bringing both 'Atlas' and 'Study' to the same resolution through 3D interpolation. After retrieving the scanned slice volume in the 'Study' and the corresponding volume in the original 'Atlas' dataset, a 3D cross-correlation method is used to identify and register the various body parts.
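
    The matching step is essentially a 3D cross-correlation; a compact FFT-based sketch (mean-subtracted, left un-normalized for brevity) follows:

    ```python
    import numpy as np
    from scipy import fft

    def register_translation_3d(study, atlas):
        """Integer 3D offset of 'study' relative to 'atlas' via FFT-based
        cross-correlation, assuming both volumes were first interpolated
        to a common resolution. Returns a (dz, dy, dx) shift."""
        shape = [s + a - 1 for s, a in zip(study.shape, atlas.shape)]
        corr = fft.irfftn(
            fft.rfftn(study - study.mean(), shape)
            * np.conj(fft.rfftn(atlas - atlas.mean(), shape)),
            shape)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Indices past the midpoint wrap around to negative offsets.
        return tuple(p if p < n // 2 else p - n for p, n in zip(peak, shape))
    ```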

  4. Rare itemsets mining algorithm based on RP-Tree and spark framework

    NASA Astrophysics Data System (ADS)

    Liu, Sainan; Pan, Haoan

    2018-05-01

    To address the problem of rare-itemset mining in big data, this paper proposes a rare-itemset mining algorithm based on RP-Tree and the Spark framework. First, the data are arranged vertically according to the transaction identifier; to avoid scanning the entire dataset, the vertical datasets are divided into frequent vertical datasets and rare vertical datasets. Then, the RP-Tree algorithm is adopted to construct the frequent-pattern tree that contains rare items and to generate rare 1-itemsets. After that, the support of the itemsets is calculated by scanning the two vertical datasets, and finally an iterative process is used to generate the rare itemsets. Experiments show that the algorithm can effectively mine rare itemsets and offers a clear advantage in execution time.

  5. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  6. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes; only the Monte Carlo method in single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from that of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
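
    For illustration, one common algebraic least-squares ("Kåsa") circle fit is sketched below; the paper's optimal-circle refinement may use a different objective, so this is a plausible stand-in rather than the authors' solver:

    ```python
    import numpy as np

    def fit_circle_kasa(x, y):
        """Algebraic least-squares circle fit to a breast-height slice of a
        stem point cloud: solve x^2 + y^2 = a*x + b*y + c linearly, then
        recover the centre (a/2, b/2) and radius. Returns (xc, yc, DBH)."""
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        xc, yc = a / 2.0, b / 2.0
        radius = np.sqrt(c + xc**2 + yc**2)
        return xc, yc, 2.0 * radius
    ```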

  7. Development of an algorithm for controlling a multilevel three-phase converter

    NASA Astrophysics Data System (ADS)

    Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat

    2017-08-01

    This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of the transistors at each moment in time when constructing a computer model of a three-phase multilevel converter. The algorithm ensures in-phase operation of the three-phase converter and yields a sinusoidal voltage curve at the converter output.

  8. Suppression of motion-induced streak artifacts along chords in fan-beam BPF-reconstructions of motion-contaminated projection data

    NASA Astrophysics Data System (ADS)

    King, Martin; Xia, Dan; Yu, Lifeng; Pan, Xiaochuan; Giger, Maryellen

    2006-03-01

    Usage of the backprojection filtration (BPF) algorithm for reconstructing images from motion-contaminated fan-beam data may result in motion-induced streak artifacts, which appear in the direction of the chords on which images are reconstructed. These streak artifacts, which are most pronounced along chords tangent to the edges of the moving object, may be suppressed by use of the weighted BPF (WBPF) algorithm, which can exploit the inherent redundancies in fan-beam data. More specifically, reconstructions using full-scan and short-scan data can allow for substantial suppression of these streaks, whereas those using reduced-scan data can allow for partial suppression. Since multiple different reconstructions of the same chord can be obtained by varying the amount of redundant data used, we have laid the groundwork for a possible method to characterize the amount of motion encoded within the data used for reconstructing an image on a particular chord. Furthermore, since motion artifacts in WBPF reconstructions using full-scan and short-scan data appear similar to those in corresponding fan-beam filtered backprojection (FFBP) reconstructions for the cases performed in this study, the BPF and WBPF algorithms potentially may be used to arrive at a more fundamental characterization of how motion artifacts appear in FFBP reconstructions.

  9. An Evaluation of PC-Based Optical Character Recognition Systems.

    ERIC Educational Resources Information Center

    Schreier, E. M.; Uslan, M. M.

    1991-01-01

    The review examines six personal computer-based optical character recognition (OCR) systems designed for use by blind and visually impaired people. Considered are OCR components and terms, documentation, scanning and reading, command structure, conversion, unique features, accuracy of recognition, scanning time, speed, and cost. (DB)

  10. A radiative transfer model for sea surface temperature retrieval for the along-track scanning radiometer

    NASA Astrophysics Data System (ADS)

    ZáVody, A. M.; Mutlow, C. T.; Llewellyn-Jones, D. T.

    1995-01-01

    The measurements made by the along-track scanning radiometer are now converted routinely into sea surface temperature (SST). The details of the atmospheric model which had been used for deriving the SST algorithms are given, together with tables of the coefficients in the algorithms for the different SST products. The accuracy of the retrieval under normal conditions and the effect of errors in the model on the retrieved SST are briefly discussed.

  11. Objective Quality Assessment for Color-to-Gray Image Conversion.

    PubMed

    Ma, Kede; Zhao, Tiesong; Zeng, Kai; Wang, Zhou

    2015-12-01

    Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.
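
    The flavor of the three-term comparison can be conveyed with a global (whole-image) version patterned on SSIM; the actual C2G-SSIM works on local windows and adapts the combination to image type, so the sketch below is only an approximation:

    ```python
    import numpy as np

    def c2g_similarity(ref_luminance, converted_gray, c1=1e-4, c2=9e-4):
        """Global luminance/contrast/structure comparison between the
        reference color image's luminance channel and the C2G result;
        c1, c2 are small stabilizing constants as in SSIM."""
        mu_r, mu_g = ref_luminance.mean(), converted_gray.mean()
        sd_r, sd_g = ref_luminance.std(), converted_gray.std()
        cov = ((ref_luminance - mu_r) * (converted_gray - mu_g)).mean()
        lum = (2 * mu_r * mu_g + c1) / (mu_r**2 + mu_g**2 + c1)
        con = (2 * sd_r * sd_g + c2) / (sd_r**2 + sd_g**2 + c2)
        struct = (cov + c2 / 2) / (sd_r * sd_g + c2 / 2)
        return lum * con * struct
    ```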

  12. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. A reduction of cupping artifacts and an enhancement of image contrast have been demonstrated. The image contrast increased by a factor of about 2, and the image accuracy, in terms of root-mean-square error with respect to the fan-beam CT image, increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.

  13. Multispectral fluorescence image algorithms for detection of frass on mature tomatoes

    USDA-ARS?s Scientific Manuscript database

    A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...

  14. Whole-Body Computed Tomography-Based Body Mass and Body Fat Quantification: A Comparison to Hydrostatic Weighing and Air Displacement Plethysmography.

    PubMed

    Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steve T; Heiny, Eric L; Creer, Andrew R; Gibby, Wendell A

    We correlate and evaluate the accuracy of accepted anthropometric methods of percent body fat (%BF) quantification, namely hydrostatic weighing (HW) and air displacement plethysmography (ADP), against 2 automatic adipose tissue quantification methods using computed tomography (CT). Twenty volunteer subjects (14 men, 6 women) received head-to-toe CT scans. Hydrostatic weighing and ADP were obtained from 17 and 12 subjects, respectively. The CT data underwent conversion using 2 separate algorithms, namely the Schneider method and the Beam method, to convert Hounsfield units to their respective tissue densities. The overall mass and %BF of both methods were compared with HW and ADP. When comparing ADP to CT data using the Schneider method and the Beam method, correlations were r = 0.9806 and 0.9804, respectively. Paired t tests indicated there were no statistically significant biases. Additionally, the observed average differences in %BF between ADP and the Schneider method and the Beam method were 0.38% and 0.77%, respectively. The %BF measured from ADP, the Schneider method, and the Beam method all had significantly higher mean differences when compared with HW (3.05%, 2.32%, and 1.94%, respectively). We have shown that total body mass correlates remarkably well with both the Schneider method and the Beam method of mass quantification. Furthermore, %BF calculated with the Schneider method and Beam method CT algorithms correlates remarkably well with ADP. The application of these CT algorithms has utility in further research to accurately stratify risk factors with periorgan, visceral, and subcutaneous types of adipose tissue, and has the potential for significant clinical application.

  15. Quantification of pleural effusion on CT by simple measurement.

    PubMed

    Hazlinger, Martin; Ctvrtlik, Filip; Langova, Katerina; Herman, Miroslav

    2014-01-01

    To find the simplest method for quantifying pleural effusion volume from CT scans, seventy pleural effusions found on chest CT examination in 50 consecutive adult patients with free pleural effusion were included. The volume of pleural effusion was calculated from a three-dimensional reconstruction of CT scans. Planar measurements were made on CT scans and on their two-dimensional reconstructions in the sagittal plane and at three levels on transversal scans. Individual planar measurements were statistically compared with the detected volume of pleural effusion. Regression equations, the averaged absolute difference between observed and predicted values, and determination coefficients were found for all measurements and their combinations. A tabular expression of the best single planar measurement was created. The most accurate correlation between the volume and a single planar measurement was found for the dimension b measured perpendicular to the parietal pleura on the transversal scan with the greatest depth of effusion. Conversion of this measurement to the corresponding volume is possible via the regression equation: Volume = 0.365b³ − 4.529b² + 159.723b − 88.377. We devised a simple method of converting a single planar measurement on a CT scan to the volume of pleural effusion. The tabular expression of our equation can be easily and effectively used in routine practice.
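
    The published regression converts directly to code; note that the abstract does not restate the units of the depth measurement b, so they are left to the original paper:

    ```python
    def effusion_volume(b):
        """Pleural effusion volume from the single planar depth b, using
        the cubic regression quoted in the abstract (units as in the
        original paper)."""
        return 0.365 * b**3 - 4.529 * b**2 + 159.723 * b - 88.377
    ```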

  16. An open library of CT patient projection data

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Leng, Shuai; Yu, Lifeng; Holmes, David; Fletcher, Joel; McCollough, Cynthia

    2016-03-01

    Lack of access to projection data from patient CT scans is a major limitation for development and validation of new reconstruction algorithms. To meet this critical need, we are building a library of CT patient projection data in an open and vendor-neutral format, DICOM-CT-PD, which is an extended DICOM format that contains sinogram data, acquisition geometry, patient information, and pathology identification. The library consists of scans of various types, including head scans, chest scans, abdomen scans, electrocardiogram (ECG)-gated scans, and dual-energy scans. For each scan, three types of data are provided, including DICOM-CT-PD projection data at various dose levels, reconstructed CT images, and a free-form text file. Several instructional documents are provided to help the users extract information from DICOM-CT-PD files, including a dictionary file for the DICOM-CT-PD format, a DICOM-CT-PD reader, and a user manual. Radiologist detection performance based on the reconstructed CT images is also provided. So far 328 head cases, 228 chest cases, and 228 abdomen cases have been collected for potential inclusion. The final library will include a selection of 50 head, chest, and abdomen scans each from at least two different manufacturers, and a few ECG-gated scans and dual-source, dual-energy scans. It will be freely available to academic researchers, and is expected to greatly facilitate the development and validation of CT reconstruction algorithms.

  17. Feedback Circuit among INK4 Tumor Suppressors Constrains Human Glioblastoma Development

    PubMed Central

    Wiedemeyer, Ruprecht; Brennan, Cameron; Heffernan, Timothy P.; Xiao, Yonghong; Mahoney, John; Protopopov, Alexei; Zheng, Hongwu; Bignell, Graham; Furnari, Frank; Cavenee, Webster K.; Hahn, William C.; Ichimura, Koichi; Collins, V. Peter; Chu, Gerald C.; Stratton, Michael R.; Ligon, Keith L.; Futreal, P. Andrew; Chin, Lynda

    2008-01-01

    Summary We have developed a nonheuristic genome topography scan (GTS) algorithm to characterize the patterns of genomic alterations in human glioblastoma (GBM), identifying frequent p18INK4C and p16INK4A codeletion. Functional reconstitution of p18INK4C in GBM cells null for both p16INK4A and p18INK4C resulted in impaired cell-cycle progression and tumorigenic potential. Conversely, RNAi-mediated depletion of p18INK4C in p16INK4A-deficient primary astrocytes or established GBM cells enhanced tumorigenicity in vitro and in vivo. Furthermore, acute suppression of p16INK4A in primary astrocytes induced a concomitant increase in p18INK4C. Together, these findings uncover a feedback regulatory circuit in the astrocytic lineage and demonstrate a bona fide tumor suppressor role for p18INK4C in human GBM wherein it functions cooperatively with other INK4 family members to constrain inappropriate proliferation. PMID:18394558

  18. Calibration for single multi-mode fiber digital scanning microscopy imaging system

    NASA Astrophysics Data System (ADS)

    Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong

    2015-11-01

    The single multimode fiber (MMF) digital scanning imaging system represents a development trend for modern endoscopes. We concentrate on the calibration method for this imaging system. The calibration comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. The adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits: high speed, a small amount of computation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spots is called the couple factor. We set up the calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher at the center than at the edge.

  19. Alternative method for VIIRS Moon in space view process

    NASA Astrophysics Data System (ADS)

    Anderson, Samuel; Chiang, Kwofu V.; Xiong, Xiaoxiong

    2013-09-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) is a radiometric sensing instrument currently operating onboard the Suomi National Polar-orbiting Partnership (S-NPP) spacecraft. It provides high spatial-resolution images of the emitted and reflected radiation from the Earth and its atmosphere in 22 spectral bands (16 moderate resolution bands M1-M16, 5 imaging bands I1-I5, and 1 day/night pan band DNB) spanning the visible and infrared wavelengths from 412 nm to 12 μm. Just prior to each scan it makes of the Earth, the VIIRS instrument makes a measurement of deep space to serve as a background reference. These space view (SV) measurements form a crucial input to the VIIRS calibration process and are a major determinant of its accuracy. On occasion, the orientation of the Suomi NPP spacecraft coincides with the position of the moon in such a fashion that the SV measurements include light from the moon, rendering the SV measurements unusable for calibration. This paper investigates improvements to the existing baseline SV data processing algorithm of the Sensor Data Record (SDR) processing software. The proposed method makes use of a Moon-in-SV detection algorithm that identifies moon-contaminated SV data on a scan-by-scan basis. Use of this algorithm minimizes the number of SV scans that are rejected initially, so that subsequent substitution processes are always able to find alternative substitute SV scans in the near vicinity of detected moon-contaminated scans.

  20. Training-based descreening.

    PubMed

    Siddiqui, Hasib; Bouman, Charles A

    2007-03-01

    Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.

  1. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.

  2. Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions

    DOE PAGES

    Ophus, Colin; Ciston, Jim; Nelson, Chris T.

    2015-12-10

    Unwanted motion of the probe with respect to the sample is a ubiquitous problem in scanning probe and scanning transmission electron microscopies, causing both linear and nonlinear artifacts in experimental images. We have designed a procedure to correct these artifacts by using orthogonal scan pairs to align each measurement line-by-line along the slow scan direction, by fitting contrast variation along the lines. We demonstrate the accuracy of our algorithm on both synthetic and experimental data and provide an implementation of our method.
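
    The published procedure fits contrast variation along the lines; a much-simplified, integer-shift illustration of the line-by-line alignment idea (brute-force correlation against the orthogonally scanned reference, assumed already resampled onto the same grid) might look like:

        import numpy as np

        def align_slow_scan_lines(image, ref, max_shift=8):
            """Shift each slow-scan line of `image` to best match the
            corresponding line of `ref` (the orthogonally acquired image).

            Integer shifts only; the published method instead fits smooth
            contrast variation along the lines to capture nonlinear drift.
            """
            out = np.empty_like(image)
            shifts = np.arange(-max_shift, max_shift + 1)
            for i, line in enumerate(image):
                # Brute-force mean-subtracted correlation over candidate shifts.
                scores = [np.dot(np.roll(line, s) - line.mean(),
                                 ref[i] - ref[i].mean()) for s in shifts]
                out[i] = np.roll(line, shifts[np.argmax(scores)])
            return out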

  3. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning data without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  4. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning data without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  5. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
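
    The wavelet packet shrinkage component, used as the denoising step inside the splitting iteration, can be sketched with PyWavelets (the wavelet, decomposition level, and threshold value are illustrative assumptions, and the paper's estimator may differ):

        import pywt

        def wavelet_packet_shrink(img, wavelet="db4", level=2, thresh=0.05):
            """Soft-threshold 2D wavelet-packet detail coefficients of `img`."""
            wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet,
                                      mode="symmetric", maxlevel=level)
            for node in wp.get_level(level):       # all subbands at `level`
                if node.path != "a" * level:       # keep the approximation band
                    node.data = pywt.threshold(node.data, thresh, mode="soft")
            return wp.reconstruct(update=False)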

  6. Basic Geometric Support of Systems for Earth Observation from Geostationary and Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.

    2017-12-01

    A set of standardized models and algorithms for the geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of the high-precision ground calibration of the imaging equipment using reference objects, as well as issues of the flight calibration and refinement of geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies is performed in the calibration of sensors for spacecraft of the Electro-L series and during the simulation of the prospective Arktika system.

  7. Cardiac motion correction based on partial angle reconstructed images in x-ray CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr

    2015-05-15

    Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate for remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of the two conjugate PAR images. To evaluate the proposed algorithm, digital XCAT and physical dynamic cardiac phantom datasets are used. The XCAT phantom datasets were generated with heart rates of 70 and 100 bpm, respectively, by assuming a system rotation time of 300 ms. A physical dynamic cardiac phantom was scanned using a slowly rotating XCT system so that the effective heart rate would be 70 bpm for a system rotation speed of 300 ms. Results: In the XCAT phantom experiment, motion-compensated 3D images obtained from the proposed algorithm show coronary arteries with fewer motion artifacts for all phases. Moreover, object boundaries contaminated by motion are well restored. Even though object positions and boundary shapes are still somewhat different from the ground truth in some cases, the authors see that the visibility of coronary arteries is improved noticeably and motion artifacts are reduced considerably. The physical phantom study also shows that the visual quality of motion-compensated images is greatly improved. Conclusions: The authors propose a novel PAR image-based cardiac motion estimation and compensation algorithm. The algorithm requires an angular scan range of less than 360°. The excellent performance of the proposed algorithm is illustrated by using digital XCAT and physical dynamic cardiac phantom datasets.

  8. Automatic estimation of detector radial position for contoured SPECT acquisition using CT images on a SPECT/CT system.

    PubMed

    Liu, Ruijie Rachel; Erwin, William D

    2006-08-01

    An algorithm was developed to estimate noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Simulated male abdominal (arms up), male head and neck (arms down) and female chest (arms down) anthropomorphic phantom, and ten patient, medium-energy SPECT/CT scans were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to ≤22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detection sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as a calculated-to-true radius transform. Both methods were applied to ten patient scans. For -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) error in radial position for eight patient scans without truncation was 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating BF is superior to PM in the absence of truncation. For two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, due to FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.
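
    The core geometric idea, simulating inward detector motion until the thresholded contour is reached, can be sketched in 2D as follows (a simplification of the published method; the +2.75 cm offset is the PM value quoted above, the other parameters are assumptions):

        import numpy as np

        def nco_radii(ct_slice, spacing_cm, hu_threshold=-960.0,
                      n_views=60, offset_cm=2.75):
            """Estimate a noncircular-orbit detector radius per view angle.

            For each angle, the radius is the largest projection of any
            above-threshold pixel (patient, bed, clothing) onto the detector
            approach direction, plus a fixed sensor-to-detector-face offset.
            Assumes the object is within the CT FOV (no truncation handling).
            """
            ys, xs = np.nonzero(ct_slice > hu_threshold)
            cy, cx = (np.array(ct_slice.shape) - 1) / 2.0  # rotation center
            px = (xs - cx) * spacing_cm                    # centered coords, cm
            py = (ys - cy) * spacing_cm
            angles = np.linspace(0.0, 2 * np.pi, n_views, endpoint=False)
            radii = np.empty(n_views)
            for k, a in enumerate(angles):
                radii[k] = np.max(px * np.cos(a) + py * np.sin(a)) + offset_cm
            return angles, radii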

  9. [Management of patients with conversion disorder].

    PubMed

    Vermeulen, Marinus; Hoekstra, Jan; Kuipers-van Kooten, Mariëtte J; van der Linden, Els A M

    2014-01-01

    The symptoms of conversion disorder are not due to conscious simulation. There should be no doubt that the symptoms of conversion disorder are genuine, even if scans do not reveal any abnormalities. The management of patients with conversion disorder starts with an explanation of the diagnosis. The essence of this explanation is that patients first hear about what the diagnosis actually means and only after this about what they do not have. When explaining the diagnosis it is a good idea to use metaphors. The treatment of patients with conversion disorder is carried out together with a physical therapist. The collaboration of healthcare professionals who are involved in the treatment of a patient with conversion disorder should preferably be coordinated by the patient's general practitioner.

  10. Anisotropic field-of-view shapes for improved PROPELLER imaging☆

    PubMed Central

    Larson, Peder E.Z.; Lustig, Michael S.; Nishimura, Dwight G.

    2010-01-01

    The Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) method for magnetic resonance imaging data acquisition and reconstruction has the highly desirable property of being able to correct for motion during the scan, making it especially useful for imaging pediatric or uncooperative patients and diffusion imaging. This method nominally supports a circular field of view (FOV), but tailoring the FOV for noncircular shapes results in more efficient, shorter scans. This article presents new algorithms for tailoring PROPELLER acquisitions to the desired FOV shape and size that are flexible and precise. The FOV design also allows for rotational motion which provides better motion correction and reduced aliasing artifacts. Some possible FOV shapes demonstrated are ellipses, ovals and rectangles, and any convex, pi-symmetric shape can be designed. Standard PROPELLER reconstruction is used with minor modifications, and results with simulated motion presented confirm the effectiveness of the motion correction with these modified FOV shapes. These new acquisition design algorithms are simple and fast enough to be computed for each individual scan. Also presented are algorithms for further scan time reductions in PROPELLER echo-planar imaging (EPI) acquisitions by varying the sample spacing in two directions within each blade. PMID:18818039

  11. Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.

    PubMed

    Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen

    2016-12-20

    Improved ranging accuracy is obtained through the development of a novel ultrasonic sensor ranging algorithm which, unlike conventional ranging algorithms, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for localizing the ultrasonic sensor relative to the inclined plate reference frame. The ultrasonic sensor scanning method is then leveraged for the omni-directional localization of a mobile robot: the ultrasonic sensors are installed on the mobile robot and follow its spin, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front, up to ±6.21 mm for the lateral and up to ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.

  12. Experimental characterization of a direct conversion amorphous selenium detector with thicker conversion layer for dual-energy contrast-enhanced breast imaging.

    PubMed

    Scaduto, David A; Tousignant, Olivier; Zhao, Wei

    2017-08-01

    Dual-energy contrast-enhanced imaging is being investigated as a tool to identify and localize angiogenesis in the breast, a possible indicator of malignant tumors. This imaging technique requires that x-ray images are acquired at energies above the k-shell binding energy of an appropriate radiocontrast agent. Iodinated contrast agents are commonly used for vascular imaging, and require x-ray energies greater than 33 keV. Conventional direct conversion amorphous selenium (a-Se) flat-panel imagers for digital mammography show suboptimal absorption efficiencies at these higher energies. We use spatial-frequency domain image quality metrics to evaluate the performance of a prototype direct conversion flat-panel imager with a thicker a-Se layer, specifically fabricated for dual-energy contrast-enhanced breast imaging. Imaging performance was evaluated in a prototype digital breast tomosynthesis (DBT) system. The spatial resolution, noise characteristics, detective quantum efficiency, and temporal performance of the detector were evaluated for dual-energy imaging for both conventional full-field digital mammography (FFDM) and DBT. The zero-frequency detective quantum efficiency of the prototype detector is improved by approximately 20% over the conventional detector for higher energy beams required for imaging with iodinated contrast agents. The effect of oblique entry of x-rays on spatial resolution does increase with increasing photoconductor thickness, specifically for the most oblique views of a DBT scan. Degradation of spatial resolution due to focal spot motion was also observed. Temporal performance was found to be comparable to conventional mammographic detectors. Increasing the a-Se thickness in direct conversion flat-panel imagers results in better performance for dual-energy contrast-enhanced breast imaging. The reduction in spatial resolution due to oblique entry of x-rays is appreciable in the most extreme clinically relevant cases, but may not profoundly affect reconstructed images due to the algorithms and filters employed. Degradation to projection domain spatial resolution is thus outweighed by the improvement in detective quantum efficiency for high-energy x-rays. © 2017 American Association of Physicists in Medicine.

  13. Applications of multiscale change point detections to monthly stream flow and rainfall in Xijiang River in southern China, part I: correlation and variance

    NASA Astrophysics Data System (ADS)

    Zhu, Yuxiang; Jiang, Jianmin; Huang, Changxing; Chen, Yongqin David; Zhang, Qiang

    2018-04-01

    This article, as part I, introduces three algorithms and applies them to both the monthly stream flow and rainfall series of the Xijiang River, southern China. The three algorithms comprise (1) normalization of the probability distribution, (2) a scanning U test for change points in the correlation between two time series, and (3) a scanning F-test for change points in variances. The normalization algorithm adopts the quantile method to transform data from a non-normal into the normal probability distribution. The scanning U test and F-test have three common features: grafting the classical statistics onto the wavelet algorithm, adding corrections for independence into each statistical criterion at a given confidence level, and providing almost objective and automatic detection across multiple time scales. In addition, coherency analyses between the two series are also carried out for changes in variance. The application results show that the changes of the monthly discharge are still controlled by natural precipitation variations in Xijiang's fluvial system. Human activities may have disturbed the ecological balance to a certain extent and over shorter spells, but have not violated the natural relationships of correlation and variance changes so far.
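
    As a point of reference, a classical sliding-window F-test for a variance change point (without the wavelet grafting and independence corrections described above) can be sketched as:

        import numpy as np
        from scipy import stats

        def scan_f_test(x, window, alpha=0.05):
            """Slide two adjacent windows along x and F-test their variances.

            Returns (index, F, p) tuples where the two-sided p-value falls
            below alpha, i.e. candidate variance change points.
            """
            hits = []
            for t in range(window, len(x) - window):
                a, b = x[t - window:t], x[t:t + window]
                f = np.var(a, ddof=1) / np.var(b, ddof=1)
                p = 2 * min(stats.f.cdf(f, window - 1, window - 1),
                            stats.f.sf(f, window - 1, window - 1))
                if p < alpha:
                    hits.append((t, f, p))
            return hits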

  14. Multicast routing for wavelength-routed WDM networks with dynamic membership

    NASA Astrophysics Data System (ADS)

    Huang, Nen-Fu; Liu, Te-Lung; Wang, Yao-Tzung; Li, Bo

    2000-09-01

    Future broadband networks must support integrated services and offer flexible bandwidth usage. In our previous work, we explored an optical link control layer on top of the optical layer that enables bandwidth-on-demand service directly over wavelength division multiplexed (WDM) networks. Today, more and more applications and services, such as video-conferencing software and Virtual LAN service, require multicast support from the underlying networks. Currently, it is difficult to provide wavelength multicast over optical switches without optical/electronic conversions, although such conversions incur extra cost. In this paper, based on the proposed wavelength router architecture (equipped with ATM switches to offer O/E and E/O conversions when necessary), a dynamic multicast routing algorithm is proposed to furnish multicast services over WDM networks. The goal is to join a new group member into the multicast tree so that the cost, including the link cost and the optical/electronic conversion cost, is kept as low as possible. The effectiveness of the proposed wavelength router architecture as well as the dynamic multicast algorithm is evaluated by simulation.

  15. Identifying irregularly shaped crime hot-spots using a multiobjective evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xiaolan; Grubesic, Tony H.

    2010-12-01

    Spatial cluster detection techniques are widely used in criminology, geography, epidemiology, and other fields. In particular, spatial scan statistics are popular and efficient techniques for detecting areas of elevated crime or disease events. The majority of spatial scan approaches attempt to delineate geographic zones by evaluating the significance of clusters using likelihood ratio statistics tested with the Poisson distribution. While this can be effective, many scan statistics give preference to circular clusters, diminishing their ability to identify elongated and/or irregular shaped clusters. Although adjusting the shape of the scan window can mitigate some of these problems, both the significance of irregular clusters and their spatial structure must be accounted for in a meaningful way. This paper utilizes a multiobjective evolutionary algorithm to find clusters with maximum significance while quantitatively tracking their geographic structure. Crime data for the city of Cincinnati are utilized to demonstrate the advantages of the new approach and highlight its benefits versus more traditional scan statistics.
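
    The zone significance underlying most spatial scan statistics is the Poisson likelihood ratio; a minimal sketch of the per-zone score that an evolutionary search would maximize (the paper additionally tracks the geographic structure of the zone as a second objective):

        import numpy as np

        def poisson_scan_llr(c, n, C, N):
            """Kulldorff log-likelihood ratio for a candidate zone.

            c, n : observed case count and population inside the zone
            C, N : total cases and population in the whole study region
            """
            e = C * n / N                 # expected in-zone cases under the null
            if c <= e:
                return 0.0                # only elevated-risk zones score
            if c >= C:                    # degenerate: all cases inside the zone
                return c * np.log(c / e)
            return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))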

  16. Hyperspectral data acquisition and analysis in imaging and real-time active MIR backscattering spectroscopy

    NASA Astrophysics Data System (ADS)

    Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen

    2017-04-01

    In this work we present data analysis algorithms for detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm based on the adaptive target generation process proposed by Ren and Chang called the adaptive background generation process (ABGP) that generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated in measurement data obtained by two fundamentally different active MIR hyperspectral data acquisition devices. A hyperspectral image sensor applicable in static scenes takes a wavelength sequential approach to hyperspectral data acquisition, whereas a rapid wavelength-scanning single-element detector variant of the same principle uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for the application of the data analysis algorithms even in dynamic scenarios.

  17. Real time coarse orientation detection in MR scans using multi-planar deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bhatia, Parmeet S.; Reda, Fitsum; Harder, Martin; Zhan, Yiqiang; Zhou, Xiang Sean

    2017-02-01

    Automatically detecting anatomy orientation is an important task in medical image analysis. Specifically, the ability to automatically detect the coarse orientation of structures is useful to minimize the effort of fine/accurate orientation detection algorithms, to initialize non-rigid deformable registration algorithms or to align models to target structures in model-based segmentation algorithms. In this work, we present a deep convolutional neural network (DCNN)-based method for fast and robust detection of the coarse structure orientation, i.e., the hemisphere where the principal axis of a structure lies. That is, our algorithm predicts whether the principal orientation of a structure is in the northern hemisphere or the southern hemisphere, which we will refer to as UP and DOWN, respectively, in the remainder of this manuscript. The only assumption of our method is that the entire structure is located within the scan's field-of-view (FOV). To efficiently solve the problem in 3D space, we formulated it as a multi-planar 2D deep learning problem. In the training stage, a large number of coronal-sagittal slice pairs are constructed as 2-channel images to train a DCNN to classify whether a scan is UP or DOWN. During testing, we randomly sample a small number of coronal-sagittal 2-channel images and pass them through our trained network. Finally, coarse structure orientation is determined using majority voting. We tested our method on 114 elbow MR scans. Experimental results suggest that only five 2-channel images are sufficient to achieve a high success rate of 97.39%. Our method is also extremely fast and takes approximately 50 milliseconds per 3D MR scan. Our method is insensitive to the location of the structure in the FOV.
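
    The testing-stage logic, random slice-pair sampling followed by majority voting, can be sketched as follows (the classify callable stands in for the trained DCNN, and equal in-plane slice shapes are an assumption of this sketch):

        import numpy as np

        def coarse_orientation(volume, classify, n_pairs=5, rng=None):
            """Predict UP/DOWN by majority vote over random slice pairs.

            volume   : 3D array, assumed indexed (sagittal, coronal, axial)
                       with equal sagittal/coronal extents for stacking.
            classify : callable mapping a 2-channel 2D array to 1 (UP) or
                       0 (DOWN), standing in for the trained network.
            """
            rng = rng or np.random.default_rng()
            votes = 0
            for _ in range(n_pairs):
                sag = volume[rng.integers(volume.shape[0]), :, :]
                cor = volume[:, rng.integers(volume.shape[1]), :]
                votes += classify(np.stack([sag, cor]))  # 2-channel input
            return "UP" if votes > n_pairs / 2 else "DOWN"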

  18. Plane-Based Registration of Several Thousand Laser Scans on Standard Hardware

    NASA Astrophysics Data System (ADS)

    Wujanz, D.; Schaller, S.; Gielsdorf, F.; Gründig, L.

    2018-05-01

    The automatic registration of terrestrial laser scans appears to be a solved problem in science as well as in practice. However, this assumption is questionable, especially in the context of large projects where an object of interest is described by several thousand scans. A critical issue inherently linked to this task is memory management, especially if point-cloud-based registration approaches such as the ICP are being deployed. In order to process even thousands of scans on standard hardware, a plane-based registration approach is applied. As a first step, planar features are detected within the unregistered scans. This step drastically reduces the amount of data that has to be handled by the hardware. After determination of corresponding planar features, a pairwise registration procedure is initiated based on a graph that represents topological relations among all scans. For every feature, individual stochastic characteristics are computed that are consequently carried through the algorithm. Finally, a block adjustment is carried out that minimises the residuals between redundantly captured areas. The algorithm is demonstrated on a practical survey campaign featuring a historic town hall. In total, 4853 scans were registered on a standard PC with four processors (3.07 GHz) and 12 GB of RAM.

  19. Postinjection single photon transmission tomography with ordered-subset algorithms for whole-body PET imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Kinahan, P. E.; Brasse, D.; Comtat, C.; Townsend, D. W.

    2002-02-01

    We have evaluated the penalized ordered-subset transmission reconstruction (OSTR) algorithm for postinjection single photon transmission scanning. The OSTR algorithm of Erdogan and Fessler (1999) uses a more accurate model for transmission tomography than ordered-subsets expectation-maximization (OSEM) when OSEM is applied to the logarithm of the transmission data. The OSTR algorithm is directly applicable to postinjection transmission scanning with a single photon source, as emission contamination from the patient mimics the effect, in the original derivation of OSTR, of random coincidence contamination in a positron source transmission scan. Multiple noise realizations of simulated postinjection transmission data were reconstructed using OSTR, filtered backprojection (FBP), and OSEM algorithms. Due to the nonspecific task performance, or multiple uses, of the transmission image, multiple figures of merit were evaluated, including image noise, contrast, uniformity, and root mean square (rms) error. We show that: 1) the use of a three-dimensional (3-D) regularizing image roughness penalty with OSTR improves the tradeoffs in noise, contrast, and rms error relative to the use of a two-dimensional penalty; 2) OSTR with a 3-D penalty has improved tradeoffs in noise, contrast, and rms error relative to FBP or OSEM; and 3) the use of image standard deviation from a single realization to estimate the true noise can be misleading in the case of OSEM. We conclude that using OSTR with a 3-D penalty potentially allows for shorter postinjection transmission scans in single photon transmission tomography in positron emission tomography (PET) relative to FBP or OSEM reconstructed images with the same noise properties. This combination of singles+OSTR is particularly suitable for whole-body PET oncology imaging.

  20. WE-DE-BRA-09: Fast Megavoltage CT Imaging with Rapid Scan Time and Low Imaging Dose in Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magome, T; University of Tokyo Hospital, Tokyo; University of Minnesota, Minneapolis, MN

    Purpose: Megavoltage computed tomography (MVCT) imaging has been widely used for daily patient setup with helical tomotherapy (HT). One drawback of MVCT is its very long imaging time, owing to slow couch speed. The purpose of this study was to develop an MVCT imaging method allowing faster couch speeds, and to assess its accuracy for image guidance for HT. Methods: Three cadavers (closely mimicking the physiological and physical characteristics of patients) were scanned four times with couch speeds of 1, 2, 3, and 4 mm/s. The resulting MVCT images were reconstructed using an iterative reconstruction (IR) algorithm. The MVCT images were registered with kilovoltage CT images, and the registration errors were compared with the errors with the conventional filtered back projection (FBP) algorithm. Moreover, the fast MVCT imaging was tested in three cases of total marrow irradiation as a clinical trial. Results: Three-dimensional registration errors of the MVCT images reconstructed with the IR algorithm were significantly smaller (p < 0.05) than the errors of images reconstructed with the FBP algorithm at fast couch speeds (3, 4 mm/s). The scan time and imaging dose at a speed of 4 mm/s were reduced to 30% of those from a conventional coarse mode scan. For the patient imaging, a limited comparison of conventional MVCT (1.2 mm/s) and fast MVCT (3 mm/s) showed acceptably reduced imaging time and dose while remaining usable for anatomical registration. Conclusion: Fast MVCT with the IR algorithm may be a clinically feasible alternative for rapid 3D patient localization. This technique may also be useful for calculating daily dose distributions or organ motion analyses in HT treatment over a wide area.

  1. Fast and Robust STEM Reconstruction in Complex Environments Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.

    2016-06-01

    Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. This algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. This algorithm is further accelerated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% already retains the stem fitting quality while reducing the computation time significantly, by ~88%.

  2. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enables the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
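
    A line-by-line MNF can be sketched by accumulating second-moment statistics incrementally and re-deriving the transform when needed; the shift-difference noise estimator and the normalizations below are common choices and may differ from the paper's implementation:

        import numpy as np
        from scipy.linalg import eigh

        class LineMNF:
            """Accumulate signal/noise statistics line-by-line for MNF."""

            def __init__(self, n_bands):
                self.n = 0
                self.sum = np.zeros(n_bands)
                self.s_mom = np.zeros((n_bands, n_bands))  # raw 2nd moment
                self.n_mom = np.zeros((n_bands, n_bands))  # noise 2nd moment

            def add_line(self, line):
                # line: (n_pixels, n_bands) spectra from one scan line.
                self.n += line.shape[0]
                self.sum += line.sum(axis=0)
                self.s_mom += line.T @ line
                d = np.diff(line, axis=0) / np.sqrt(2.0)  # shift-difference noise
                self.n_mom += d.T @ d

            def transform(self, line):
                mu = self.sum / self.n
                cov = self.s_mom / self.n - np.outer(mu, mu)
                ncov = self.n_mom / max(self.n - 1, 1)  # approximate norm.
                # Generalized eigenproblem maximizing v'Cv / v'Nv (SNR).
                _, vecs = eigh(cov, ncov)
                return (line - mu) @ vecs[:, ::-1]  # descending-SNR components

    Denoising then keeps the high-SNR components, zeroes the rest, and inverts the transform; the incremental statistics are what remove the need for full image availability.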

  3. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    The development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating if it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, graph network generation) are not required, the proposed algorithm works very fast compared to the existing methods. The tree classification results are found reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While laser scanning point clouds give the opportunity to classify even very small trees, the accuracy of the results is reduced in low point density areas farther away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
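
    The trunk-seeding step, a point-density grid followed by local-maxima selection, can be sketched as follows (the cell size and density threshold are illustrative assumptions):

        import numpy as np
        from scipy.ndimage import maximum_filter

        def trunk_seeds(points, cell=0.5, min_density=50):
            """Detect candidate tree-trunk locations from a point cloud.

            A horizontal grid counts the points above each cell; cells that
            are local maxima and sufficiently dense are returned as seeds.
            """
            xy = points[:, :2]
            x0, y0 = xy.min(axis=0)
            ij = np.floor((xy - (x0, y0)) / cell).astype(int)
            grid = np.zeros(ij.max(axis=0) + 1)
            np.add.at(grid, (ij[:, 0], ij[:, 1]), 1.0)  # density "probability"
            peaks = (grid == maximum_filter(grid, size=3)) & (grid >= min_density)
            ii, jj = np.nonzero(peaks)
            return np.c_[x0 + (ii + 0.5) * cell, y0 + (jj + 0.5) * cell]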

  4. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single-energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate of the attenuation coefficient function of K2HPO4 solution.

  5. Graph cuts and neural networks for segmentation and porosity quantification in Synchrotron Radiation X-ray μCT of an igneous rock sample.

    PubMed

    Meneses, Anderson Alvarenga de Moura; Palheta, Dayara Bastos; Pinheiro, Christiano Jorge Gomes; Barroso, Regina Cely Rodrigues

    2018-03-01

    X-ray Synchrotron Radiation Micro-Computed Tomography (SR-µCT) allows better visualization in three dimensions with a higher spatial resolution, contributing to the discovery of aspects that would not be observable through conventional radiography. The automatic segmentation of SR-µCT scans is highly valuable due to its innumerable applications in the geological sciences, especially for the morphology, typology, and characterization of rocks. For a great number of µCT scan slices, a manual process of segmentation would be impractical, both for the time required and for the accuracy of the results. Aiming at the automatic segmentation of SR-µCT geological sample images, we applied and compared Energy Minimization via Graph Cuts (GC) algorithms and Artificial Neural Networks (ANNs), as well as the well-known K-means and Fuzzy C-Means algorithms. The Dice Similarity Coefficient (DSC), Sensitivity and Precision were the metrics used for comparison. Kruskal-Wallis and Dunn's tests were applied, and the best methods were the GC algorithms and ANNs (with Levenberg-Marquardt and Bayesian Regularization). For those algorithms, an approximate Dice Similarity Coefficient of 95% was achieved. Our results confirm that those algorithms can be used for segmentation and subsequent quantification of porosity in an igneous rock sample SR-µCT scan. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. SU-E-I-07: Response Characteristics and Signal Conversion Modeling of KV Flat-Panel Detector in Cone Beam CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yu; Cao, Ruifen; Pei, Xi

    2015-06-15

    Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters considering image quality and reduced radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, electric current, exposure time and number of frames. The averaged responses of a square area in the center were analyzed using Origin 8.0. The response characteristics for each scanning parameter were depicted by different fitting types. The transmission measured for 10 cm of water was compared to a Monte Carlo simulation. Using the quadratic calibration method, a series of images of variable-thickness water phantoms were acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values of the image center all decreased with the increase of the different image acquisition parameter presets. The fitting types adopted were linear fitting, quadratic polynomial fitting, Gauss fitting and logarithmic fitting, with fitting R-Square values of 0.992, 0.995, 0.997 and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment shows that the radiological thickness change prediction error was in the range of (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases with the increase of the different scanning parameters. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could effectively be used for tumor shape change and radiological thickness prediction. Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138)

  7. An improved principal component analysis based region matching method for fringe direction estimation

    NASA Astrophysics Data System (ADS)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for the conversion of orientation to direction in mask areas is computationally heavy and unoptimized. We propose an improved PCA based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast and optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used for refinement of the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.
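
    Orientation estimation of this kind is commonly done by PCA of local gradient moments, i.e., the structure tensor; a minimal sketch follows (the region matching described above then resolves the remaining π ambiguity from orientation to direction):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fringe_orientation(img, sigma=4.0):
            """Estimate local fringe orientation from smoothed gradient moments.

            The principal eigenvector of the local structure tensor gives the
            direction of greatest intensity change, i.e. the fringe normal;
            the returned angle is defined modulo pi.
            """
            gy, gx = np.gradient(np.asarray(img, float))
            jxx = gaussian_filter(gx * gx, sigma)  # structure tensor entries,
            jyy = gaussian_filter(gy * gy, sigma)  # averaged over a local
            jxy = gaussian_filter(gx * gy, sigma)  # Gaussian neighborhood
            # Closed-form angle of the principal eigenvector.
            return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)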

  8. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied to several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and, consequently, on setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
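
    The panorama construction exploits the fixed angular increments of the scan; a minimal sketch that bins scanner-centered points into an azimuth/elevation image (the resolution values are assumptions):

        import numpy as np

        def scan_to_panorama(xyz, values, h_res=2048, v_res=512):
            """Project scanner-centered points onto an azimuth/elevation grid.

            xyz    : (n, 3) point coordinates with the scanner at the origin.
            values : per-point layer to rasterize (e.g. intensity or range).
            """
            x, y, z = xyz.T
            az = np.arctan2(y, x)                   # azimuth in [-pi, pi)
            el = np.arctan2(z, np.hypot(x, y))      # elevation in [-pi/2, pi/2]
            col = ((az + np.pi) / (2 * np.pi) * (h_res - 1)).astype(int)
            row = ((np.pi / 2 - el) / np.pi * (v_res - 1)).astype(int)
            pano = np.full((v_res, h_res), np.nan)
            pano[row, col] = values                 # last point wins per pixel
            return pano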

  9. The GPM Common Calibrated Brightness Temperature Product

    NASA Technical Reports Server (NTRS)

    Stout, John; Berg, Wesley; Huffman, George; Kummerow, Chris; Stocker, Erich

    2005-01-01

    The Global Precipitation Measurement (GPM) project will provide a core satellite carrying the GPM Microwave Imager (GMI) and will use microwave observations from a constellation of other satellites. Each partner with a satellite in the constellation will have a calibration that meets its own requirements and will decide on the format in which to archive its brightness temperature (Tb) record in GPM. However, GPM multi-sensor precipitation algorithms need to input intercalibrated Tb's in order to avoid differences among sensors introducing artifacts into the longer term climate record of precipitation. The GPM Common Calibrated Brightness Temperature Product is intended to address this problem by providing intercalibrated Tb data, called "Tc" data, where the "c" stands for common. The precipitation algorithms require a Tc file format that is both generic and flexible enough to accommodate the different passive microwave instruments. The format will provide detailed information on the processing history in order to allow future researchers to have a record of what was done. The format will be simple, including the main items of scan time, latitude, longitude, and Tc. It will also provide spacecraft orientation, spacecraft location, orbit, and instrument scan type (cross-track or conical). Another simplification is to store data as real numbers, avoiding the ambiguity of scaled data. Finally, units and descriptions will be provided in the product. The format is built on the concept of a swath, which is a series of scans that have common geolocation and common scan geometry. Scan geometry includes pixels per scan, sensor orientation, scan type, and incidence angles. The Tc algorithm and data format are being tested using the pre-GPM Precipitation Processing System (PPS) software to generate formats and I/O routines. In the test, data from SSM/I, TMI, AMSR-E, and WindSat are being processed and written as Tc products.

  10. Combined endeavor of Neutrosophic Set and Chan-Vese model to extract accurate liver image from CT scan.

    PubMed

    Siri, Sangeeta K; Latte, Mrityunjaya V

    2017-11-01

    Many different diseases can occur in the liver, including infections such as hepatitis, cirrhosis, cancer and the adverse effects of medication or toxins. The foremost stage in computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient and the presence of noise. A novel approach is proposed to meet the challenges in extracting the exact liver image from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) CT scan image transformation to the Neutrosophic Set (NS) and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the true subset (T), the false subset (F) and the indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
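
    A commonly used neutrosophic image transform (the paper's "new structure" may differ in detail) computes T from the normalized local mean, I from the deviation from that mean, and F = 1 - T; a minimal sketch:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def to_neutrosophic(img, win=5):
            """Map a grayscale image into T (true), I (indeterminacy) and
            F (false) membership subsets, one common formulation."""
            g = np.asarray(img, float)
            gbar = uniform_filter(g, size=win)                 # local mean
            T = (gbar - gbar.min()) / (np.ptp(gbar) + 1e-12)   # normalized mean
            delta = np.abs(g - gbar)                           # inhomogeneity
            I = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
            F = 1.0 - T
            return T, I, F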

  11. Techniques used for the analysis of oculometer eye-scanning data obtained from an air traffic control display

    NASA Technical Reports Server (NTRS)

    Crawford, Daniel J.; Burdette, Daniel W.; Capron, William R.

    1993-01-01

    The methodology and techniques used to collect and analyze look-point position data from a real-time ATC display-format comparison experiment are documented. That study compared the delivery precision and controller workload of three final approach spacing aid display formats. Using an oculometer, controller look-point position data were collected, associated with gaze objects (e.g., moving aircraft) on the ATC display, and analyzed to determine eye-scan behavior. The equipment involved and algorithms for saving, synchronizing with the ATC simulation output, and filtering the data are described. Target (gaze object) and cross-check scanning identification algorithms are also presented. Data tables are provided of total dwell times, average dwell times, and cross-check scans. Flow charts, block diagrams, file record descriptors, and source code are included. The techniques and data presented are intended to benefit researchers in other studies that incorporate non-stationary gaze objects and oculometer equipment.

  12. Generic simulation of multi-element ladar scanner kinematics in USU LadarSIM

    NASA Astrophysics Data System (ADS)

    Omer, David; Call, Benjamin; Pack, Robert; Fullmer, Rees

    2006-05-01

    This paper presents a generic simulation model for a ladar scanner with up to three scan elements, each having a steering, stabilization and/or pattern-scanning role. Of interest is the development of algorithms that automatically generate commands to the scan elements given beam-steering objectives out of the ladar aperture, and the base motion of the sensor platform. First, a straight-forward single-element body-fixed beam-steering methodology is presented. Then a unique multi-element redirective and reflective space-fixed beam-steering methodology is explained. It is shown that standard direction cosine matrix decomposition methods fail when using two orthogonal, space-fixed rotations, thus demanding the development of a new algorithm for beam steering. Finally, a related steering control methodology is presented that uses two separate optical elements mathematically combined to determine the necessary scan element commands. Limits, restrictions, and results on this methodology are presented.

  13. Development and Validation of an Algorithm to Identify Patients with Multiple Myeloma Using Administrative Claims Data.

    PubMed

    Princic, Nicole; Gregory, Chris; Willson, Tina; Mahue, Maya; Felici, Diana; Werther, Winifred; Lenhart, Gregory; Foley, Kathleen A

    2016-01-01

    The objective was to expand on prior work by developing and validating a new algorithm to identify multiple myeloma (MM) patients in administrative claims. Two files were constructed to select MM cases from MarketScan Oncology Electronic Medical Records (EMR) and controls from the MarketScan Primary Care EMR during January 1, 2000-March 31, 2014. Patients were linked to MarketScan claims databases, and files were merged. Eligible cases were age ≥18, had a diagnosis and visit for MM in the Oncology EMR, and were continuously enrolled in claims for ≥90 days preceding and ≥30 days after diagnosis. Controls were age ≥18, had ≥12 months of overlap in claims enrollment (observation period) in the Primary Care EMR and ≥1 claim with an ICD-9-CM diagnosis code of MM (203.0x) during that time. Controls were excluded if they had chemotherapy; stem cell transplant; or text documentation of MM in the EMR during the observation period. A split sample was used to develop and validate algorithms. A maximum of 180 days prior to and following each MM diagnosis was used to identify events in the diagnostic process. Of 20 algorithms explored, the baseline algorithm of 2 MM diagnoses and the 3 best performing were validated. Values for sensitivity, specificity, and positive predictive value (PPV) were calculated. Three claims-based algorithms were validated with ~10% improvement in PPV (87-94%) over prior work (81%) and the baseline algorithm (76%) and can be considered for future research. Consistent with prior work, it was found that MM diagnoses before and after tests were needed.

  14. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data and result in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill up the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisition (scan) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur, were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in the reduction of the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications. The proposed LRICR algorithm outperforms the multi-resolution scout-reconstruction method and the TV minimization based reconstruction for interior tomography in microscopic CT.

  15. Individual pore and interconnection size analysis of macroporous ceramic scaffolds using high-resolution X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerban, Saeed, E-mail: saeed.jerban@usherbrooke.ca

    2016-08-15

    The pore interconnection size of β-tricalcium phosphate scaffolds plays an essential role in the bone repair process. Although the μCT technique is widely used in the biomaterials community, it is rarely used to measure interconnection size because of the lack of suitable algorithms. In addition, the discrete nature of μCT introduces large systematic errors due to the convex geometry of interconnections. We proposed, verified and validated a novel pore-level algorithm to accurately characterize individual pores and interconnections. Specifically, pores and interconnections were isolated, labeled, and individually analyzed with high accuracy. The technique was verified thoroughly by visually inspecting and verifying over 3474 properties of randomly selected pores. This extensive verification process passed a one-percent accuracy criterion. Scanning errors inherent in the discretization, which lead to both dummy and significantly overestimated interconnections, were examined using computer-based simulations and additional high-resolution scanning. Accurate correction charts were then developed and used to reduce the scanning errors. Only after these corrections did the μCT and SEM-based results converge, validating the novel algorithm. Using the novel algorithm, materials scientists with access to all geometrical properties of individual pores and interconnections will have a more detailed and accurate description of the substitute architecture and a potentially deeper understanding of the link between geometric properties and biological interaction. - Highlights: •An algorithm is developed to individually analyze all pores and interconnections. •After pore isolation, the discretization errors in interconnections were corrected. •Dummy interconnections and overestimated sizes were due to thin material walls. •The isolating algorithm was verified through visual inspection (99% accurate). •After correcting for the systematic errors, the algorithm was validated successfully.

  16. An algorithm for improving the quality of structural images of turbid media in endoscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.

    2018-04-01

    An algorithm for reconstructing high-quality structural images in endoscopic optical coherence tomography (OCT) of biological tissue is described. The key features of the presented algorithm are: (1) raster scanning and averaging of adjacent A-scans and pixels; (2) minimization of the speckle level. The described algorithm can be used in gastroenterology, urology, gynecology, and otorhinolaryngology for in vivo and in situ diagnostics of mucous membranes and skin.
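
    The first feature, averaging adjacent A-scans and pixels to suppress speckle, amounts to a small local smoothing of each B-scan; a minimal numpy sketch follows, with illustrative window sizes.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def despeckle_bscan(bscan, n_ascans=3, n_pixels=3):
    """Average each pixel over `n_ascans` adjacent A-scans (columns)
    and `n_pixels` adjacent depth samples (rows) to reduce speckle.

    `bscan` is a 2-D array: rows = depth samples, columns = A-scans.
    Window sizes are illustrative, not the authors' values.
    """
    return uniform_filter(bscan.astype(float),
                          size=(n_pixels, n_ascans), mode="nearest")

# Toy B-scan: a bright layer corrupted by multiplicative speckle.
rng = np.random.default_rng(0)
clean = np.zeros((256, 128))
clean[100:110, :] = 1.0
noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
print(noisy.std(), despeckle_bscan(noisy).std())  # speckle variance drops
```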

  17. A photoacoustic imaging reconstruction method based on directional total variation with adaptive directivity.

    PubMed

    Wang, Jin; Zhang, Chen; Wang, Yuanyuan

    2017-05-30

    In photoacoustic tomography (PAT), total variation (TV) based iteration algorithms are reported to perform well in image reconstruction. However, the classical TV based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of image features. It is therefore of great significance to develop a new PAT reconstruction algorithm that effectively addresses this drawback of TV. In this paper, a directional total variation with adaptive directivity (DDTV) model-based PAT image reconstruction algorithm, which weightedly sums the image gradients based on the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter that evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns were chosen as phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans were conducted. The results show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in the quality of reconstructed images, with peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality. In-vitro experiments were performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than TV, with sharper image edges and clearer texture details. Both the numerical simulations and the in-vitro experiments confirm that the DDTV provides a significant quality improvement of PAT reconstructed images for various directivity patterns.

  18. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    PubMed

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning a provisional label to each foreground voxel, we assign one to each run. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
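
    A minimal voxel-based, two-scan label-equivalence sketch for 3-D binary arrays (6-connectivity, plain union-find) is given below; the published algorithms add an optimized order for checking mask voxels and a run-based variant that this sketch omits.

```python
import numpy as np

def label_3d(volume):
    """Two-scan connected-component labeling of a 3-D binary array
    with 6-connectivity, using union-find for label equivalences."""
    labels = np.zeros(volume.shape, dtype=np.int64)
    parent = [0]  # parent[k] = representative of provisional label k

    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]  # path halving
            k = parent[k]
        return k

    # First scan: assign provisional labels, record equivalences.
    for z, y, x in zip(*np.nonzero(volume)):
        neigh = [labels[z - 1, y, x] if z else 0,
                 labels[z, y - 1, x] if y else 0,
                 labels[z, y, x - 1] if x else 0]
        neigh = [find(n) for n in neigh if n]
        if not neigh:
            parent.append(len(parent))       # new provisional label
            labels[z, y, x] = len(parent) - 1
        else:
            root = min(neigh)
            labels[z, y, x] = root
            for n in neigh:                  # merge equivalent labels
                parent[n] = root

    # Second scan: replace provisional labels by their representatives.
    lut = np.array([find(k) for k in range(len(parent))])
    return lut[labels]

vol = np.zeros((3, 4, 4), dtype=bool)
vol[0, 0, :2] = vol[0, 1, 1] = vol[2, 3, 3] = True
print(np.unique(label_3d(vol)))  # [0 1 2]: background + two components
```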

  19. Advanced characterization study of commercial conversion and electrocoating structures on magnesium alloys AZ31B and ZE10A

    DOE PAGES

    Brady, Michael P.; Leonard, Donovan N.; Meyer, III, Harry M.; ...

    2016-03-31

    The local metal-coating interface microstructure and chemistry formed on commercial magnesium alloys Mg–3Al–1Zn (AZ31B) and Mg–1Zn–0.25Zr–<0.5Nd (ZE10A, ZEK100 type) were analyzed in the as-conversion-coated state, with a commercial hexafluoro-titanate/zirconate + organic polymer based treatment (Bonderite® 5200) or a commercial hexafluoro-zirconate + trivalent chromium (Cr3+) based treatment (Surtec® 650), and after the same conversion coatings were followed by electrocoating with an epoxy based coating, Cathoguard® 525. Characterization techniques included scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and cross-section scanning transmission electron microscopy (STEM). Corrosion behavior was assessed in room temperature saturated aqueous Mg(OH)2 solution with 1 wt.% NaCl. The goal of the effort was to assess the degree to which substrate alloy additions become enriched in the conversion coating, and how the conversion coating was affected by subsequent electrocoating. Key findings included the enrichment of Al from AZ31B and Zr from ZE10A into the conversion coating, with moderate corrosion resistance benefits for AZ31B when Al was incorporated. Varying degrees of increased porosity and modification of the initial conversion coating chemistry at the metal-coating interface were observed after electrocoating. These changes were postulated to result in degraded electrocoating protectiveness. These observations highlight the challenges of coating Mg and the need to tailor electrocoating in light of potential degradation of the initial as-conversion-coated Mg alloy surface.

  20. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.

    PubMed

    Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero

    2008-09-01

    Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from those of airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic methods for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.

  1. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

    PubMed Central

    Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim

    2008-01-01

    This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements line thinning and the simple neighborhood method to perform vectorization. The model allows users to define criteria that are crucial to the vectorization process. With this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm was implemented as a computer program and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately. PMID:27879843

  2. Research and Development of Automated Eddy Current Testing for Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.

    2012-01-01

    Eddy current testing (ET) was used to scan the bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws that could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw reporting, and estimates of flaw dimensions were brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.

  3. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines.

    PubMed

    Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim

    2008-04-15

    This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements line thinning and the simple neighborhood method to perform vectorization. The model allows users to define criteria that are crucial to the vectorization process. With this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm was implemented as a computer program and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately.

  4. Validation of the Thematic Mapper radiometric and geometric correction algorithms

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1984-01-01

    The radiometric and geometric correction algorithms for Thematic Mapper are critical to subsequent successful information extraction. Earlier Landsat scanners, known as Multispectral Scanners, produce imagery which exhibits striping due to mismatched detector gains and biases. Thematic Mapper exhibits the same phenomenon at three levels: detector-to-detector, scan-to-scan, and multiscan striping. The cause of these variations has been traced to variations in the dark current of the detectors. An alternative formulation has been tested and shown to be very satisfactory. Unfortunately, the Thematic Mapper detectors exhibit saturation effects while viewing extensive cloud areas, which are not easily correctable. The geometric correction algorithm has been shown to be remarkably reliable. Only minor and modest improvements are indicated, and these have been shown to be effective.

  5. Autonomous control of roving vehicles for unmanned exploration of the planets

    NASA Technical Reports Server (NTRS)

    Yerazunis, S. W.

    1978-01-01

    The guidance of an autonomous rover for unmanned planetary exploration using a short range (0.5 - 3.0 meter) hazard detection system was studied. Experimental data derived from a one laser/one detector system were used in the development of improved algorithms for the guidance of the rover. The new algorithms, which account for the dynamic characteristics of the Rensselaer rover, can be applied to other rover concepts provided that the rover dynamic parameters are modified appropriately. The new algorithms will also be applicable to the advanced scanning system. The design of an elevation scanning laser/multisensor hazard detection system was completed. All mechanical and electronic hardware components, with the exception of the sensor optics and electronics, were constructed and tested.

  6. Quantum digital-to-analog conversion algorithm using decoherence

    NASA Astrophysics Data System (ADS)

    SaiToh, Akira

    2015-08-01

    We consider the problem of mapping digital data encoded on a quantum register to analog amplitudes in parallel. It is shown to be unlikely that a fully unitary polynomial-time quantum algorithm exists for this problem; if it did, NP would become a subset of BQP. From a practical point of view, we propose a nonunitary linear-time algorithm using quantum decoherence. It tacitly uses an exponentially large physical resource, which is typically a huge number of identical molecules. The quantumness of correlations appearing in the process of the algorithm is also discussed.

  7. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    Maximum power point tracking control is the key to improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable step size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and a simulation model of the buoy-rope-drum WEC are presented in detail, together with simulation experiment results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
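
    A minimal sketch of a variable step size Perturb and Observe loop of the kind described follows; the power classification threshold, the step sizes, and the power-versus-control curve are invented for illustration and are not the authors' model.

```python
def perturb_and_observe(power_of, u0=1.0, steps=60):
    """Variable-step Perturb and Observe MPPT sketch.

    `power_of(u)` returns measured output power for control setting u
    (e.g., generator damping of a buoy-rope-drum WEC). The step size is
    scaled by a crude power classification: large steps far from the
    optimum, small steps near it. All thresholds are illustrative.
    """
    u, p_prev, direction = u0, power_of(u0), +1
    for _ in range(steps):
        step = 0.2 if p_prev < 0.8 * P_RATED else 0.02   # classification
        u += direction * step                            # perturb
        p = power_of(u)                                  # observe
        if p < p_prev:
            direction = -direction   # passed the peak: reverse
        p_prev = p
    return u

# Toy power curve with a maximum at u = 2.5 (P_RATED is its peak value).
P_RATED = 4.0
curve = lambda u: max(0.0, P_RATED - (u - 2.5) ** 2)
print(round(perturb_and_observe(curve), 2))  # converges near 2.5
```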

  8. Hotplate precipitation gauge calibrations and field measurements

    NASA Astrophysics Data System (ADS)

    Zelasko, Nicholas; Wettlaufer, Adam; Borkhuu, Bujidmaa; Burkhart, Matthew; Campbell, Leah S.; Steenburgh, W. James; Snider, Jefferson R.

    2018-01-01

    Since its introduction in 2003, approximately 70 Yankee Environmental Systems (YES) hotplate precipitation gauges have been purchased by researchers and operational meteorologists. A version of the YES hotplate is described in Rasmussen et al. (2011; R11). Presented here is testing of a newer version of the hotplate; this device is equipped with longwave and shortwave radiation sensors. Hotplate surface temperature, coefficients describing natural and forced convective sensible energy transfer, and radiative properties (longwave emissivity and shortwave reflectance) are reported for two of the new-version YES hotplates. These parameters are applied in a new algorithm and used to derive liquid-equivalent accumulations (snowfall and rainfall), and these accumulations are compared to values derived by the internal algorithm used in the YES hotplates (hotplate-derived accumulations). In contrast with R11, the new algorithm accounts for radiative terms in a hotplate's energy budget, applies an energy conversion factor that does not differ from the theoretical energy conversion factor, and applies a surface area that is correct for the YES hotplate. Radiative effects are shown to be relatively unimportant for the precipitation events analyzed. In addition, this work documents a 10% difference between the hotplate-derived and new-algorithm-derived accumulations. This difference seems consistent with R11's application of a hotplate surface area that deviates from the actual surface area of the YES hotplate and with R11's recommendation of an energy conversion factor that differs from the one calculated using thermodynamic theory.
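
    The new algorithm's energy budget can be stated compactly: the latent power spent melting and evaporating precipitation is what remains of the supplied plate power after convective and radiative losses. The sketch below uses invented coefficients and sensor values throughout; it is not the calibrated R11 algorithm or the new-version algorithm.

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def liquid_equivalent_rate(p_plate, t_plate, t_air, wind, sw_down, lw_down,
                           area=1.3e-3, emiss=0.95, sw_refl=0.4,
                           h_forced=15.0):
    """Hotplate liquid-equivalent precipitation rate (mm/h), sketch.

    Supplied plate power minus convective and radiative losses gives
    the latent power spent melting and evaporating hydrometeors. All
    coefficients are illustrative, not calibrated values.
    """
    conv = h_forced * wind * area * (t_plate - t_air)        # forced convection
    rad = area * (emiss * (SIGMA * t_plate**4 - lw_down)     # longwave loss
                  - (1.0 - sw_refl) * sw_down)               # shortwave gain
    latent = max(0.0, p_plate - conv - rad)                  # W left for precip
    L = 2.83e6       # J/kg, lumped melt+evaporation term for snow (assumed)
    kg_per_s = latent / L
    # kg/s over the plate area -> depth rate of liquid water in mm/h.
    return kg_per_s / (1000.0 * area) * 1000.0 * 3600.0

print(round(liquid_equivalent_rate(p_plate=9.0, t_plate=348.0, t_air=270.0,
                                   wind=5.0, sw_down=100.0,
                                   lw_down=250.0), 2))  # ~0.74 mm/h
```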

  9. Evaluation and analysis of Seasat-A scanning multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR final Antenna Pattern Correction (APC) algorithm are discussed. The evaluation consisted of: (1) a direct comparison of the outputs of the final and interim APC algorithms; and (2) an analysis of a possible relationship between observed cross-track gradients in the interim brightness temperatures and the asymmetry in the antenna temperature data. Results indicate a bias between the brightness temperatures produced by the final and interim APC algorithms.

  10. Cough Frequency During Treatment Associated With Baseline Cavitary Volume and Proximity to the Airway in Pulmonary TB.

    PubMed

    Proaño, Alvaro; Bui, David P; López, José W; Vu, Nancy M; Bravard, Marjory A; Lee, Gwenyth O; Tracey, Brian H; Xu, Ziyue; Comina, Germán; Ticona, Eduardo; Mollura, Daniel J; Friedland, Jon S; Moore, David A J; Evans, Carlton A; Caligiuri, Philip; Gilman, Robert H

    2018-06-01

    Cough frequency, and its duration, is a biomarker that can be used in low-resource settings without the need for laboratory culture and has been associated with transmission and treatment response. Radiologic characteristics associated with increased cough frequency may be important in understanding transmission. The relationship between cough frequency and cavitary lung disease has not been studied. We analyzed data from 41 adults who were HIV negative and had culture-confirmed, drug-susceptible pulmonary TB throughout treatment. Cough recordings were based on the Cayetano Cough Monitor, and sputum samples were evaluated using microscopic observation drug susceptibility broth culture; among culture-positive samples, bacillary burden was assessed by means of time to positivity. CT scans were analyzed by a US-board-certified radiologist and a computer-automated algorithm. The algorithm evaluated cavity volume and cavitary proximity to the airway. CT scans were obtained within 1 month of treatment initiation. We compared small cavities (≤ 7 mL) and large cavities (> 7 mL) and cavities located closer to (≤ 10 mm) and farther from (> 10 mm) the airway to cough frequency and cough cessation until treatment day 60. Cough frequency during treatment was twofold higher in participants with large cavity volumes (rate ratio [RR], 1.98; P = .01) and cavities located closer to the airway (RR, 2.44; P = .001). Comparably, cough ceased three times faster in participants with smaller cavities (adjusted hazard ratio [HR], 2.89; P = .06) and those farther from the airway (adjusted HR, 3.61; P = .02). Similar results were found for bacillary burden and culture conversion during treatment. Cough frequency during treatment is greater and lasts longer in patients with larger cavities, especially those closer to the airway. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  11. An extraction algorithm of pulmonary fissures from multislice CT image

    NASA Astrophysics Data System (ADS)

    Tachibana, Hiroyuki; Saita, Shinsuke; Yasutomo, Motokatsu; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Sasagawa, Michizo; Eguchi, Kenji; Moriyama, Noriyuki

    2005-04-01

    Aging and smoking history increase the incidence of pulmonary emphysema. Restoring alveoli destroyed by pulmonary emphysema is difficult, so early detection is important. Multi-slice CT technology has been improving 3-D image analysis, offering higher body-axis resolution and shorter scan times, and low-dose, high-accuracy scanning is becoming available. Multi-slice CT images help physicians measure accurately, but the huge volume of image data is costly and time consuming to analyze. This paper addresses computer-aided analysis of emphysema regions and demonstrates the effectiveness of the proposed algorithm.

  12. The Seasat scanning multichannel microwave radiometer /SMMR/: Antenna pattern corrections - Development and implementation

    NASA Technical Reports Server (NTRS)

    Njoku, E. G.; Christensen, E. J.; Cofield, R. E.

    1980-01-01

    The antenna temperatures measured by the Seasat scanning multichannel microwave radiometer (SMMR) differ from the true brightness temperatures of the observed scene due to antenna pattern effects, principally from antenna sidelobe contributions and cross-polarization coupling. To provide accurate brightness temperatures convenient for geophysical parameter retrievals the antenna temperatures are processed through a series of stages, collectively known as the antenna pattern correction (APC) algorithm. A description of the development and implementation of the APC algorithm is given, along with an error analysis of the resulting brightness temperatures.

  13. The experience of patients participating in a small randomised control trial that explored two different interventions to reduce anxiety prior to an MRI scan.

    PubMed

    Tugwell-Allsup, J; Pritchard, A W

    2018-05-01

    This paper reports qualitative findings from within a larger randomised control trial in which a video clip or a telephone conversation with a radiographer was compared to a routine appointment letter and information sheet to help alleviate anxiety prior to an MRI scan. Questionnaires consisting of three free-text response questions were administered to all of the 74 patients recruited to the MRI anxiety clinical trial. The questionnaire was designed to establish patients' experiences of the intervention they had received. These questionnaires were administered post-scan. Two participants from each trial arm were also interviewed. A thematic approach was utilised to identify recurrent categories emerging from the qualitative data, which are supported by direct quotations. Participants in the interventional groups commented positively on the provision of pre-MRI scan information they received, which contrasted with the relatively indifferent responses observed among those who received the standard information letter. Many important themes were identified, including patients' need for clear and simplified information, the experience of anticipation when waiting for the scan, and the informally acquired information about having an MRI scan, i.e. the shared experiences of friends and family. All themes highlighted the need for an inclusive and individually tailored approach to pre-scan information provision. Qualitative data collected throughout the trial support the statistical findings, which assert that the use of a short video clip or a short conversation between a radiographer and the patient before the scan reduces pre-scan anxiety. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  14. National Aerospace Fuels Research Complex

    DTIC Science & Technology

    2010-03-01

    Excerpt from the report's list of figures (page numbers removed): representative chromatogram of low-conversion stressed S-8 liquid product from supercritical pyrolysis on ECAT; representative chromatogram of very high conversion stressed S-8 liquid product from supercritical pyrolysis at UTRC; representative chromatogram of stressed S-8 liquid product from supercritical pyrolysis at Louisiana State University; GC-MS scanning total ion chromatograms of fuels.

  15. Loose, Falling Characters and Sentences: The Persistence of the OCR Problem in Digital Repository E-Books

    ERIC Educational Resources Information Center

    Kichuk, Diana

    2015-01-01

    The electronic conversion of scanned image files to readable text using optical character recognition (OCR) software and the subsequent migration of raw OCR text to e-book text file formats are key remediation or media conversion technologies used in digital repository e-book production. Despite real progress, the OCR problem of reliability and…

  16. Random-walk enzymes.

    PubMed

    Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.

  17. Random-walk enzymes

    PubMed Central

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-01-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C → U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics. PMID:26465508

  18. Random-walk enzymes

    NASA Astrophysics Data System (ADS)

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C →U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
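
    A toy simulation conveys the intrusive-versus-passive distinction: a 1-D walker scans a substrate and converts targets it visits, and in the intrusive variant each catalyzed conversion also perturbs the walk (here, by ejecting the walker to a random position). The step rule, catalysis probability, and ejection rule below are illustrative caricatures, not the authors' diagrammatic or path-integral models.

```python
import random

def deamination_profile(targets, n_steps=200_000, p_cat=0.01,
                        intrusive=True, seed=1):
    """Toy random-walk enzyme on a circular 1-D substrate.

    `targets` marks catalyzable sites (True/False per position). The
    walker steps +/-1; on a target it converts with probability p_cat.
    In the intrusive variant a conversion ejects the walker to a random
    position, coupling catalysis back into the search trajectory.
    """
    rng = random.Random(seed)
    n = len(targets)
    counts = [0] * n
    pos = rng.randrange(n)
    for _ in range(n_steps):
        pos = (pos + rng.choice((-1, 1))) % n
        if targets[pos] and rng.random() < p_cat:
            counts[pos] += 1
            if intrusive:
                pos = rng.randrange(n)   # conversion restarts the search
    return counts

sites = [i % 25 == 0 for i in range(200)]   # 8 evenly spaced targets
print(sum(deamination_profile(sites, intrusive=True)),
      sum(deamination_profile(sites, intrusive=False)))
```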

  19. Segmentation of large periapical lesions toward dental computer-aided diagnosis in cone-beam CT scans

    NASA Astrophysics Data System (ADS)

    Rysavy, Steven; Flores, Arturo; Enciso, Reyes; Okada, Kazunori

    2008-03-01

    This paper presents an experimental study assessing the applicability of general-purpose 3D segmentation algorithms for analyzing dental periapical lesions in cone-beam computed tomography (CBCT) scans. In the field of Endodontics, clinical studies have been unable to determine whether a periapical granuloma can heal with non-surgical methods. Addressing this issue, Simon et al. recently proposed a diagnostic technique which non-invasively classifies target lesions using CBCT. Manual segmentation, exploited in their study, however, is too time consuming and unreliable for real-world adoption. On the other hand, many technically advanced algorithms have been proposed to address segmentation problems in various biomedical and non-biomedical contexts, but they have not yet been applied to the field of dentistry. Presented in this paper is a novel application of such segmentation algorithms to this clinically significant dental problem. This study evaluates three state-of-the-art graph-based algorithms: a normalized cut algorithm based on a generalized eigenvalue problem, a graph cut algorithm implementing energy minimization techniques, and a random walks algorithm derived from discrete electrical potential theory. We extend the original 2D formulations of the above algorithms to segment 3D images directly and apply the resulting algorithms to dental CBCT images. We experimentally evaluate the quality of the segmentation results for 3D CBCT images, as well as their 2D cross sections. The benefits and pitfalls of each algorithm are highlighted.
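
    Of the three graph-based methods, the random walks algorithm has a widely used open implementation; a minimal 3-D sketch with scikit-image is shown below, where the synthetic volume and the seed placement are illustrative.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic CBCT-like volume: a dark spherical "lesion" in brighter bone.
z, y, x = np.mgrid[0:40, 0:40, 0:40]
volume = np.where((z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 < 8 ** 2,
                  0.2, 0.8)
volume += 0.05 * np.random.default_rng(0).standard_normal(volume.shape)

# Seeds: 1 = lesion (center), 2 = background (corner); 0 = to be decided.
seeds = np.zeros_like(volume, dtype=np.uint8)
seeds[20, 20, 20] = 1
seeds[2, 2, 2] = 2

# Solve the discrete electrical-potential problem for the unlabeled voxels.
labels = random_walker(volume, seeds, beta=130, mode="bf")
print((labels == 1).sum())  # approximate lesion voxel count
```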

  20. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    PubMed

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values needed to evaluate an association rule can then be computed without further database scans, acquiring data solely by means of the attribute indices. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents an algorithm for attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires user-specified minimum support and minimum confidence, instead using a simple attribute index. It uses a well-designed real encoding to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption.

  1. Attribute Index and Uniform Design Based Multiobjective Association Rule Mining with Evolutionary Algorithm

    PubMed Central

    Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values needed to evaluate an association rule can then be computed without further database scans, acquiring data solely by means of the attribute indices. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents an algorithm for attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It no longer requires user-specified minimum support and minimum confidence, instead using a simple attribute index. It uses a well-designed real encoding to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption. PMID:23766683
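
    The attribute-index idea, one database scan to build an inverted index after which support and confidence come from set intersections, can be sketched in a few lines; the transaction layout below is an assumption for illustration.

```python
from collections import defaultdict

def build_attribute_index(transactions):
    """One pass over the database: map each attribute (item) to the set
    of transaction ids containing it. All later rule evaluations use
    only these sets, never the database itself."""
    index = defaultdict(set)
    for tid, items in enumerate(transactions):
        for item in items:
            index[item].add(tid)
    return index

def rule_metrics(index, antecedent, consequent, n_transactions):
    """Support and confidence of `antecedent -> consequent` computed
    from intersections of the indexed tid-sets (no database rescans)."""
    cover = lambda items: set.intersection(*(index[i] for i in items))
    both = cover(antecedent) & cover(consequent)
    support = len(both) / n_transactions
    confidence = len(both) / max(1, len(cover(antecedent)))
    return support, confidence

db = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}]
idx = build_attribute_index(db)
print(rule_metrics(idx, {"a"}, {"c"}, len(db)))  # (0.5, 0.666...)
```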

  2. 3D Forest: An application for descriptions of three-dimensional forest structures using terrestrial LiDAR

    PubMed Central

    Krůček, Martin; Vrška, Tomáš; Král, Kamil

    2017-01-01

    Terrestrial laser scanning is a powerful technology for capturing the three-dimensional structure of forests with a high level of detail and accuracy. Over the last decade, many algorithms have been developed to extract various tree parameters from terrestrial laser scanning data. Here we present 3D Forest, an open-source, platform-independent software application with an easy-to-use graphical user interface that compiles algorithms focused on the forest environment and the extraction of tree parameters. The current version (0.42) extracts important forest structure parameters from terrestrial laser scanning data, such as stem positions (X, Y, Z), tree heights, and diameters at breast height (DBH), as well as more advanced parameters such as tree planar projections, stem profiles, and detailed crown parameters including convex and concave crown surface and volume. Moreover, 3D Forest provides quantitative measures of between-crown interactions and their actual arrangement in 3D space. 3D Forest also includes an original algorithm for automatic tree segmentation and crown segmentation. Comparison with field measurements showed no significant difference in measuring DBH or tree height using 3D Forest, although for DBH only the Randomized Hough Transform algorithm proved sufficiently resistant to noise and provided results comparable to traditional field measurements. PMID:28472167
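
    The DBH step hinges on fitting a circle to the stem points in a breast-height slice. A compact randomized-Hough-style sketch (sample three points, take their circumcircle, vote, keep the consensus circle) follows; the tolerance and trial count are invented, and this is not the 3D Forest implementation.

```python
import random, math

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three 2-D points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None  # collinear sample, no circle
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def dbh_randomized_hough(points, n_trials=500, tol=0.01, seed=0):
    """Randomized Hough circle fit: the candidate circle with the most
    inliers (points within `tol` meters of the circle) wins; DBH is its
    diameter. Trial count and tolerance are illustrative."""
    rng = random.Random(seed)
    best, best_votes = None, -1
    for _ in range(n_trials):
        cand = circumcircle(*rng.sample(points, 3))
        if cand is None:
            continue
        (cx, cy), r = cand
        votes = sum(abs(math.hypot(px - cx, py - cy) - r) < tol
                    for px, py in points)
        if votes > best_votes:
            best, best_votes = cand, votes
    return 2 * best[1]  # diameter = DBH estimate

# Noisy breast-height slice of a 0.3 m diameter stem.
rng = random.Random(1)
pts = [(0.15 * math.cos(a) + rng.gauss(0, 0.003),
        0.15 * math.sin(a) + rng.gauss(0, 0.003))
       for a in [rng.uniform(0, 2 * math.pi) for _ in range(200)]]
print(round(dbh_randomized_hough(pts), 3))  # ~0.30 m
```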

  3. Scout-view Assisted Interior Micro-CT

    PubMed Central

    Sen Sharma, Kriti; Holzner, Christian; Vasilescu, Dragoş M.; Jin, Xin; Narayanan, Shree; Agah, Masoud; Hoffman, Eric A.; Yu, Hengyong; Wang, Ge

    2013-01-01

    Micro computed tomography (micro-CT) is a widely-used imaging technique. A challenge of micro-CT is to quantitatively reconstruct a sample larger than the field-of-view (FOV) of the detector. This scenario is characterized by truncated projections and associated image artifacts. However, for such truncated scans, a low resolution scout scan with an increased FOV is frequently acquired so as to position the sample properly. This study shows that the otherwise discarded scout scans can provide sufficient additional information to uniquely and stably reconstruct the interior region of interest. Two interior reconstruction methods are designed to utilize the multi-resolution data without a significant computational overhead. While most previous studies used numerically truncated global projections as interior data, this study uses truly hybrid scans where global and interior scans were carried out at different resolutions. Additionally, owing to the lack of standard interior micro-CT phantoms, we designed and fabricated novel interior micro-CT phantoms for this study to provide means of validation for our algorithms. Finally, two characteristic samples from separate studies were scanned to show the effect of our reconstructions. The presented methods show significant improvements over existing reconstruction algorithms. PMID:23732478

  4. High-speed scanning: an improved algorithm

    NASA Astrophysics Data System (ADS)

    Nachimuthu, A.; Hoang, Khoi

    1995-10-01

    In using machine vision to assess an object's surface quality, many images must be processed in order to separate the good areas from the defective ones. Examples can be found in the leather hide grading process, in the inspection of garments/canvas on the production line, and in the nesting of irregular shapes into a given surface. The most common method, subtracting the sum of defective areas from the total area, does not give an acceptable indication of how much of the 'good' area can actually be used, particularly if the findings are to be used for the nesting of irregular shapes. This paper presents an image scanning technique which enables the estimation of useable areas within an inspected surface in terms of the user's definition, not the supplier's claims; that is, how much area the user can actually use, not the total good area as estimated by the supplier. An important application of the developed technique is in the leather industry, where the tanner (the supplier) and the footwear manufacturer (the user) are constantly locked in argument over disputed quality standards of finished leather hide, which disrupts production schedules and wastes costs in re-grading and re-sorting. The basic algorithm developed for area scanning of a digital image is presented. The implementation of an improved scanning algorithm is discussed in detail. The improved features include Boolean OR operations and many other innovative functions which aim at optimizing the scanning process in terms of computing time and the accurate estimation of useable areas.

  5. Inverse scattering and refraction corrected reflection for breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John

    2010-03-01

    Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high resolution, 3D attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360 degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33° C water bath, in 8 minutes on average. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for diagnosis of breast disease.

  6. MutScan: fast detection and visualization of target mutations by scanning FASTQ data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Wen, Tiexiang; Li, Hong; Xu, Mingyan; Gu, Jia

    2018-01-22

    Some types of clinical genetic tests, such as cancer testing using circulating tumor DNA (ctDNA), require sensitive detection of known target mutations. However, conventional next-generation sequencing (NGS) data analysis pipelines typically involve different steps of filtering, which may cause miss-detection of key mutations with low frequencies. Variant validation is also indicated for key mutations detected by bioinformatics pipelines. Typically, this process can be executed using alignment visualization tools such as IGV or GenomeBrowse. However, these tools are too heavy and therefore unsuitable for validating mutations in ultra-deep sequencing data. We developed MutScan to address problems of sensitive detection and efficient validation for target mutations. MutScan involves highly optimized string-searching algorithms, which can scan input FASTQ files to grab all reads that support target mutations. The collected supporting reads for each target mutation will be piled up and visualized using web technologies such as HTML and JavaScript. Algorithms such as rolling hash and bloom filter are applied to accelerate scanning and make MutScan applicable to detect or visualize target mutations in a very fast way. MutScan is a tool for the detection and visualization of target mutations by only scanning FASTQ raw data directly. Compared to conventional pipelines, this offers a very high performance, executing about 20 times faster, and offering maximal sensitivity since it can grab mutations with even one single supporting read. MutScan visualizes detected mutations by generating interactive pile-ups using web technologies. These can serve to validate target mutations, thus avoiding false positives. Furthermore, MutScan can visualize all mutation records in a VCF file to HTML pages for cloud-friendly VCF validation. MutScan is an open source tool available at GitHub: https://github.com/OpenGene/MutScan.
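
    The speed claim rests on rolling-hash scanning: each read is hashed over a sliding window so that a candidate match to a mutation's flanking sequence costs O(1) per shift, as in the Rabin-Karp sketch below. The hash base and modulus, and the framing of a target mutation as a k-mer, are illustrative assumptions, not MutScan's internals.

```python
def rolling_hash_scan(read, pattern, base=4, mod=(1 << 61) - 1):
    """Rabin-Karp scan: return positions where `pattern` occurs in
    `read`, updating a sliding-window hash in O(1) per shift."""
    enc = {"A": 0, "C": 1, "G": 2, "T": 3}
    k = len(pattern)
    if len(read) < k:
        return []
    target = window = 0
    top = pow(base, k - 1, mod)  # weight of the outgoing character
    for i in range(k):
        target = (target * base + enc[pattern[i]]) % mod
        window = (window * base + enc[read[i]]) % mod
    hits = [0] if window == target and read[:k] == pattern else []
    for i in range(k, len(read)):
        window = ((window - enc[read[i - k]] * top) * base
                  + enc[read[i]]) % mod
        if window == target and read[i - k + 1:i + 1] == pattern:
            hits.append(i - k + 1)  # verify to rule out hash collisions
    return hits

# A read supporting a hypothetical target mutation's flanking k-mer.
mutation_kmer = "ACGTTAGC"
read = "TTTACGTTAGCGGA"
print(rolling_hash_scan(read, mutation_kmer))  # [3]
```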

  7. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

    The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.

  8. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  9. Multiple Convective Cell Identification and Tracking Algorithm for documenting time-height evolution of measured polarimetric radar and lightning properties

    NASA Astrophysics Data System (ADS)

    Rosenfeld, D.; Hu, J.; Zhang, P.; Snyder, J.; Orville, R. E.; Ryzhkov, A.; Zrnic, D.; Williams, E.; Zhang, R.

    2017-12-01

    A methodology to track the evolution of the hydrometeors and electrification of convective cells is presented and applied to various convective clouds, from warm showers to supercells. The input radar data are obtained from the polarimetric NEXRAD weather radars. The information on cloud electrification is obtained from Lightning Mapping Arrays (LMA). Documenting the development time and height of the hydrometeors and of electrification requires tracking the evolution and lifecycle of convective cells. A new methodology for Multi-Cell Identification and Tracking (MCIT) is presented in this study. This new algorithm is applied to time series of radar volume scans. A cell is defined as a local maximum in the Vertically Integrated Liquid (VIL), and the echo area is divided between cells using a watershed algorithm. The tracking of cells between radar volume scans is done by identifying the two cells in consecutive radar scans that have the maximum common VIL. The vertical profiles of the polarimetric radar properties are used to construct the time-height cross section of the cell properties around the peak reflectivity as a function of height. The LMA sources that occur within the cell area are likewise integrated as a function of height for each time step, as determined by the radar volume scans. The result of the tracking can provide insights into the evolution of storms, hydrometeor types, precipitation initiation, and cloud electrification under different thermodynamic, aerosol, and geographic conditions. The details of the MCIT algorithm, its products, and their performance for different types of storms are described in this poster.
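
    The tracking rule is simple to state: segment each VIL field into cells around its local maxima, then link each cell to the cell in the next scan with which it shares the most VIL. Below is a simplified sketch using scikit-image's watershed; the thresholds and toy fields are assumptions, and the full MCIT algorithm is more involved.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_cells(vil, min_vil=5.0):
    """Divide a VIL field into cells: one cell per local maximum,
    boundaries from a watershed on the inverted field."""
    mask = vil > min_vil
    peaks = peak_local_max(vil, min_distance=3, labels=mask.astype(int))
    markers = np.zeros(vil.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-vil, markers, mask=mask)

def link_cells(cells_prev, cells_next, vil_next):
    """Match each previous-scan cell to the next-scan cell with which
    it shares the maximum common VIL."""
    links = {}
    ids_next = np.unique(cells_next)[1:]            # skip background 0
    for c in np.unique(cells_prev)[1:]:
        overlap = np.where(cells_prev == c, vil_next, 0.0)
        shared = ndimage.sum(overlap, labels=cells_next, index=ids_next)
        if shared.size and shared.max() > 0:
            links[int(c)] = int(ids_next[np.argmax(shared)])
    return links

# Toy example: one Gaussian cell drifting 3 pixels between scans.
yy, xx = np.mgrid[0:50, 0:50]
blob = lambda cy, cx: 20 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20)
scan1, scan2 = blob(20, 20), blob(20, 23)
print(link_cells(segment_cells(scan1), segment_cells(scan2), scan2))  # {1: 1}
```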

  10. Periprosthetic joint infections: a clinical practice algorithm.

    PubMed

    Volpe, Luigi; Indelli, Pier Francesco; Latella, Leonardo; Poli, Paolo; Yakupoglu, Jale; Marcucci, Massimiliano

    2014-01-01

    Periprosthetic joint infection (PJI) accounts for 25% of failed total knee arthroplasties (TKAs) and 15% of failed total hip arthroplasties (THAs). The purpose of the present study was to design a multidisciplinary diagnostic algorithm to detect a PJI as the cause of a painful TKA or THA. From April 2010 to October 2012, 111 patients with suspected PJI were evaluated. The study group comprised 75 females and 36 males with an average age of 71 years (range, 48 to 94 years). Eighty-four patients had a painful THA, while 27 reported a painful TKA. The stepwise diagnostic algorithm, applied in all the patients, included: measurement of serum C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) levels; imaging studies, including standard radiological examination and standard technetium-99m-methylene diphosphonate (MDP) bone scan (if positive, confirmation by LeukoScan was obtained); and joint aspiration with analysis of synovial fluid. Following application of the stepwise diagnostic algorithm, 24 of the 111 screened patients were classified as having a suspected PJI (21.7%). CRP and ESR levels were negative in 84 and positive in 17 cases; 93.7% of the patients had a positive technetium-labeled bone scan, and 23% a positive LeukoScan. Preoperative synovial fluid analysis was positive in 13.5%; analysis of synovial fluid obtained by preoperative aspiration showed a leucocyte count of >3000 cells/μl in 52% of the patients. The present study showed that the diagnosis of PJI requires the application of a multimodal diagnostic protocol in order to avoid complications related to surgical revision of a misdiagnosed "silent" PJI. Level IV, therapeutic case series.

  11. Breast surface estimation for radar-based breast imaging systems.

    PubMed

    Williams, Trevor C; Sill, Jeff M; Fear, Elise C

    2008-06-01

    Radar-based microwave breast-imaging techniques typically require the antennas to be placed at a certain distance from or on the breast surface. This requires prior knowledge of the breast location, shape, and size. The method proposed in this paper for obtaining this information is based on a modified tissue sensing adaptive radar algorithm. First, a breast surface detection scan is performed. Data from this scan are used to localize the breast by creating an estimate of the breast surface. If required, the antennas may then be placed at specified distances from the breast surface for a second tumor-sensing scan. This paper introduces the breast surface estimation and antenna placement algorithms. Surface estimation and antenna placement results are demonstrated on three-dimensional breast models derived from magnetic resonance images.

  12. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    PubMed

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

  13. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    PubMed Central

    Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2016-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501

  14. Automated and real-time segmentation of suspicious breast masses using convolutional neural network

    PubMed Central

    Gregory, Adriana; Denis, Max; Meixner, Duane D.; Bayat, Mahdi; Whaley, Dana H.; Fatemi, Mostafa; Alizad, Azra

    2018-01-01

    In this work, a computer-aided detection tool was developed to segment breast masses from clinical ultrasound (US) scans. The underlying Multi U-net algorithm is based on convolutional neural networks. Under the Mayo Clinic Institutional Review Board protocol, a prospective study of the automatic segmentation of suspicious breast masses was performed. The cohort consisted of 258 female patients who were clinically identified with suspicious breast masses and underwent clinical US scan and breast biopsy. The computer-aided detection tool effectively segmented the breast masses, achieving a mean Dice coefficient of 0.82, a true positive fraction (TPF) of 0.84, and a false positive fraction (FPF) of 0.01. By avoiding positioning of an initial seed, the algorithm is able to segment images in real time (13–55 ms per image) and can have potential clinical applications. The algorithm is on par with a conventional seeded algorithm, which had a mean Dice coefficient of 0.84, and performs significantly better (P < 0.0001) than the original U-net algorithm. PMID:29768415
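
    For reference, the reported overlap metrics are straightforward to compute from binary masks. The sketch below (NumPy, hypothetical inputs) uses common definitions; conventions for the false positive fraction vary, and the study's exact normalization is not stated in the abstract.

```python
import numpy as np

def dice_tpf_fpf(pred, truth):
    """Overlap metrics for binary masks (assumed non-empty 0/1 arrays)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    dice = 2 * tp / (pred.sum() + truth.sum())
    tpf = tp / truth.sum()        # fraction of the true mass recovered
    fpf = fp / (~truth).sum()     # fraction of background falsely marked
    return dice, tpf, fpf
```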

  15. Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Peng, Wei; Liu, Song

    2017-10-01

    Neighbor discovery is an important step for Wireless Local Area Networks (WLAN), and the use of multi-beam directional antennas can greatly improve network performance. However, most neighbor discovery algorithms in WLAN based on multi-beam directional antennas can only work effectively in synchronous systems, not in asynchronous systems, and collisions at the AP remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: the asynchronous hierarchical scanning (AHS) and the asynchronous directional scanning (ADS) algorithm. Both are based on a three-way handshaking mechanism. AHS and ADS reduce collisions at the AP, and thus achieve good performance, in a hierarchical and a directional manner, respectively. Finally, the performance of AHS and ADS is evaluated in OMNeT++, and the different application scenarios and the factors affecting the performance of these algorithms are analyzed. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable when most of the neighboring nodes are far from the AP.

  16. Deactivation of Zeolite Catalyst H-ZSM-5 during Conversion of Methanol to Gasoline: Operando Time- and Space-Resolved X-ray Diffraction.

    PubMed

    Rojo-Gama, Daniel; Mentel, Lukasz; Kalantzopoulos, Georgios N; Pappas, Dimitrios K; Dovgaliuk, Iurii; Olsbye, Unni; Lillerud, Karl Petter; Beato, Pablo; Lundegaard, Lars F; Wragg, David S; Svelle, Stian

    2018-03-15

    The deactivation of zeolite catalyst H-ZSM-5 by coking during the conversion of methanol to hydrocarbons was monitored by high-energy space- and time-resolved operando X-ray diffraction (XRD). Space resolution was achieved by continuous scanning along the axial length of a capillary fixed bed reactor with a time resolution of 10 s per scan. Using real structural parameters obtained from XRD, we can track the development of coke at different points in the reactor and link this to a kinetic model to correlate catalyst deactivation with structural changes occurring in the material. The "burning cigar" model of catalyst bed deactivation is directly observed in real time.

  17. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater or equal twice the near field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation.
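
    As a concrete reference point, the Richardson-Lucy iteration the authors favor can be written in a few lines. The sketch below is the plain, unregularized update in NumPy/SciPy on a toy 2D image; the paper's best results additionally used total variation regularization, which is omitted here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Minimal, unregularized Richardson-Lucy deconvolution (2D arrays)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)   # guard against zeros
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Toy object blurred by a separable window standing in for the PSF.
obj = np.zeros((64, 64)); obj[28:36, 28:36] = 1.0
psf = np.outer(np.hanning(9), np.hanning(9))
blurred = fftconvolve(obj, psf / psf.sum(), mode='same')
restored = richardson_lucy(blurred, psf, n_iter=30)
```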

  18. Symposium N: Materials and Devices for Thermal-to-Electric Energy Conversion

    DTIC Science & Technology

    2010-08-24

    X-ray diffraction, transmission electron microscopy, scanning electron microscopy, and dynamic light scattering. Thermal conductivity measurements...SEM), X-ray diffraction (XRD) measurements as well as Raman spectroscopy. The results from these techniques indicate a clear modification...was examined by using scanning electron microscope (SEM; HITACHI S-4500 model) attached with an energy dispersive X-ray spectroscopy. The electrical

  19. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
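
    The registration evaluation reported above reduces to nearest-neighbor distance queries between point sets. Below is a minimal sketch of the closest-point residual computation (the quantity behind the approximately 1 mm figures) using a k-d tree; the point sets are synthetic stand-ins, and the full iterative registration is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_closest_point_distance(scan_pts, surface_pts):
    """Mean distance from each intraoperative scan point to its nearest
    point on the segmented surface (the residual of a closest-point
    registration step)."""
    tree = cKDTree(surface_pts)
    d, _ = tree.query(scan_pts)
    return d.mean()

rng = np.random.default_rng(0)
surface = rng.random((5000, 3)) * 100.0               # segmented surface (stand-in)
scan = surface[:400] + rng.normal(0, 0.5, (400, 3))   # noisy laser-range points
print(mean_closest_point_distance(scan, surface))
```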

  20. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    PubMed

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Comparing with currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
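
    The reconstruction model has the classic composite form (a smooth data-fidelity term plus a nonsmooth regularizer) that forward-backward splitting is designed for. The toy sketch below shows the splitting structure on a small dense problem, using an l1 proximal step (soft-thresholding) as a stand-in for the total variation proximal step used in the paper; A, b, and the step size are illustrative only.

```python
import numpy as np

def fbs(A, b, lam, t, n_iter=200):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1.

    Forward step: gradient descent on the smooth fidelity term.
    Backward step: proximal operator of the nonsmooth term (here
    soft-thresholding, standing in for the TV prox of the paper).
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                     # forward (gradient) step
        z = x - t * grad
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # backward step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
b = rng.standard_normal(40)
t = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 1/L, L = ||A||_2^2
x = fbs(A, b, lam=0.1, t=t)
```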

  1. Feature Tracking for High Speed AFM Imaging of Biopolymers.

    PubMed

    Hartman, Brett; Andersson, Sean B

    2018-03-31

    The scanning speed of atomic force microscopes continues to advance with some current commercial microscopes achieving on the order of one frame per second and at least one reaching 10 frames per second. Despite the success of these instruments, even higher frame rates are needed with scan ranges larger than are currently achievable. Moreover, there is a significant installed base of slower instruments that would benefit from algorithmic approaches to increasing their frame rate without requiring significant hardware modifications. In this paper, we present an experimental demonstration of high speed scanning on an existing, non-high speed instrument, through the use of a feedback-based, feature-tracking algorithm that reduces imaging time by focusing on features of interest to reduce the total imaging area. Experiments on both circular and square gratings, as well as silicon steps and DNA strands show a reduction in imaging time by a factor of 3-12 over raster scanning, depending on the parameters chosen.

  2. Nearest Neighbor Averaging and its Effect on the Critical Level and Minimum Detectable Concentration for Scanning Radiological Survey Instruments that Perform Facility Release Surveys.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L

    2014-08-01

    Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (L_c) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the L_c and MDC when a nearest-neighbor averaging (NNA) technique is used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the L_c and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
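
    Nearest-neighbor averaging over a gridded scan map can be illustrated compactly. The sketch below averages each cell with its 8-neighborhood via a normalized convolution; ERG's exact neighborhood definition and weighting are not given in the abstract, so this is only an assumed form.

```python
import numpy as np
from scipy.signal import convolve2d

def nearest_neighbor_average(count_map):
    """Replace each grid cell by the mean of itself and its 8 neighbors;
    edge cells average only over the neighbors that exist."""
    kernel = np.ones((3, 3))
    sums = convolve2d(count_map, kernel, mode='same', boundary='fill')
    norm = convolve2d(np.ones(count_map.shape), kernel, mode='same')
    return sums / norm
```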

  3. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.

    PubMed

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun

    2014-07-01

    4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly undersampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.

  4. Visualisation of urban airborne laser scanning data with occlusion images

    NASA Astrophysics Data System (ADS)

    Hinks, Tommy; Carr, Hamish; Gharibi, Hamid; Laefer, Debra F.

    2015-06-01

    Airborne Laser Scanning (ALS) was introduced to provide rapid, high resolution scans of landforms for computational processing. More recently, ALS has been adapted for scanning urban areas. The greater complexity of urban scenes necessitates the development of novel methods to exploit urban ALS to best advantage. This paper presents occlusion images: a novel technique that exploits the geometric complexity of the urban environment to improve visualisation of small details for better feature recognition. The algorithm is based on an inversion of traditional occlusion techniques.

  5. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking capabilities of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699

  6. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion by drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking capabilities of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information.

  7. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    PubMed

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
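
    The first (prealignment) step can be sketched directly: a circular cross-correlation between two profiles computed via FFTs, returning the offset that maximizes their overlap. Equal-length, uniformly sampled profiles are assumed, and the sign convention depends on the correlation definition chosen.

```python
import numpy as np

def temporal_offset(reference, sample):
    """Signed shift (in scans) maximizing the circular cross-correlation
    between two equal-length chromatographic profiles, via FFT."""
    n = len(reference)
    corr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(sample))).real
    shift = int(np.argmax(corr))
    return shift if shift <= n // 2 else shift - n
```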

  8. On the Impact of Widening Vector Registers on Sequence Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daily, Jeffrey A.; Kalyanaraman, Anantharaman; Krishnamoorthy, Sriram

    2016-09-22

    Vector extensions, such as SSE, have been part of the x86 since the 1990s, with applications in graphics, signal processing, and scientific applications. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, there are still many that are difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. In this paper, we demonstrate that the trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based onmore » striped data layouts. We present a practically efficient SIMD implementation of a parallel scan based sequence alignment algorithm that can better exploit wider SIMD units. We conduct comprehensive workload and use case analyses to characterize the relative behavior of the striped and scan approaches and identify the best choice of algorithm based on input length and SIMD width.« less

  9. Revised motion estimation algorithm for PROPELLER MRI.

    PubMed

    Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R

    2014-08-01

    To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.

  10. Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh

    2017-10-01

    Hyperspectral imaging (HSI), also called imaging spectrometry, originated from remote sensing. Hyperspectral imaging is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. Spatially resolved spectral imaging obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually acquiring reflectance images or scans of the objects, which were cabbage and tomato. The data were then converted to the required format and analyzed using machine learning algorithms. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained by the scan, as shown by the distinct spectra they recovered. It was concluded from these results that the system was working as expected.

  11. Transcript mapping for handwritten English documents

    NASA Astrophysics Data System (ADS)

    Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu

    2008-01-01

    Transcript mapping, or text alignment with handwritten documents, is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications in fields ranging from machine learning, where large quantities of truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms. Interfaces which display scanned handwritten documents may use this alignment to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the True DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration of the dissimilarity scores from a word-model word recognizer and the Levenshtein distance between the recognized word and the lexicon word, as a cost metric in the DTW algorithm, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
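
    The core of the approach, a dynamic-programming alignment over a cost matrix whose entries mix recognizer dissimilarity with edit distance, can be sketched as follows. With no recognizer at hand, the cost here is Levenshtein distance alone; in the paper the two scores are combined.

```python
import numpy as np

def levenshtein(a, b):
    """Iterative edit distance between two strings."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            # prev holds d[i-1][j-1]; d[j] still holds d[i-1][j] here.
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

def dtw_cost(cost):
    """Cost of the optimal monotone alignment path through a
    (transcript word) x (word image) cost matrix, by dynamic programming."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

transcript = ["the", "quick", "fox"]
recognized = ["the", "quick", "brown", "fox"]   # hypothetical recognizer output
cost = np.array([[levenshtein(t, r) for r in recognized] for t in transcript],
                dtype=float)
print(dtw_cost(cost))
```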

  12. Combined neural network/Phillips-Tikhonov approach to aerosol retrievals over land from the NASA Research Scanning Polarimeter

    NASA Astrophysics Data System (ADS)

    Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.

    2017-11-01

    In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
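
    The Phillips-Tikhonov step can be illustrated on a linearized problem: each iteration solves a least-squares system regularized toward an a priori state, which is where the neural-network first guess enters. The sketch below shows one such solve; K, y, x_a, and the regularization operator are placeholders, and the paper's exact functional form may differ.

```python
import numpy as np

def tikhonov_step(K, y, x_a, gamma, L=None):
    """One linearized Phillips-Tikhonov solve:
    minimize ||K x - y||^2 + gamma * ||L (x - x_a)||^2,
    with x_a the a priori state (e.g., a neural-network retrieval)."""
    if L is None:
        L = np.eye(K.shape[1])
    A = K.T @ K + gamma * (L.T @ L)
    b = K.T @ y + gamma * (L.T @ L) @ x_a
    return np.linalg.solve(A, b)
```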

  13. Fuzzy Clustering Applied to ROI Detection in Helical Thoracic CT Scans with a New Proposal and Variants

    PubMed Central

    Castro, Alfonso; Boveda, Carmen; Arcay, Bernardino; Sanjurjo, Pedro

    2016-01-01

    The detection of pulmonary nodules is one of the most studied problems in the field of medical image analysis, due to the great difficulty of detecting such nodules early and to their social impact. The traditional approach involves the development of a multistage CAD system capable of informing the radiologist of the presence or absence of nodules. One stage in such systems is the detection of ROIs (regions of interest) that may be nodules, in order to reduce the space of the problem. This paper evaluates fuzzy clustering algorithms that employ different classification strategies to achieve this goal. After characterising these algorithms, the authors propose a new algorithm and different variants to improve the results obtained initially. Finally, it is shown that the most recent developments in fuzzy clustering are able to detect regions that may be nodules in CT studies. The algorithms were evaluated using helical thoracic CT scans obtained from the database of the LIDC (Lung Image Database Consortium). PMID:27517049
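
    As a baseline for the clustering strategies discussed, plain fuzzy c-means alternates membership and centroid updates until convergence. A compact NumPy sketch of standard FCM (not the authors' new variant), with illustrative data:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on X (n_samples, n_features).
    Returns memberships U (n_samples, c) and centroids V (c, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]            # centroid update
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                    # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
U, V = fuzzy_c_means(X, c=2)
```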

  14. Building a medical image processing algorithm verification database

    NASA Astrophysics Data System (ADS)

    Brown, C. Wayne

    2000-06-01

    The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans due to equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is to prove the viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.

  15. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    PubMed

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification scheme for pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values while the backpropagation network, with its learning ability, will deliver good classification efficiency.

  16. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangent functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm offers high solution optimality and fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, C; Seduk, J; Yang, T

    Purpose: A prototype active scanning beam delivery system was designed, manufactured, and installed as part of the Korea Heavy Ion Medical Accelerator Project. The prototype system includes most of the components for steering, modulating, and detecting the beam incident on the patient. The system was installed in the MC-50 cyclotron beam line and tested to extract normal operating conditions. Methods: The commissioning process was completed using a 45 MeV proton beam. To measure the beam position accuracy as a function of scanning-magnet power supply current, 25 different spots were scanned and measured. The scanning results on GaF film were compared with the irradiation plan. Also, the beam size variation and the intensity reduction using a range shifter were measured and analyzed. The results will be used to create conversion factors for the asymmetric behavior of the scanning magnets and a dose compensation factor for the longitudinal direction. Results: The results show asymmetric operation of both the scanning x and y magnets. For the scanning magnet x, the current-to-position conversion factors were measured as 1.69 mm/A in the positive direction and 1.74 mm/A in the negative direction. The scanning magnet y showed 1.38 mm/A and 1.48 mm/A for the two directions. The size of the incoming beam, which was 18 mm (sigma), grew to 55 mm (sigma) when a 10 mm range shifter plate was used. As the beam size became larger, the maximum intensity of the beam decreased: with a 10 mm range shifter, the maximum intensity was only 52% of that with no range shifter. Conclusion: For proper operation of the prototype active scanning system, the commissioning process was performed to measure variations in beam characteristics. The results will be applied in the irradiation planning software for more precise dose delivery using the active scanning system.

  18. Visible-infrared micro-spectrometer based on a preaggregated silver nanoparticle monolayer film and an infrared sensor card

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Peng, Jing-xiao; Ho, Ho-pui; Song, Chun-yuan; Huang, Xiao-li; Zhu, Yong-yuan; Li, Xing-ao; Huang, Wei

    2018-01-01

    By using a preaggregated silver nanoparticle monolayer film and an infrared sensor card, we demonstrate a miniature spectrometer design that covers a broad wavelength range from visible to infrared with high spectral resolution. The spectral contents of an incident probe beam are reconstructed by solving a matrix equation with a smoothing simulated annealing algorithm. The proposed spectrometer offers significant advantages over current instruments based on Fourier transform and grating dispersion, in terms of size, resolution, spectral range, cost, and reliability. The spectrometer contains three components, used for dispersion, frequency conversion, and detection. Disordered silver nanoparticles in the dispersion component reduce fabrication complexity. An infrared sensor card in the conversion component broadens the operational spectral range of the system into the visible and infrared bands. Since the CCD used in the detection component provides a very large number of intensity measurements, the final spectrum can be reconstructed with high resolution. As an additional feature of our algorithm for solving the matrix equation, which is suitable for reconstructing both broadband and narrowband signals, we have adopted a smoothing step based on simulated annealing. This step improves the accuracy of the spectral reconstruction.
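
    The reconstruction amounts to inverting a calibrated response matrix: each CCD pixel records a different nanoparticle-mediated mixture of the unknown spectrum, giving y = A s. Below is a minimal sketch using non-negative least squares on synthetic data; the matrix sizes are invented, and the paper's smoothing simulated-annealing refinement is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical sizes: 400 CCD intensity measurements, 120 spectral bins.
rng = np.random.default_rng(1)
A = rng.random((400, 120))          # calibrated response matrix (assumed known)
s_true = np.exp(-0.5 * ((np.arange(120) - 60) / 6.0) ** 2)   # narrowband line
y = A @ s_true + 0.01 * rng.standard_normal(400)             # CCD readings

s_hat, _ = nnls(A, y)               # non-negative least-squares reconstruction
```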

  19. Partial scan artifact reduction (PSAR) for the assessment of cardiac perfusion in dynamic phase-correlated CT.

    PubMed

    Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc

    2009-12-01

    Cardiac CT achieves its high temporal resolution by lowering the scan range from 2π to π plus the fan angle (partial scan). This, however, introduces CT-value variations depending on the angular position of the π range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present a new algorithm, partial scan artifact reduction (PSAR), that corrects a dynamic phase-correlated scan without a priori information. In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2π] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan p_n^AF by projectionwise averaging a set of neighboring partial scans p_n^P from the same perfusion examination (typically N ≈ 30 phase-correlated partial scans distributed over 20 s, with n = 1, ..., N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans p_n^V from the artificial full scan p_n^AF. A standard reconstruction yields the corresponding images f_n^P, f_n^AF, and f_n^V. Subtracting the virtual partial scan image f_n^V from the artificial full scan image f_n^AF yields an artifact image that can be used to correct the original partial scan image: f_n^C = f_n^P - f_n^V + f_n^AF, where f_n^C is the corrected image. The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans were performed to provide theoretical reference values. The improvement in the root-mean-square errors between the full and the partial scans, with respect to the errors between the full and the corrected scans, is up to 54% for the simulations and 90% for the measurements. The phase-correlated data now appear accurate enough for a quantitative analysis of cardiac perfusion.

  20. How to Create a Web-Ready PDF

    EPA Pesticide Factsheets

    Making EPA's PDF documents accessible (by Section 508 standards) and user-friendly includes steps such as adding bookmarks, using electronic conversion rather than scanning pages, and adding metadata.

  1. On the use of video projectors for three-dimensional scanning

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.; Robledo-Sanchez, Carlos; Diaz-Gonzalez, Gerardo

    2017-08-01

    Structured light projection is one of the most useful methods for accurate three-dimensional scanning. Video projectors are typically used as the illumination source. However, because video projectors are not designed for structured light systems, some considerations such as gamma calibration must be taken into account. In this work, we present a simple method for gamma calibration of video projectors. First, the experimental fringe patterns are normalized. Then, the samples of the fringe patterns are sorted in ascending order. The sample sorting leads to a simple three-parameter sine curve that is fitted using the Gauss-Newton algorithm. The novelty of this method is that the sorting process removes the effect of the unknown phase. Thus, the resulting gamma calibration algorithm is significantly simplified. The feasibility of the proposed method is illustrated in a three-dimensional scanning experiment.
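
    The fitting step is a small nonlinear least-squares problem: after sorting, the normalized samples follow a three-parameter sinusoid in the normalized sort index. Below is a sketch using SciPy's Levenberg-Marquardt fit (a damped Gauss-Newton, in the spirit of the Gauss-Newton fit named above); the exact model parametrization in the paper may differ, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def sorted_fringe_model(p, a, b, phi):
    """Sorted samples of a sinusoidal fringe follow another sinusoid in the
    normalized sort index p: a three-parameter curve (offset, amplitude, phase)."""
    return a + b * np.cos(np.pi * p + phi)

# Toy fringe with uniformly random phase and an ideal (gamma = 1) response;
# projector gamma distortion would bend the sorted curve, and the fitted
# parameters then characterize that distortion.
rng = np.random.default_rng(2)
samples = 0.5 + 0.4 * np.cos(rng.uniform(0.0, 2.0 * np.pi, 1000))
p = np.linspace(0.0, 1.0, samples.size)
params, _ = curve_fit(sorted_fringe_model, p, np.sort(samples), p0=(0.5, 0.3, 3.0))
print(params)   # roughly (0.5, 0.4, pi) for this ideal case
```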

  2. SU-E-J-36: Comparison of CBCT Image Quality for Manufacturer Default Imaging Modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, G

    Purpose: CBCT is being increasingly used in patient setup for radiotherapy. Often the manufacturer default scan modes are used for performing these CBCT scans with the assumption that they are the best options. To quantitatively assess the image quality of these scan modes, all of the scan modes were tested, as well as options within the reconstruction algorithm. Methods: A CatPhan 504 phantom was scanned on a TrueBeam Linear Accelerator using the manufacturer scan modes (FSRT Head, Head, Image Gently, Pelvis, Pelvis Obese, Spotlight, & Thorax). The Head mode scan was then reconstructed multiple times with all filter options (Smooth, Standard, Sharp, & Ultra Sharp) and all Ring Suppression options (Disabled, Weak, Medium, & Strong). An open source ImageJ tool was created for analyzing the CatPhan 504 images. Results: The MTF curve was primarily dictated by the voxel size and the filter used in the reconstruction algorithm. The filters also impact the image noise. The CNR was worst for the Image Gently mode, followed by FSRT Head and Head, and the sharper the filter, the worse the CNR. HU values varied significantly between scan modes: Pelvis Obese had lower than expected values, while Image Gently had higher than expected values. If a therapist tried to use preset window and level settings, they would not show the desired tissue for some scan modes. Conclusion: Knowing the image quality of the preset scan modes will enable users to better optimize their setup CBCT. Evaluation of scan mode image quality could improve setup efficiency and lead to better treatment outcomes.

  3. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information for 2D to 3D conversion of video is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via a scene classification algorithm. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape type, close-up type, and linear perspective type. For the landscape type, a specific algorithm divides the image into many blocks and assigns depth values using the relative height cue of the image. For the close-up type, a saliency-based method is adopted to enhance the foreground, and its output is combined with a global depth gradient to generate the final depth map. For the linear perspective type, vanishing-line detection yields the vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; the rest of the image is assigned depth values according to the distance of each point from the vanishing point. Finally, depth image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of anaglyph images between 6.8 and 7.8.

  4. Soil moisture and temperature algorithms and validation

    USDA-ARS?s Scientific Manuscript database

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  5. Woofer-tweeter adaptive optics scanning laser ophthalmoscopic imaging based on Lagrange-multiplier damped least-squares algorithm.

    PubMed

    Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A

    2011-07-01

    We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations by a woofer DM and small-amplitude higher-order aberrations by a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both single DM controller, and a successive dual DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of human retina.
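
    The controller's inner solve is a damped least-squares inversion of the wavefront-slope equations. Below is a minimal sketch of that algebra; the damping factor here is simply given, whereas the paper's contribution is choosing it via a Lagrange-multiplier condition, and the influence matrix and slope vector are placeholders.

```python
import numpy as np

def damped_least_squares(A, b, lam):
    """Damped least-squares solve of A x = b:
    x = (A^T A + lam * I)^(-1) A^T b.

    In the AOSLO setting, A would be the combined influence matrix of the
    woofer and tweeter mirrors, b the measured Shack-Hartmann slopes, and
    lam the damping factor (set via a Lagrange-multiplier condition in the
    paper; fixed here for illustration)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```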

  6. Free-breathing 3D Cardiac MRI Using Iterative Image-Based Respiratory Motion Correction

    PubMed Central

    Moghari, Mehdi H.; Roujol, Sébastien; Chan, Raymond H.; Hong, Susie N.; Bello, Natalie; Henningsson, Markus; Ngo, Long H.; Goddu, Beth; Goepfert, Lois; Kissinger, Kraig V.; Manning, Warren J.; Nezafat, Reza

    2012-01-01

    Respiratory motion compensation using diaphragmatic navigator (NAV) gating with a 5 mm gating window is conventionally used for free-breathing cardiac MRI. Due to the narrow gating window, scan efficiency is low, resulting in long scan times, especially for patients with irregular breathing patterns. In this work, a new retrospective motion compensation algorithm is presented that reduces the scan time for free-breathing cardiac MRI by increasing the gating window to 15 mm without compromising image quality. The proposed algorithm iteratively corrects for respiratory-induced cardiac motion by optimizing the sharpness of the heart. To evaluate this technique, two coronary MRI datasets with 1.3 mm3 resolution were acquired from 11 healthy subjects (7 females, 25±9 years); one using a NAV with a 5 mm gating window acquired in 12.0±2.0 minutes and one with a 15 mm gating window acquired in 7.1±1.0 minutes. The images acquired with a 15 mm gating window were corrected using the proposed algorithm and compared to the uncorrected images acquired with the 5 mm and 15 mm gating windows. The image quality score, sharpness, and length of the three major coronary arteries were equivalent between the corrected images and the images acquired with a 5 mm gating window (p-value>0.05), while the scan time was reduced by a factor of 1.7. PMID:23132549

  7. Dual Energy Tomosynthesis breast phantom imaging

    NASA Astrophysics Data System (ADS)

    Koukou, V.; Martini, N.; Fountos, G.; Messaris, G.; Michail, C.; Kandarakis, I.; Nikiforidis, G.

    2017-12-01

    The dual energy (DE) imaging technique has been applied in many theoretical and experimental studies. The aim of the current study is to evaluate dual energy in breast tomosynthesis using a commercial tomosynthesis system, in terms of its potential to better visualize microcalcifications (μCs). The system uses a tungsten target X-ray tube and a selenium direct conversion detector. Low-energy (LE) images were acquired at different tube voltages (28, 30, 32 kV), while high-energy (HE) images were acquired at 49 kV. Fifteen projections each for the low and high energies were acquired without a grid while the tube scanned continuously. A log-subtraction algorithm was used to obtain the DE images, with the weighting factor w derived empirically. The subtraction was applied to each pair of LE and HE slices after reconstruction. The TORMAM phantom was imaged with the different settings. Four regions of interest including μCs were identified in the inhomogeneous part of the phantom. The μCs in DE images were more clearly visible compared to the low-energy images. Initial results showed that DE tomosynthesis imaging is a promising modality; however, more work is required.
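
    The log subtraction itself is a one-liner once the weighting factor is chosen. A sketch with placeholder inputs (w was derived empirically in the study):

```python
import numpy as np

def dual_energy_log_subtract(low, high, w):
    """Weighted log subtraction of a low/high-energy image pair:
    DE = ln(I_HE) - w * ln(I_LE). The weight w is tuned so that
    soft-tissue contrast cancels, leaving calcifications visible."""
    eps = 1e-6   # guard against log of zero in dark pixels
    return np.log(np.maximum(high, eps)) - w * np.log(np.maximum(low, eps))
```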

  8. Imaging of cardiac perfusion of free-breathing small animals using dynamic phase-correlated micro-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawall, Stefan; Kuntz, Jan; Socher, Michaela

    Purpose: Mouse models of cardiac diseases have proven to be a valuable tool in preclinical research. The high cardiac and respiratory rates of free-breathing mice prohibit conventional in vivo cardiac perfusion studies using computed tomography even if gating methods are applied. This makes sacrifice of the animals unavoidable and only allows for the application of ex vivo methods. Methods: To overcome this issue, the authors propose a low dose scan protocol and an associated reconstruction algorithm that allows for in vivo imaging of cardiac perfusion and associated processes that are retrospectively synchronized to the respiratory and cardiac motion of the animal. The scan protocol consists of repetitive injections of contrast media within several consecutive scans while the ECG, respiratory motion, and timestamp of contrast injection are recorded and synchronized to the acquired projections. The iterative reconstruction algorithm employs a six-dimensional edge-preserving filter to provide low-noise, motion artifact-free images of the animal examined using the authors' low dose scan protocol. Results: The reconstructions obtained show that the complete temporal bolus evolution can be visualized and quantified in any desired combination of cardiac and respiratory phase, including reperfusion phases. The proposed reconstruction method thereby keeps the administered radiation dose at a minimum and thus reduces metabolic interference to the animal, allowing for longitudinal studies. Conclusions: The authors' low dose scan protocol and phase-correlated dynamic reconstruction algorithm allow for an easy and effective way to visualize phase-correlated perfusion processes in routine laboratory studies using free-breathing mice.

  9. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy, and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient, no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
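
    Of the two metrics, entropy is the easier to make concrete: motion blur and streaks spread the gray-level histogram, so minimizing image entropy penalizes artifacts. A sketch of the metric on a 2D slice; the binning and the exact MAM definition used in the paper may differ.

```python
import numpy as np

def entropy_mam(image, n_bins=256):
    """Gray-level histogram entropy as a motion-artifact metric.
    Motion estimation would adjust motion-field parameters to
    minimize this value via gradient descent."""
    hist, _ = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))
```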

  10. Application of DIRI dynamic infrared imaging in reconstructive surgery

    NASA Astrophysics Data System (ADS)

    Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier

    2006-04-01

    We have developed the BioScanIR System based on QWIP (Quantum Well Infrared Photodetector) technology. Data collected by this sensor are processed using DIRI (Dynamic Infrared Imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissues and organs associated with blood perfusion due to certain diseases such as cancer, vascular disease, and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include Fast Fourier Transform (FFT), motion artifact correction, spectral analysis, and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents, and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing, and peripheral vascular disease management.

  11. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

    Magneto-acoustic tomography with current injection is an electrical impedance imaging technology. To explore its potential applications in imaging biological tissue and to enhance image quality, a new scan mode for the transducer is proposed, based on translational and circular scanning, to record acoustic signals from sources. An imaging algorithm to analyze these signals is developed for this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of the scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify the simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In simulations, the acoustic-source image correlates with the conductivity at the boundaries of the sample, but the image results change with the distance and angular aspect of the transducer; in general, as angle and distance decrease, the image quality improves. The experimental data confirmed this correlation. The acoustic-source images resulting from the alternative scanning mode yielded the outline of the phantom medium. This scan mode enables improvements in the sensitivity of the detection unit and a move to a transducer array, which would improve the efficiency and accuracy of acoustic-source imaging.

  12. SU-F-BRB-05: Collision Avoidance Mapping Using Consumer 3D Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardan, R; Popple, R

    2015-06-15

    Purpose: To develop a fast and economical method of scanning a patient’s full body contour for use in collision avoidance mapping without the use of ionizing radiation. Methods: Two consumer level 3D cameras used in electronic gaming were placed in a CT simulator room to scan a phantom patient set up in a high collision probability position. A registration pattern and computer vision algorithms were used to transform the scan into the appropriate coordinate systems. The cameras were then used to scan the surface of a gantry in the treatment vault. Each scan was converted into a polygon mesh for collision testing in a general purpose polygon interference algorithm. All clinically relevant transforms were applied to the gantry and patient support to create a map of all possible collisions. The map was then tested for accuracy by physically testing the collisions with the phantom in the vault. Results: The scanning fidelity of both the gantry and patient was sufficient to produce a collision prediction accuracy of 97.1% with 64620 geometry states tested in 11.5 s. The total scanning time including computation, transformation, and generation was 22.3 seconds. Conclusion: Our results demonstrate an economical system to generate collision avoidance maps. Future work includes testing the speed of the framework in real-time collision avoidance scenarios. Research partially supported by a grant from Varian Medical Systems.

  13. Automatic, accurate, and reproducible segmentation of the brain and cerebro-spinal fluid in T1-weighted volume MRI scans and its application to serial cerebral and intracranial volumetry

    NASA Astrophysics Data System (ADS)

    Lemieux, Louis

    2001-07-01

    A new fully automatic algorithm for the segmentation of the brain and cerebro-spinal fluid (CSF) from T1-weighted volume MRI scans of the head was specifically developed in the context of serial intra-cranial volumetry. The method is an extension of a previously published brain extraction algorithm. The brain mask is used as a basis for CSF segmentation based on morphological operations, automatic histogram analysis and thresholding. Brain segmentation is then obtained by iterative tracking of the brain-CSF interface. Grey matter (GM), white matter (WM) and CSF volumes are calculated based on a model of intensity probability distribution that includes partial volume effects. Accuracy was assessed using a digital phantom scan. Reproducibility was assessed by segmenting pairs of scans from 20 normal subjects scanned 8 months apart and 11 patients with epilepsy scanned 3.5 years apart. Segmentation accuracy as measured by overlap was 98% for the brain and 96% for the intra-cranial tissues. The volume errors were: total brain (TBV): -1.0%; intra-cranial (ICV): 0.1%; CSF: +4.8%. For repeated scans, matching resulted in improved reproducibility. In the controls, the coefficient of reliability (CR) was 1.5% for the TBV and 1.0% for the ICV. In the patients, the CR for the ICV was 1.2%.

  14. Automatic segmentation of vessels in in-vivo ultrasound scans

    NASA Astrophysics Data System (ADS)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin; Arendt Jensen, Jørgen

    2017-03-01

    Ultrasound has become highly popular for monitoring atherosclerosis by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and the diameter of the lumen. Automatic segmentation of the vessel lumen can enable determination of the lumen diameter. This paper presents a fully automatic algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps, and performs vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers, "8L2 Linear" and "10L2w Wide Linear" (BK Ultrasound, Herlev, Denmark). The algorithm was evaluated empirically on a dataset of 1770 in-vivo images recorded from 8 healthy subjects. The segmentation results were compared to manual delineation performed by two experienced users. The results showed a sensitivity and specificity of 90.41 ± 11.2% and 97.93 ± 5.7% (mean ± standard deviation), respectively. The overlap between automatic and manual segmentation, measured by the Dice similarity coefficient, was 91.25 ± 11.6%. The empirical results demonstrated the feasibility of segmenting the vessel lumen in ultrasound scans using a fully automatic algorithm.
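
    A minimal Python sketch of a marker-controlled watershed in this spirit, using scikit-image; the marker thresholds, smoothing, and marker sources are illustrative assumptions rather than the paper's preprocessing pipeline.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_lumen(bmode, doppler_power):
            """Marker-controlled watershed on a longitudinal B-mode image.

            bmode, doppler_power: 2D float arrays on a common grid.
            The percentile thresholds below are illustrative, not published values.
            """
            # Gradient magnitude of the smoothed B-mode image as the relief surface.
            elevation = sobel(ndi.gaussian_filter(bmode, sigma=2))
            markers = np.zeros_like(bmode, dtype=np.int32)
            # Strong Doppler power marks flowing blood (lumen); bright B-mode marks tissue.
            markers[doppler_power > np.percentile(doppler_power, 90)] = 1  # lumen
            markers[bmode > np.percentile(bmode, 75)] = 2                  # wall/tissue
            labels = watershed(elevation, markers)
            return labels == 1  # boolean lumen mask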

  15. Visualization and unsupervised predictive clustering of high-dimensional multimodal neuroimaging data.

    PubMed

    Mwangi, Benson; Soares, Jair C; Hasan, Khader M

    2014-10-30

    Neuroimaging machine learning studies have largely utilized supervised algorithms - meaning they require both neuroimaging scan data and corresponding target variables (e.g. healthy vs. diseased) to be successfully 'trained' for a prediction task. Noticeably, this approach may not be optimal or possible when the global structure of the data is not well known and the researcher does not have an a priori model to fit the data. We set out to investigate the utility of an unsupervised machine learning technique, t-distributed stochastic neighbour embedding (t-SNE), in identifying 'unseen' sample population patterns that may exist in high-dimensional neuroimaging data. Multimodal neuroimaging scans from 92 healthy subjects were pre-processed using atlas-based methods, integrated and input into the t-SNE algorithm. Patterns and clusters discovered by the algorithm were visualized using a 2D scatter plot and further analyzed using the K-means clustering algorithm. t-SNE was evaluated against classical principal component analysis. Remarkably, based on unlabelled multimodal scan data, t-SNE separated study subjects into two very distinct clusters which corresponded to subjects' gender labels (cluster silhouette index value = 0.79). The resulting clusters were used to develop an unsupervised minimum-distance clustering model which correctly identified subjects' gender in 93.5% of cases. Notably, from a neuropsychiatric perspective this method may allow discovery of data-driven disease phenotypes or sub-types of treatment responders. Copyright © 2014 Elsevier B.V. All rights reserved.
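
    The pipeline described (feature matrix in, t-SNE embedding, K-means on the embedding, silhouette evaluation) can be sketched in a few lines of Python with scikit-learn; the placeholder feature matrix, the PCA pre-reduction, and all parameter values are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.manifold import TSNE
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        # X: subjects x features matrix of pre-processed multimodal imaging measures.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(92, 500))  # placeholder for real scan-derived features

        # A modest PCA first is common practice before t-SNE on high-dimensional data.
        X_red = PCA(n_components=30).fit_transform(X)
        emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_red)

        # Unsupervised clustering of the 2D embedding, then cluster quality.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
        print("silhouette:", silhouette_score(emb, labels))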

  16. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    NASA Astrophysics Data System (ADS)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r, E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  17. Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan

    2012-02-01

    Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.

  18. Dual scan CT image recovery from truncated projections

    NASA Astrophysics Data System (ADS)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    Computerized tomography (CT) scanners for imaging small objects are available commercially and are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.

  19. Expanding the use of administrative claims databases in conducting clinical real-world evidence studies in multiple sclerosis.

    PubMed

    Capkun, Gorana; Lahoz, Raquel; Verdun, Elisabetta; Song, Xue; Chen, Weston; Korn, Jonathan R; Dahlke, Frank; Freitas, Rita; Fraeman, Kathy; Simeone, Jason; Johnson, Barbara H; Nordstrom, Beth

    2015-05-01

    Administrative claims databases provide a wealth of data for assessing the effect of treatments in clinical practice. Our aim was to propose methodology for real-world studies in multiple sclerosis (MS) using these databases. In three large US administrative claims databases: MarketScan, PharMetrics Plus and Department of Defense (DoD), patients with MS were selected using an algorithm identified in the published literature and refined for accuracy. Algorithms for detecting newly diagnosed ('incident') MS cases were also refined and tested. Methodology based on resource and treatment use was developed to differentiate between relapses with and without hospitalization. When various patient selection criteria were applied to the MarketScan database, an algorithm requiring two MS diagnoses at least 30 days apart was identified as the preferred method of selecting patient cohorts. Attempts to detect incident MS cases were confounded by the limited continuous enrollment of patients in these databases. Relapse detection algorithms identified similar proportions of patients in the MarketScan and PharMetrics Plus databases experiencing relapses with (2% in both databases) and without (15-20%) hospitalization in the 1 year follow-up period, providing findings in the range of those in the published literature. Additional validation of the algorithms proposed here would increase their credibility. The methods suggested in this study offer a good foundation for performing real-world research in MS using administrative claims databases, potentially allowing evidence from different studies to be compared and combined more systematically than in current research practice.

  20. Effects of Iterative Reconstruction Algorithms on Computer-assisted Detection (CAD) Software for Lung Nodules in Ultra-low-dose CT for Lung Cancer Screening.

    PubMed

    Nomura, Yukihiro; Higaki, Toru; Fujita, Masayo; Miki, Soichiro; Awaya, Yoshikazu; Nakanishi, Toshio; Yoshikawa, Takeharu; Hayashi, Naoto; Awai, Kazuo

    2017-02-01

    This study aimed to evaluate the effects of iterative reconstruction (IR) algorithms on computer-assisted detection (CAD) software for lung nodules in ultra-low-dose computed tomography (ULD-CT) for lung cancer screening. We selected 85 subjects who underwent both a low-dose CT (LD-CT) scan and an additional ULD-CT scan in our lung cancer screening program for high-risk populations. The LD-CT scans were reconstructed with filtered back projection (FBP; LD-FBP). The ULD-CT scans were reconstructed with FBP (ULD-FBP), adaptive iterative dose reduction 3D (AIDR 3D; ULD-AIDR 3D), and forward projected model-based IR solution (FIRST; ULD-FIRST). CAD software for lung nodules was applied to each image dataset, and the performance of the CAD software was compared among the different IR algorithms. The mean volume CT dose indexes were 3.02 mGy (LD-CT) and 0.30 mGy (ULD-CT). For overall nodules, the sensitivities of CAD software at 3.0 false positives per case were 78.7% (LD-FBP), 9.3% (ULD-FBP), 69.4% (ULD-AIDR 3D), and 77.8% (ULD-FIRST). Statistical analysis showed that the sensitivities of ULD-AIDR 3D and ULD-FIRST were significantly higher than that of ULD-FBP (P < .001). The performance of CAD software in ULD-CT was improved by using IR algorithms. In particular, the performance of CAD in ULD-FIRST was almost equivalent to that in LD-FBP. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  1. Preliminary Design and Analysis of the GIFTS Instrument Pointing System

    NASA Technical Reports Server (NTRS)

    Zomkowski, Paul P.

    2003-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Instrument is the next-generation spectrometer for remote sensing weather satellites. The GIFTS instrument will be used to perform scans of the Earth's atmosphere by assembling a series of fields of view (FOVs) into a larger pattern. Realization of this process is achieved by step-scanning the instrument FOV in a contiguous fashion across any desired portion of the visible Earth. A 2.3-arcsecond pointing stability, with respect to the scanning instrument, must be maintained for the duration of the FOV scan. A star tracker producing attitude data at a 100 Hz rate will be used by the autonomous pointing algorithm to precisely track target FOVs on the surface of the Earth. The main objective is to validate the pointing algorithm in the presence of spacecraft disturbances and determine acceptable disturbance limits from expected noise sources. Proof-of-concept validation of the pointing system algorithm is carried out with a full system simulation developed using MATLAB Simulink. Models for the following components function within the full system simulation: inertial reference unit (IRU), attitude control system (ACS), reaction wheels, star tracker, and mirror controller. With the spacecraft orbital position and attitude maintained to within specified limits, the pointing algorithm receives quaternion, ephemeris, and initialization data that are used to construct the required mirror pointing commands at a 100 Hz rate. This comprehensive simulation will also aid in obtaining a thorough understanding of spacecraft disturbances and other sources of pointing system errors. Parameter sensitivity studies and disturbance analysis will be used to obtain limits of operability for the GIFTS instrument. The culmination of this simulation development and analysis will be used to validate the specified performance requirements outlined for this instrument.

  2. Respiratory-gated segment reconstruction for radiation treatment planning using 256-slice CT-scanner during free breathing

    NASA Astrophysics Data System (ADS)

    Mori, Shinichiro; Endo, Masahiro; Kohno, Ryosuke; Minohara, Shinichi; Kohno, Kazutoshi; Asakura, Hiroshi; Fujiwara, Hideaki; Murase, Kenya

    2005-04-01

    The conventional respiratory-gated CT scan technique suffers from anatomic-motion-induced artifacts due to its low temporal resolution. These artifacts are a significant source of error in radiotherapy treatment planning for the thorax and upper abdomen. Temporal resolution and image quality are important factors in minimizing the planning target volume margin due to respiratory motion. To achieve high temporal resolution and a high signal-to-noise ratio, we developed a respiratory-gated segment reconstruction algorithm (RS-FDK) adapted from the Feldkamp-Davis-Kress (FDK) algorithm with a 256-detector-row CT. The 256-detector-row CT could scan approximately 100 mm in the cranio-caudal direction with 0.5 mm slice thickness in one rotation. Data acquisition for the RS-FDK relies on the assistance of a respiratory sensing system in a cine scan mode (the table remains stationary). We evaluated the RS-FDK in a phantom study with the 256-detector-row CT and compared it with full-scan (FS-FDK) and half-scan (HS-FDK) results with regard to volume accuracy and image noise, and finally applied the RS-FDK to an animal study. The RS-FDK gave a more accurate volume than the others and had the same signal-to-noise ratio as the FS-FDK. In the animal study, the RS-FDK visualized the clearest edges of the liver and pulmonary vessels of all the algorithms. In conclusion, the RS-FDK algorithm is capable of high temporal resolution and a high signal-to-noise ratio. Therefore it will be useful when combined with new radiotherapy techniques including image-guided radiation therapy (IGRT) and 4D radiation therapy.

  3. BODY SIZE-SPECIFIC EFFECTIVE DOSE CONVERSION COEFFICIENTS FOR CT SCANS.

    PubMed

    Romanyukha, Anna; Folio, Les; Lamart, Stephanie; Simon, Steven L; Lee, Choonsik

    2016-12-01

    Effective dose from computed tomography (CT) examinations is usually estimated using the scanner-provided dose-length product and conversion factors, also known as k-factors, which correspond to scan regions and differ by age according to five categories: 0, 1, 5 and 10 y, and adult. However, patients often deviate from the standard body size on which the conversion factor is based. In this study, a method for deriving body size-specific k-factors is presented, which can be determined from a simple regression curve based on patient diameter at the centre of the scan range. Using the International Commission on Radiological Protection reference paediatric and adult computational phantoms paired with Monte Carlo simulation of CT X-ray beams, the authors derived a regression-based k-factor model for the following CT scan types: head-neck, head, neck, chest, abdomen, pelvis, abdomen-pelvis (AP) and chest-abdomen-pelvis (CAP). The resulting regression functions were applied to a total of 105 paediatric and 279 adult CT scans randomly sampled from patients who underwent chest, AP and CAP scans at the National Institutes of Health Clinical Center. The authors calculated and compared the effective doses derived from the conventional age-specific k-factors with the values computed using their body size-specific k-factors. They found that by using the age-specific k-factor, paediatric patients tend to have underestimates (up to 3-fold) of effective dose, while underweight and overweight adult patients tend to have underestimates (up to 2.6-fold) and overestimates (up to 4.6-fold) of effective dose, respectively, compared with the effective dose determined from their body size-dependent factors. The authors present these size-specific k-factors as an alternative to the existing age-specific factors. The body size-specific k-factor will assess effective dose more precisely and on a more individual level than the conventional age-specific k-factors and, hence, improve awareness of the true exposure, which is important for the clinical community to understand. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
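
    The published regressions are derived from ICRP phantoms and Monte Carlo simulation; the Python sketch below only illustrates the mechanics of applying a size-dependent k-factor to a dose-length product, with the exponential functional form and coefficients invented for the example.

        import numpy as np

        def effective_dose(dlp_mGy_cm, diameter_cm, coeffs):
            """Effective dose E = k(d) * DLP, with a size-dependent k-factor.

            coeffs: (a, b) for an assumed exponential fit k(d) = a * exp(-b * d),
            where d is the patient diameter at the centre of the scan range.
            The functional form and numbers here are illustrative only.
            """
            a, b = coeffs
            k = a * np.exp(-b * diameter_cm)   # mSv per (mGy cm)
            return k * dlp_mGy_cm

        # Example: chest scan, DLP = 400 mGy cm, 28 cm diameter, made-up coefficients.
        print(effective_dose(400.0, 28.0, coeffs=(0.08, 0.05)))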

  4. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken from various views. The study also focuses on the effectiveness of using different refractive-index-matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as the RI-matched medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air by using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners as it is not possible to identify refracted rays in the sinogram space.
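
    The refraction correction enters where ray paths are traced to build the system matrix; what that matrix then feeds is a standard non-negativity-constrained ART sweep, sketched here in Python as a relaxed Kaczmarz iteration (function names and the relaxation value are assumptions).

        import numpy as np

        def art_nonneg(A, b, iters=20, relax=0.5):
            """Kaczmarz-style ART with a non-negativity constraint.

            A: (num_rays, num_voxels) system matrix whose rows hold ray path
               lengths (after any refraction-corrected ray tracing fills them in).
            b: measured projections. Returns the reconstructed attenuation vector.
            """
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(iters):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0:
                        continue
                    # Project x onto the hyperplane of ray i, with relaxation.
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                    np.maximum(x, 0, out=x)  # enforce non-negativity
            return x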

  5. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation-time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically required reducing the sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold, which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial-domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm that enhances the performance and accuracy of spatial-domain inversions. We show that the TNT algorithm reduces the execution time of spatial-domain inversions from months to hours and that inverse solution accuracy is improved, as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples, we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial-domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial-domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
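
    The TNT algorithm itself is not reproduced here; the Python sketch below only sets up the generic spatial-domain problem it accelerates, non-negative least squares over a dipole forward-model matrix, using SciPy's reference NNLS solver on placeholder data.

        import numpy as np
        from scipy.optimize import nnls

        # Forward model: each column of G is the field map (flattened) produced by a
        # unit dipole moment at one source location; d is the measured field map.
        rng = np.random.default_rng(1)
        G = np.abs(rng.normal(size=(400, 100)))   # placeholder 20x20 map, 100 sources
        m_true = np.zeros(100); m_true[[10, 55]] = 1e-14
        d = G @ m_true + 1e-16 * rng.normal(size=400)

        m, resid = nnls(G, d)   # non-negative moment magnitudes
        print("residual norm:", resid)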

  6. Highly efficient router-based readout algorithm for single-photon-avalanche-diode imagers for time-correlated experiments

    NASA Astrophysics Data System (ADS)

    Cominelli, A.; Acconcia, G.; Caldi, F.; Peronio, P.; Ghioni, M.; Rech, I.

    2018-02-01

    Time-Correlated Single Photon Counting (TCSPC) is a powerful tool that permits recording extremely fast optical signals with a precision down to a few picoseconds. On the other hand, it is recognized as a relatively slow technique, especially when a large time-resolved image is acquired exploiting a single acquisition channel and a scanning system. During recent years, much effort has been made towards the parallelization of many acquisition and conversion chains. In particular, the exploitation of Single-Photon Avalanche Diodes in standard CMOS technology has paved the way to the integration of thousands of independent channels on the same chip. Unfortunately, the presence of a large number of detectors can give rise to a huge rate of events, which can easily lead to the saturation of the transfer rate toward the elaboration unit. As a result, a smart readout approach is needed to guarantee efficient exploitation of the limited transfer bandwidth. We recently introduced a novel readout architecture, aimed at maximizing the counting efficiency of the system in typical TCSPC measurements. It features a limited number of high-performance converters, which are shared by a much larger array, while a smart routing logic provides dynamic multiplexing between the two parts. Here we propose a novel routing algorithm, which exploits standard digital gates distributed among a large 32x32 array to ensure a dynamic connection between detectors and external time-measurement circuits.

  7. A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.

    2013-07-01

    There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
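
    A hedged Python sketch of the underlying decision-theoretic calculation: the expected loss of a thresholded detector under a threat prior and asymmetric error costs, minimized over the threshold. The score distributions, prior, and costs are all placeholders, not values from the report.

        import numpy as np

        def risk(threshold, scores_threat, scores_benign, p_threat, c_fn, c_fp):
            """Expected loss of a 'flag if score > threshold' rule.

            scores_*: detector outputs sampled from each population.
            p_threat: prior probability a vehicle carries illicit material.
            c_fn, c_fp: consequences of a missed threat / needless secondary screening.
            """
            fnr = np.mean(scores_threat <= threshold)
            fpr = np.mean(scores_benign > threshold)
            return p_threat * c_fn * fnr + (1 - p_threat) * c_fp * fpr

        rng = np.random.default_rng(2)
        threat = rng.normal(3, 1, 10_000)     # placeholder score distributions
        benign = rng.normal(0, 1, 100_000)
        grid = np.linspace(-2, 6, 400)
        risks = [risk(t, threat, benign, p_threat=1e-4, c_fn=1e6, c_fp=1.0) for t in grid]
        print("best threshold:", grid[int(np.argmin(risks))])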

  8. Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Ghaffari, Azad

    The power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) depend strongly on system dynamics and environmental parameters, e.g., solar irradiance, temperature, and wind speed. Power optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithms. Gradient-based Extremum Seeking (ES), as a non-model-based MPPT algorithm, governs the system to its peak point on the steepest descent curve regardless of changes in the system dynamics and variations of the environmental parameters. Since the power map shape defines the gradient vector, a close estimate of the power map shape is needed to create user-assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood around the MPP. The estimate of the inverse of the Hessian, in combination with the estimate of the gradient vector, is the key part of implementing the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix. Also, we introduce a dynamic estimator to calculate the inverse of the Hessian, which is an essential part of our algorithm. We present various simulations and experiments on micro-converter PV systems to verify the validity of the proposed algorithm. The ES scheme can also be used in combination with other control algorithms to achieve the desired closed-loop performance. The WECS dynamics are slow, which causes an even slower response time for MPPT based on ES. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), in combination with feedback linearization to reduce the convergence time of the closed-loop system. Furthermore, the nonlinear control prevents magnetic saturation of the stator of the Induction Generator (IG). The proposed control algorithm in combination with the ES guarantees closed-loop system robustness with respect to high-level parameter uncertainty in the IG dynamics. The simulation results verify the effectiveness of the proposed algorithm.
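
    As a hedged illustration of gradient-based extremum seeking (not the thesis' Newton-based scheme), the Python sketch below runs the classic dither-demodulate-integrate loop on a static concave power map; the filter constants, gains, and the static-map stand-in for the plant dynamics are arbitrary choices.

        import numpy as np

        def extremum_seeking(power_of, v0, a=0.1, omega=5.0, k=0.5, dt=0.01, steps=20_000):
            """Perturbation-based extremum seeking on a static power map.

            power_of: callable returning power at an operating voltage (the plant).
            a, omega: dither amplitude and frequency; k: integrator gain.
            """
            v_hat, hp = v0, 0.0
            for n in range(steps):
                t = n * dt
                v = v_hat + a * np.sin(omega * t)          # dither injection
                p = power_of(v)
                hp += dt * omega * 0.5 * (p - hp)          # crude running average
                grad_est = (p - hp) * np.sin(omega * t)    # demodulation ~ gradient
                v_hat += dt * k * grad_est                 # gradient ascent
            return v_hat

        # Concave toy power map with its peak at v = 17 (a stand-in for a PV string).
        print(extremum_seeking(lambda v: 100 - (v - 17.0) ** 2, v0=10.0))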

  9. Constrained VPH+: a local path planning algorithm for a bio-inspired crawling robot with customized ultrasonic scanning sensor.

    PubMed

    Rao, Akshay; Elara, Mohan Rajesh; Elangovan, Karthikeyan

    This paper aims to develop a local path planning algorithm for a bio-inspired, reconfigurable crawling robot. A detailed description of the robotic platform is first provided, and the suitability for deployment of each of the current state-of-the-art local path planners is analyzed after an extensive literature review. The Enhanced Vector Polar Histogram algorithm is described and reformulated to better fit the requirements of the platform. The algorithm is deployed on the robotic platform in crawling configuration and favorably compared with other state-of-the-art local path planning algorithms.

  10. A Bootstrap Metropolis-Hastings Algorithm for Bayesian Analysis of Big Data.

    PubMed

    Liang, Faming; Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis; that is, to replace the full-data log-likelihood by a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively.
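
    A minimal Python sketch of the idea, assuming a NumPy data array and user-supplied log-likelihood and log-prior: the full-data log-likelihood in a random-walk Metropolis-Hastings step is replaced by a rescaled average over bootstrap subsamples (run serially here; the paper's exact bootstrap weighting and parallel mapping differ).

        import numpy as np

        def bmh_sample(data, loglik, prior_logpdf, theta0, n_iter=5000,
                       n_boot=20, subsample=1000, step=0.1, rng=None):
            """Bootstrap Metropolis-Hastings, sketched for a scalar parameter.

            The full-data log-likelihood is approximated by the average over
            n_boot bootstrap subsamples, each rescaled by len(data)/subsample
            so it has the magnitude of a full-data log-likelihood.
            """
            rng = rng or np.random.default_rng()
            n = len(data)

            def approx_loglik(theta):
                vals = []
                for _ in range(n_boot):
                    idx = rng.integers(0, n, size=subsample)  # bootstrap subsample
                    vals.append((n / subsample) * loglik(theta, data[idx]))
                return np.mean(vals)

            theta, lp = theta0, approx_loglik(theta0) + prior_logpdf(theta0)
            draws = []
            for _ in range(n_iter):
                prop = theta + step * rng.normal()            # random-walk proposal
                lp_prop = approx_loglik(prop) + prior_logpdf(prop)
                if np.log(rng.random()) < lp_prop - lp:       # MH accept/reject
                    theta, lp = prop, lp_prop
                draws.append(theta)
            return np.array(draws)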

  11. A Bootstrap Metropolis–Hastings Algorithm for Bayesian Analysis of Big Data

    PubMed Central

    Liang, Faming; Kim, Jinsu; Song, Qifan

    2016-01-01

    Markov chain Monte Carlo (MCMC) methods have proven to be a very powerful tool for analyzing data of complex structures. However, their computer-intensive nature, which typically requires a large number of iterations and a complete scan of the full dataset for each iteration, precludes their use for big data analysis. In this paper, we propose the so-called bootstrap Metropolis-Hastings (BMH) algorithm, which provides a general framework for taming powerful MCMC methods for big data analysis; that is, to replace the full-data log-likelihood by a Monte Carlo average of the log-likelihoods that are calculated in parallel from multiple bootstrap samples. The BMH algorithm possesses an embarrassingly parallel structure and avoids repeated scans of the full dataset in iterations, and is thus feasible for big data problems. Compared to the popular divide-and-combine method, BMH can be generally more efficient as it can asymptotically integrate the whole data information into a single simulation run. The BMH algorithm is very flexible. Like the Metropolis-Hastings algorithm, it can serve as a basic building block for developing advanced MCMC algorithms that are feasible for big data problems. This is illustrated in the paper by the tempering BMH algorithm, which can be viewed as a combination of parallel tempering and the BMH algorithm. BMH can also be used for model selection and optimization by combining with reversible jump MCMC and simulated annealing, respectively. PMID:29033469

  12. Red to far-red multispectral fluorescence image fusion for detection of fecal contamination on apples

    USDA-ARS?s Scientific Manuscript database

    This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...

  13. Hybrid detection of lung nodules on CT scan images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lin; Tan, Yongqiang; Schwartz, Lawrence H.

    Purpose: The diversity of lung nodules poses difficulty for the current computer-aided diagnostic (CAD) schemes for lung nodule detection on computed tomography (CT) scan images, especially in large-scale CT screening studies. We proposed a novel CAD scheme based on a hybrid method to address the challenges of detection in diverse lung nodules. Methods: The hybrid method proposed in this paper integrates several existing and widely used algorithms in the field of nodule detection, including morphological operation, dot-enhancement based on the Hessian matrix, fuzzy connectedness segmentation, local density maximum algorithm, geodesic distance map, and regression tree classification. All of the adopted algorithms were organized into tree structures with multiple nodes. Each node in the tree structure aimed to deal with one type of lung nodule. Results: The method has been evaluated on 294 CT scans from the Lung Image Database Consortium (LIDC) dataset. The CT scans were randomly divided into two independent subsets: a training set (196 scans) and a test set (98 scans). In total, the 294 CT scans contained 631 lung nodules, which were annotated by at least two radiologists participating in the LIDC project. The sensitivity and false positives per scan for the training set were 87% and 2.61, respectively. The sensitivity and false positives per scan for the testing set were 85.2% and 3.13. Conclusions: The proposed hybrid method yielded high performance on the evaluation dataset and exhibits advantages over existing CAD schemes. We believe that the present method would be useful for a wide variety of CT imaging protocols used in both routine diagnosis and screening studies.

  14. GCOM-W soil moisture and temperature algorithms and validation

    USDA-ARS?s Scientific Manuscript database

    Passive microwave remote sensing of soil moisture has matured over the past decade as a result of the Advanced Microwave Scanning Radiometer (AMSR) program of JAXA. This program has resulted in improved algorithms that have been supported by rigorous validation. Access to the products and the valida...

  15. Edge-following algorithm for tracking geological features

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1977-01-01

    A sequential edge-tracking algorithm employs circular scanning to permit effective real-time tracking of coastlines and rivers from Earth resources satellites. The technique eliminates expensive high-resolution cameras. The system might also be adaptable for application in monitoring automated assembly lines, inspecting conveyor belts, or analyzing thermographs or x-ray images.

  16. A survey of the state-of-the-art and focused research in range systems

    NASA Technical Reports Server (NTRS)

    Kung, Yao; Balakrishnan, A. V.

    1988-01-01

    In this one-year renewal of NASA Contract No. 2-304, basic research, development, and implementation in the areas of modern estimation algorithms and digital communication systems were performed. In the first area, a basic study on the conversion of general classes of practical signal processing algorithms into systolic array algorithms was considered, producing four publications. Also studied were the finite word-length effects and convergence rates of lattice algorithms, producing two publications. In the second area of study, the use of efficient importance-sampling simulation techniques for the evaluation of digital communication system performance was studied, producing two publications.

  17. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is based on the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that uses a fast deinterlacing algorithm rather than only the CAD algorithm. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD was proposed to reduce the overall computational load. A reliable condition to be used for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
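
    To make the region-adaptive idea concrete, here is a toy Python deinterlacer that applies line averaging (LA) where vertical activity is low and a simple three-direction edge-based average elsewhere; collapsing MELA/CAD to this ELA-style branch and the activity threshold are simplifying assumptions, not the paper's methods.

        import numpy as np

        def deinterlace_region_adaptive(field):
            """Reconstruct missing (odd) lines of a frame from its even lines.

            field: 2D array holding the even lines; odd lines are overwritten.
            Threshold of 10.0 grey levels is illustrative.
            """
            out = field.astype(float).copy()
            rows, cols = out.shape
            for r in range(1, rows - 1, 2):
                for c in range(1, cols - 1):
                    a, b = out[r - 1], out[r + 1]
                    # Candidate interpolation directions: 135 deg, vertical, 45 deg.
                    diffs = [abs(a[c - 1] - b[c + 1]),
                             abs(a[c] - b[c]),
                             abs(a[c + 1] - b[c - 1])]
                    if diffs[1] < 10.0:  # low vertical activity: plain region -> LA
                        out[r, c] = 0.5 * (a[c] + b[c])
                    else:                # directional (edge-based) averaging
                        k = int(np.argmin(diffs)) - 1
                        out[r, c] = 0.5 * (a[c + k] + b[c - k])
            return out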

  18. Development of visual peak selection system based on multi-ISs normalization algorithm to apply to methamphetamine impurity profiling.

    PubMed

    Lee, Hun Joo; Han, Eunyoung; Lee, Jaesin; Chung, Heesun; Min, Sung-Gi

    2016-11-01

    The aim of this study is to improve the resolution of impurity peaks using a newly devised normalization algorithm for multi-internal standards (ISs) and to describe a visual peak selection system (VPSS) for efficient support of impurity profiling. Drug trafficking routes, location of manufacture, or synthetic route can be identified from impurities in seized drugs. In the analysis of impurities, different chromatogram profiles are obtained from gas chromatography and used to examine similarities between drug samples. The data processing method using relative retention time (RRT) calculated from a single internal standard is not preferred when many internal standards are used and many chromatographic peaks are present, because of the risk of overlap between peaks and the difficulty in classifying impurities. In this study, impurities in methamphetamine (MA) were extracted by a liquid-liquid extraction (LLE) method using ethylacetate containing 4 internal standards and analyzed by gas chromatography-flame ionization detection (GC-FID). The newly developed VPSS consists of an input module, a conversion module, and a detection module. The input module imports chromatograms collected from GC and performs preprocessing; the data are then converted with a normalization algorithm in the conversion module; and finally the detection module detects the impurities in MA samples using a visualized zoning user interface. The normalization algorithm in the conversion module was used to convert the raw data from GC-FID. The VPSS with the built-in normalization algorithm can effectively detect different impurities in samples even in complex matrices and achieves high resolution while keeping the time sequence of chromatographic peaks the same as that of the RRT method. The system can widen the full range of chromatograms so that the peaks of impurities are better aligned for easy separation and classification. The resolution, accuracy, and speed of impurity profiling showed remarkable improvement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Novel edge treatment method for improving the transmission reconstruction quality in Tomographic Gamma Scanning.

    PubMed

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2018-05-01

    Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms, the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional method of edge-voxel treatment can introduce significant error and that the real irregular edge-voxel treatment method can improve the performance of TGS by producing better transmission reconstruction images. With the real irregular edge-voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but the MLEM algorithm is superior to the ART algorithm when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

    The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D-MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MR-guided radiation therapy (MR-gRT). A 3D stack-of-stars (SoS), self-navigated 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high-performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, using computing hardware more advanced than that typically seen on product MRI scanners, can result in acquisition and reconstruction of high-quality respiratory-correlated 4D-MRI images in less than five minutes.

  1. TU-A-12A-07: CT-Based Biomarkers to Characterize Lung Lesion: Effects of CT Dose, Slice Thickness and Reconstruction Algorithm Based Upon a Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, B; Tan, Y; Tsai, W

    2014-06-15

    Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to imaging acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measuring image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2 mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25 mm with continuous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for 3 confounding factors (size, density and shape of phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low-dose CT scans (25 mAs and 50 mAs). About 50% of texture features were found statistically different between low doses and high doses (100 and 200 mAs). Significant differences were found for almost all features when calculated on 1.25 mm, 2.5 mm, and 5 mm slice-thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features. Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to varying degrees, as our study has shown.

  2. Development and evaluation of an articulated registration algorithm for human skeleton registration

    NASA Astrophysics Data System (ADS)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU > 150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performance of all three registration algorithms were assessed using the Dice similarity index—DSIobserved, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation, as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement, with DSIatlas of 90 ± 3%. The articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in skeleton registration, as the rigid registration algorithm failed to capture the skeletal flexibility at the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones, as over 60% of the skeletons were deformed. Articulated registration is superior to rigid and deformable registration by capturing global flexibility while preserving the local rigidity inherent in skeleton registration. Therefore, articulated registration can be employed to accurately register whole-body human skeletons, enabling treatment response assessment of various bone diseases.
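
    For reference, the Dice similarity index used throughout (DSIobserved, DSIatlas, DSIregister) is straightforward to compute on binary masks; a minimal Python sketch:

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice similarity index: DSI = 2|A intersect B| / (|A| + |B|), in percent."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 100.0 * 2.0 * np.logical_and(a, b).sum() / denom if denom else 100.0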

  3. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.

    2015-07-01

    With the rapid progress in the development of optoelectronic components and computational power, 3-D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This article proposes a new approach to measure tiny internal 3-D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with corresponding X-ray 3-D data as ground truth, and the quantification was analyzed by the Iterative Closest Point algorithm.

  4. An acoustic backscatter thermometer for remotely mapping seafloor water temperature

    NASA Astrophysics Data System (ADS)

    Jackson, Darrell R.; Dworski, J. George

    1992-01-01

    A bottom-mounted, circularly scanning sonar operating at 40 kHz has been used to map changes in water sound speed over a circular region 150 m in diameter. If it is assumed that the salinity remains constant, the change in sound speed can be converted to a change in temperature. For the present system, the spatial resolution is 7.5 m and the temperature resolution is 0.05°C. The technique is based on comparison of successive sonar scans by means of a correlation algorithm. The algorithm is illustrated using data from the Sediment Transport Events on Slopes and Shelves (STRESS) experiment.
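
    At fixed salinity, the conversion from a sound-speed change to a temperature change follows from the temperature derivative of a standard seawater sound-speed formula; the Python sketch below uses Medwin-type coefficients as an approximation, not necessarily the formula used in the experiment.

        def delta_temperature(dc_ms, t_ref_c=10.0):
            """Convert a sound-speed change to a temperature change at fixed salinity.

            Uses the temperature derivative of a Medwin-type seawater sound-speed
            formula, dc/dT ~= 4.6 - 0.11*T + 0.00087*T**2 m/s per degC (at S = 35),
            which is about 3.6 m/s per degC near 10 degC. Values are approximate.
            """
            dcdt = 4.6 - 0.11 * t_ref_c + 0.00087 * t_ref_c ** 2
            return dc_ms / dcdt

        # A 0.19 m/s sound-speed change near 10 degC corresponds to roughly 0.05 degC,
        # consistent with the temperature resolution quoted above.
        print(delta_temperature(0.19))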

  5. Scanning microwave microscopy applied to semiconducting GaAs structures

    NASA Astrophysics Data System (ADS)

    Buchter, Arne; Hoffmann, Johannes; Delvallée, Alexandra; Brinciotti, Enrico; Hapiuk, Dimitri; Licitra, Christophe; Louarn, Kevin; Arnoult, Alexandre; Almuneau, Guilhem; Piquemal, François; Zeier, Markus; Kienberger, Ferry

    2018-02-01

    A calibration algorithm based on one-port vector network analyzer (VNA) calibration for scanning microwave microscopes (SMMs) is presented and used to extract quantitative carrier densities from a semiconducting n-doped GaAs multilayer sample. This robust and versatile algorithm is instrument and frequency independent, as we demonstrate by analyzing experimental data from two different, cantilever- and tuning fork-based, microscope setups operating in a wide frequency range up to 27.5 GHz. To benchmark the SMM results, comparison with secondary ion mass spectrometry is undertaken. Furthermore, we show SMM data on a GaAs p-n junction distinguishing p- and n-doped layers.

  6. Three-dimensional monochromatic x-ray CT

    NASA Astrophysics Data System (ADS)

    Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Ktsuyuki; Uyama, Chikao

    1995-08-01

    In this paper, we describe 3D computed tomography (3D CT) using monochromatic x-rays generated by synchrotron radiation, which performs a direct reconstruction of the 3D volume image of an object from its cone-beam projections. For the development of the 3D CT, the scanning orbit of the x-ray source needed to obtain complete 3D information about an object and the corresponding 3D image reconstruction algorithm are considered. Computer simulation studies demonstrate the validity of the proposed scanning method and reconstruction algorithm. A prototype experimental system for 3D CT was constructed. Basic phantom examinations and a specific-material CT image obtained by energy subtraction in this experimental system are shown.

  7. Dental cone-beam CT reconstruction from limited-angle view data based on compressed-sensing (CS) theory for fast, low-dose X-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, Uikyu; Cho, Hyosung; Lee, Minsik; Oh, Jieun; Park, Yeonok; Hong, Daeki; Park, Cheulkyu; Cho, Heemoon; Choi, Sungil; Koo, Yangseo

    2014-06-01

    Recently, reducing radiation doses has become an issue of critical importance in the broader radiological community. As a possible technical approach, especially in dental cone-beam computed tomography (CBCT), reconstruction from limited-angle view data (< 360°) would enable fast scanning with reduced doses to the patient. In this study, we investigated and implemented an efficient reconstruction algorithm based on compressed-sensing (CS) theory for this scan geometry and performed systematic simulation studies to investigate the image characteristics. We also performed experimental work by applying the algorithm to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in incomplete-data problems. We successfully reconstructed CBCT images with incomplete projections acquired at selected scan angles of 120, 150, 180, and 200° with a fixed angle step of 1.2° and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of CS-based reconstruction from limited-angle view data show that the algorithm can be applied directly to current dental CBCT systems for reducing imaging doses and further improving image quality.

  8. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport.

    PubMed

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L; Kikinis, Ron; Tannenbaum, Allen

    2008-09-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets.

  9. Multimodal Registration of White Matter Brain Data via Optimal Mass Transport

    PubMed Central

    Rehman, Tauseefur; Haber, Eldad; Pohl, Kilian M.; Haker, Steven; Halle, Mike; Talos, Florin; Wald, Lawrence L.; Kikinis, Ron; Tannenbaum, Allen

    2017-01-01

    The elastic registration of medical scans from different acquisition sequences is becoming an important topic for many research labs that would like to continue the post-processing of medical scans acquired via the new generation of high-field-strength scanners. In this note, we present a parameter-free registration algorithm that is well suited for this scenario as it requires no tuning to specific acquisition sequences. The algorithm encompasses a new numerical scheme for computing elastic registration maps based on the minimizing flow approach to optimal mass transport. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. We apply the algorithm to register the white matter folds of two different scans and use the results to parcellate the cortex of the target image. To the best of our knowledge, this is the first time that the optimal mass transport function has been applied to register large 3D multimodal data sets. PMID:28626844

  10. biobambam: tools for read pair collation based algorithms on BAM files

    PubMed Central

    2014-01-01

    Background Sequence alignment data is often ordered by coordinate (id of the reference sequence plus position on the sequence where the fragment was mapped) when stored in BAM files, as this simplifies the extraction of variants between the mapped data and the reference or of variants within the mapped data. In this order, paired reads are usually separated in the file, which complicates some other applications, like duplicate marking or conversion to the FastQ format, which require access to the full information of the pairs. Results In this paper we introduce biobambam, a set of tools based on the efficient collation of alignments in BAM files by read name. The employed collation algorithm avoids time- and space-consuming sorting of alignments by read name where this is possible without using more than a specified amount of main memory. Using this algorithm, tasks like duplicate marking in BAM files and conversion of BAM files to the FastQ format can be performed very efficiently with limited resources. We also make the collation algorithm available in the form of an API for other projects. This API is part of the libmaus package. Conclusions In comparison with previous approaches to problems involving the collation of alignments by read name, like the BAM to FastQ or duplicate marking utilities, our approach can often perform an equivalent task more efficiently in terms of the required main memory and run-time. Our BAM to FastQ conversion is faster than all widely known alternatives, including Picard and bamUtil. Our duplicate marking is about as fast as the closest competitor, bamUtil, for small data sets and faster than all known alternatives on large and complex data sets.
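
    The core idea, collating mates by read name without a full sort, can be illustrated with a toy Python generator. Unlike biobambam, this sketch keeps every unpaired read in a dictionary, whereas the real collation bounds main memory and spills overflow to temporary files; the record fields are simplified for illustration.

        def collate_pairs(records):
            """records: iterable of (read_name, read_data) in coordinate order.
            Yields (name, mate1, mate2) as soon as both mates have been seen."""
            pending = {}
            for name, read in records:
                mate = pending.pop(name, None)
                if mate is None:
                    pending[name] = read      # first mate seen; park it
                else:
                    yield name, mate, read    # second mate closes the pair

        # toy usage: coordinate order separates the mates of r1 and r2
        stream = [("r1", "A"), ("r2", "C"), ("r1", "T"), ("r2", "G")]
        for name, m1, m2 in collate_pairs(stream):
            print(name, m1, m2)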

  11. Two-dimensional thermography image retrieval from zig-zag scanned data with TZ-SCAN

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Yamasaki, Ryohei; Arai, Kohei

    2008-10-01

    TZ-SCAN is a simple and low-cost thermal imaging device which consists of a single-point radiation thermometer on a tripod with a pan-tilt rotator, a DC motor controller board with a USB interface, and a laptop computer for rotator control, data acquisition, and data processing. TZ-SCAN acquires a series of zig-zag scanned data and stores the data as a CSV file. A 2-D thermal distribution image can be retrieved by using the second quefrency peak calculated from the TZ-SCAN data. An experiment was conducted to confirm the validity of the thermal retrieval algorithm. The experimental result shows sufficient accuracy for 2-D thermal distribution image retrieval.
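
    The period of the zig-zag scan can be estimated from the cepstrum of the 1-D data stream and then used to fold the stream into a 2-D image. The sketch below is a plausible reading of that step, not the authors' exact procedure: the peak-picking rule and the un-reversal of alternate rows are assumptions.

        import numpy as np

        def cepstrum(x):
            # real cepstrum: inverse FFT of the log magnitude spectrum
            spec = np.abs(np.fft.fft(x)) + 1e-12
            return np.fft.ifft(np.log(spec)).real

        def estimate_period(x):
            c = cepstrum(x)
            half = len(c) // 2
            peaks = [q for q in range(2, half - 1) if c[q] > c[q - 1] and c[q] > c[q + 1]]
            peaks.sort(key=lambda q: c[q], reverse=True)
            return peaks[1] if len(peaks) > 1 else peaks[0]  # "second quefrency peak"

        def fold_to_image(x, period):
            rows = len(x) // period
            img = np.asarray(x[:rows * period]).reshape(rows, period)
            img[1::2] = img[1::2, ::-1]  # un-reverse every other row of the zig-zag
            return img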

  12. Automated Discovery of Long Intergenic RNAs Associated with Breast Cancer Progression

    DTIC Science & Technology

    2012-02-01

    manuscript in preparation), (2) development and publication of an algorithm for detecting gene fusions in RNA-Seq data [1], and (3) discovery of outlier long...subjected to de novo assembly algorithms to discover novel transcripts representing either unannotated genes or novel somatic mutations such as gene...fusions. To this end the P.I. developed and published a novel algorithm called ChimeraScan to facilitate the discovery and validation of gene

  13. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than describing a particular system. Recent technology advances and research in this field are also mentioned.

  14. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    PubMed

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little computation increase, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
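
    The effect of a mode-dependent scan order is easy to illustrate: the residual coefficients of a block are read out in a permutation chosen so that the likely zeros cluster at the end of the sequence. The template below is hypothetical (the paper derives its templates from per-mode residual statistics); only the mechanics of scanning and unscanning are shown.

        import numpy as np

        # hypothetical scan order for one intra mode of a 4x4 block:
        # read column by column instead of the default zig-zag
        SCAN_VERTICAL = np.array([0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15])

        def scan(block, order):
            return np.asarray(block).ravel()[order]

        def unscan(seq, order):
            flat = np.empty_like(seq)
            flat[order] = seq            # apply the inverse permutation
            return flat.reshape(4, 4)

        block = np.array([[5, 3, 1, 0], [2, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
        print(scan(block, SCAN_VERTICAL))  # zeros cluster toward the tail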

  15. Coalescent Times and Patterns of Genetic Diversity in Species with Facultative Sex: Effects of Gene Conversion, Population Structure, and Heterogeneity

    PubMed Central

    Hartfield, Matthew; Wright, Stephen I.; Agrawal, Aneil F.

    2016-01-01

    Many diploid organisms undergo facultative sexual reproduction. However, little is currently known concerning the distribution of neutral genetic variation among facultative sexual organisms except in very simple cases. Understanding this distribution is important when making inferences about rates of sexual reproduction, effective population size, and demographic history. Here we extend coalescent theory in diploids with facultative sex to consider gene conversion, selfing, population subdivision, and temporal and spatial heterogeneity in rates of sex. In addition to analytical results for two-sample coalescent times, we outline a coalescent algorithm that accommodates the complexities arising from partial sex; this algorithm can be used to generate multisample coalescent distributions. A key result is that when sex is rare, gene conversion becomes a significant force in reducing diversity within individuals. This can reduce genomic signatures of infrequent sex (i.e., elevated within-individual allelic sequence divergence) or entirely reverse the predicted patterns. These models offer improved methods for assessing null patterns of molecular variation in facultative sexual organisms. PMID:26584902

  16. Coalescent Times and Patterns of Genetic Diversity in Species with Facultative Sex: Effects of Gene Conversion, Population Structure, and Heterogeneity.

    PubMed

    Hartfield, Matthew; Wright, Stephen I; Agrawal, Aneil F

    2016-01-01

    Many diploid organisms undergo facultative sexual reproduction. However, little is currently known concerning the distribution of neutral genetic variation among facultative sexual organisms except in very simple cases. Understanding this distribution is important when making inferences about rates of sexual reproduction, effective population size, and demographic history. Here we extend coalescent theory in diploids with facultative sex to consider gene conversion, selfing, population subdivision, and temporal and spatial heterogeneity in rates of sex. In addition to analytical results for two-sample coalescent times, we outline a coalescent algorithm that accommodates the complexities arising from partial sex; this algorithm can be used to generate multisample coalescent distributions. A key result is that when sex is rare, gene conversion becomes a significant force in reducing diversity within individuals. This can reduce genomic signatures of infrequent sex (i.e., elevated within-individual allelic sequence divergence) or entirely reverse the predicted patterns. These models offer improved methods for assessing null patterns of molecular variation in facultative sexual organisms. Copyright © 2016 by the Genetics Society of America.

  17. Compressed-sensing wavenumber-scanning interferometry

    NASA Astrophysics Data System (ADS)

    Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli

    2018-01-01

    The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and the eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode hops, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using the randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI accounts for the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.

  18. Automated Processing of 2-D Gel Electrophoretograms of Genomic DNA for Hunting Pathogenic DNA Molecular Changes.

    PubMed

    Takahashi; Nakazawa; Watanabe; Konagaya

    1999-01-01

    We have developed automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on the RLGS (Restriction Landmark Genomic Scanning) method, which scans restriction enzyme recognition sites as landmarks and maps them onto a 2-D electrophoresis gel. Our processing algorithms realize automated spot recognition from RLGS electrophoretograms and automated comparison of a huge number of such images. In the final stage of the automated processing, a master spot pattern, onto which all the spots in the RLGS images are mapped at once, can be obtained. Spot pattern variations that seem to be specific to pathogenic DNA molecular changes can be easily detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon-tumor-specific spot pattern changes.

  19. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality can be obtained with this algorithm for different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedures. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
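
    A minimal sketch of such a memory-type thresholder: black and white reference levels are carried along the scanline and nudged toward each newly classified pixel, so the local threshold tracks background shading and ink density. The update gain and initial levels are assumptions, since the paper's exact update rule is not given here.

        def adaptive_binarize(scanline, alpha=0.05, black=0.0, white=255.0):
            """Classify pixels against the running midpoint of two reference levels."""
            bits = []
            for p in scanline:
                threshold = 0.5 * (black + white)
                if p > threshold:
                    bits.append(1)                  # background
                    white += alpha * (p - white)    # remember the local white level
                else:
                    bits.append(0)                  # text
                    black += alpha * (p - black)    # remember the local black level
            return bits

        print(adaptive_binarize([250, 240, 40, 235, 30, 228]))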

  20. Textural Evolution During Micro Direct Metal Deposition of NiTi Alloy

    NASA Astrophysics Data System (ADS)

    Khademzadeh, Saeed; Bariani, Paolo F.; Bruschi, Stefania

    2018-03-01

    In this research, a micro direct metal deposition process, newly developed as a potential method for micro additive manufacturing, was used to fabricate NiTi builds. The effect of scanning strategy on grain growth and textural evolution was investigated using a scanning electron microscope equipped with an electron backscatter diffraction detector. Investigations showed that the angle between successive single tracks plays an important role in the grain size distribution and textural evolution of the NiTi phase. A unidirectional laser beam scanning pattern developed a fiber texture; conversely, a backward-and-forward scanning pattern developed a strong <100> ∥ RD texture on the surface of NiTi cubic samples produced by micro direct metal deposition.

  1. Textural Evolution During Micro Direct Metal Deposition of NiTi Alloy

    NASA Astrophysics Data System (ADS)

    Khademzadeh, Saeed; Bariani, Paolo F.; Bruschi, Stefania

    2018-07-01

    In this research, a micro direct metal deposition process, newly developed as a potential method for micro additive manufacturing, was used to fabricate NiTi builds. The effect of scanning strategy on grain growth and textural evolution was investigated using a scanning electron microscope equipped with an electron backscatter diffraction detector. Investigations showed that the angle between successive single tracks plays an important role in the grain size distribution and textural evolution of the NiTi phase. A unidirectional laser beam scanning pattern developed a fiber texture; conversely, a backward-and-forward scanning pattern developed a strong <100> ∥ RD texture on the surface of NiTi cubic samples produced by micro direct metal deposition.

  2. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
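
    For reference, one Demons iteration estimates a force field from the intensity mismatch and the fixed-image gradient, adds it to the displacement, and smooths the displacement with a Gaussian. The sketch below implements the classic globally uniform smoothing; the paper's contribution, making that regularization locally adaptive (image-driven, spatially varying), is not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_2d(fixed, moving, n_iter=50, sigma=2.0, eps=1e-9):
            """Classic Demons: returns a displacement field u of shape (2, H, W)."""
            gy, gx = np.gradient(fixed)
            yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                                 np.arange(fixed.shape[1]), indexing="ij")
            u = np.zeros((2,) + fixed.shape)
            for _ in range(n_iter):
                warped = map_coordinates(moving, [yy + u[0], xx + u[1]],
                                         order=1, mode="nearest")
                diff = warped - fixed
                denom = gx**2 + gy**2 + diff**2 + eps
                # force step followed by Gaussian regularization of the field
                u[0] = gaussian_filter(u[0] - diff * gy / denom, sigma)
                u[1] = gaussian_filter(u[1] - diff * gx / denom, sigma)
            return u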

  3. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*

    PubMed Central

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-01-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm that eliminates the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. The proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm such as the minimum data requirement. PMID:20617122

  4. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.

    PubMed

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-02-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm that eliminates the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. The proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm such as the minimum data requirement.

  5. Ensemble LUT classification for degraded document enhancement

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir

    2008-01-01

    The fast evolution of scanning and computing technologies has led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations, such as legal litigation and security investigations, scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system, we have labeled a subset of the Frieder diaries collection. This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUT) in conjunction with the approximated nearest neighbor algorithm. The resulting algorithm is highly efficient. Experimental evaluation results are provided using the Frieder diaries collection.

  6. The Impacts of Bowtie Effect and View Angle Discontinuity on MODIS Swath Data Gridding

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei

    2007-01-01

    We have analyzed two effects of the MODIS viewing geometry on the quality of gridded imagery. First, the fact that MODIS scans a swath of the Earth 10 km wide at nadir causes an abrupt change of the view azimuth angle at the boundary of adjacent scans. This discontinuity appears as striping of the image, clearly visible in certain cases with viewing geometry close to the principal plane over snow or the glint area of water. The striping is a true surface Bi-directional Reflectance Factor (BRF) effect and should be preserved during gridding. Second, due to the bowtie effect, the observations in adjacent scans overlap each other. The commonly used method of calculating a grid cell value by averaging all overlapping observations may result in smearing of the image. This paper describes a refined gridding algorithm that takes the above two effects into account. By calculating the grid cell value by averaging the overlapping observations from a single scan, the new algorithm preserves the measured BRF signal and enhances the sharpness of the image.
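
    The refinement can be paraphrased in a few lines: group observations by (grid cell, scan), then average within only one scan per cell instead of across all overlapping scans. Choosing the scan with the most observations in the cell is an illustrative rule; the paper's actual selection criterion may differ.

        from collections import defaultdict

        def grid_single_scan(observations):
            """observations: iterable of (cell_id, scan_id, value).
            Averages within one scan per cell to preserve the per-scan BRF."""
            groups = defaultdict(list)
            for cell, scan, value in observations:
                groups[(cell, scan)].append(value)
            best = {}  # cell -> scan contributing the most samples (assumed rule)
            for (cell, scan), vals in groups.items():
                if cell not in best or len(vals) > len(groups[(cell, best[cell])]):
                    best[cell] = scan
            return {cell: sum(groups[(cell, scan)]) / len(groups[(cell, scan)])
                    for cell, scan in best.items()}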

  7. Mapping Daily and Maximum Flood Extents at 90-m Resolution During Hurricanes Harvey and Irma Using Passive Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Galantowicz, J. F.; Picton, J.; Root, B.

    2017-12-01

    Passive microwave remote sensing can provide a distinct perspective on flood events by virtue of wide sensor fields of view, frequent observations from multiple satellites, and sensitivity through clouds and vegetation. During Hurricanes Harvey and Irma, we used AMSR2 (Advanced Microwave Scanning Radiometer 2, JAXA) data to map flood extents starting from the first post-storm rain-free sensor passes. Our standard flood mapping algorithm (FloodScan) derives flooded fraction from 22-km microwave data (AMSR2 or NASA's GMI) in near real time and downscales it to 90-m resolution using a database built from topography, hydrology, and Global Surface Water Explorer data and normalized to microwave data footprint shapes. During Harvey and Irma we tested experimental versions of the algorithm designed to map the maximum post-storm flood extent rapidly and made a variety of map products available immediately for use in storm monitoring and response. The maps have several unique features including spanning the entire storm-affected area and providing multiple post-storm updates as flood water shifted and receded. From the daily maps we derived secondary products such as flood duration, maximum flood extent (Figure 1), and flood depth. In this presentation, we describe flood extent evolution, maximum extent, and local details as detected by the FloodScan algorithm in the wake of Harvey and Irma. We compare FloodScan results to other available flood mapping resources, note observed shortcomings, and describe improvements made in response. We also discuss how best-estimate maps could be updated in near real time by merging FloodScan products and data from other remote sensing systems and hydrological models.

  8. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive, depth-resolved retinal microvasculature images of the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more sensitive than the traditional OMAG system to tissue motion caused by the involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase-compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerability, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 with a single scan and 7 × 8 mm2 for multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 with a single scan). The great imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  9. SAR Processing Based On Two-Dimensional Transfer Function

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung; Jin, Michael Y.; Curlander, John C.

    1994-01-01

    The exact transfer function, ETF, is a two-dimensional transfer function that constitutes the basis of an improved frequency-domain-convolution algorithm for processing synthetic-aperture-radar (SAR) data. The ETF incorporates terms that account for the Doppler effect of the motion of the radar relative to the scanned ground area and for the antenna squint angle. The algorithm based on the ETF outperforms others.

  10. Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2011-06-01

    With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is only used as an initial value of our algorithm. We also realize the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
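
    The abstract's incremental variant is not spelled out, but its starting point, frequency-domain Wiener deconvolution with a sensor-measured PSF as the initial estimate, looks like the following; the regularization constant k is an assumption.

        import numpy as np

        def wiener_deconvolve(image, psf, k=0.01):
            """Restore an image given a PSF estimate via the Wiener filter."""
            padded = np.zeros_like(image, dtype=float)
            padded[:psf.shape[0], :psf.shape[1]] = psf
            # center the PSF at the origin so the filter introduces no shift
            padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), (0, 1))
            H = np.fft.fft2(padded)
            G = np.fft.fft2(image)
            F = np.conj(H) * G / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(F))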

  11. Using video-oriented instructions to speed up sequence comparison.

    PubMed

    Wozniak, A

    1997-04-01

    This document presents an implementation of the well-known Smith-Waterman algorithm for the comparison of protein and nucleic acid sequences, using specialized video instructions. These instructions, SIMD-like in their design, make parallelization of the algorithm possible at the instruction level. Benchmarks on an UltraSPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving, to our knowledge, the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP--a LArge Scale Sequence compArison Package developed at INRIA--which handles parallelism at a higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (18,531,385 residues) in 29 s. This procedure is not restricted to databank scanning. It applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.).
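
    The kernel being accelerated is the Smith-Waterman dynamic-programming recurrence; the cell updates along each anti-diagonal are independent, which is what SIMD-style video instructions exploit. A plain scalar sketch with a linear gap penalty (LASSAP's exact scoring scheme is not given here) looks like this:

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            """Return the best local-alignment score between sequences a and b."""
            H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0,                      # local-alignment floor
                                  H[i - 1][j - 1] + s,    # (mis)match
                                  H[i - 1][j] + gap,      # gap in b
                                  H[i][j - 1] + gap)      # gap in a
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))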

  12. A new scanning device in CT with dose reduction potential

    NASA Astrophysics Data System (ADS)

    Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph

    2006-03-01

    The amount of x-ray radiation currently applied in CT practice is not utilized optimally. A portion of radiation traversing the patient is either not detected at all or is used ineffectively. The reason lies partly in the reconstruction algorithms and partly in the geometry of the CT scanners designed specifically for these algorithms. In fact, the reconstruction methods widely used in CT are intended to invert the data that correspond to ideal straight lines. However, the collection of such data is often not accurate due to likely movement of the source/detector system of the scanner in the time interval during which all the detectors are read. In this paper, a new design of the scanner geometry is proposed that is immune to the movement of the CT system and will collect all radiation traversing the patient. The proposed scanning design has a potential to reduce the patient dose by a factor of two. Furthermore, it can be used with the existing reconstruction algorithm and it is particularly suitable for OPED, a new robust reconstruction algorithm.

  13. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.

  14. Registration of central paths and colonic polyps between supine and prone scans in computed tomography colonography: Pilot study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Ping; Napel, Sandy; Acar, Burak

    2004-10-01

    Computed tomography colonography (CTC) is a minimally invasive method that allows the evaluation of the colon wall from CT sections of the abdomen/pelvis. The primary goal of CTC is to detect colonic polyps, precursors to colorectal cancer. Because imperfect cleansing and distension can cause portions of the colon wall to be collapsed, covered with water, and/or covered with retained stool, patients are scanned in both prone and supine positions. We believe that both reading efficiency and computer aided detection (CAD) of CTC images can be improved by accurate registration of data from the supine and prone positions. We developed a two-stage approach that first registers the colonic central paths using a heuristic and automated algorithm and then matches polyps or polyp candidates (CAD hits) by a statistical approach. We evaluated the registration algorithm on 24 patient cases. After path registration, the mean misalignment distance between prone and supine identical anatomic landmarks was reduced from 47.08 to 12.66 mm, a 73% improvement. The polyp registration algorithm was specifically evaluated using eight patient cases for which radiologists identified polyps separately for both supine and prone data sets, and then manually registered corresponding pairs. The algorithm correctly matched 78% of these pairs without user input. The algorithm was also applied to the 30 highest-scoring CAD hits in the prone and supine scans and showed a success rate of 50% in automatically registering corresponding polyp pairs. Finally, we computed the average number of CAD hits that need to be manually compared in order to find the correct matches among the top 30 CAD hits. With polyp registration, the average number of comparisons was 1.78 per polyp, as opposed to 4.28 comparisons without polyp registration.

  15. Evaluation of the operational SAR based Baltic sea ice concentration products

    NASA Astrophysics Data System (ADS)

    Karvonen, Juha

    Sea ice concentration is an important ice parameter both for weather and climate modeling and for sea ice navigation. We have developed a fully automated algorithm for sea ice concentration retrieval using dual-polarized ScanSAR wide mode RADARSAT-2 data. RADARSAT-2 is a C-band SAR instrument enabling dual-polarized acquisition in ScanSAR mode. The swath width for the RADARSAT-2 ScanSAR mode is about 500 km, making it very suitable for operational sea ice monitoring. The polarization combination used in our concentration estimation is HH/HV. The SAR data are first preprocessed; the preprocessing consists of geo-rectification to the Mercator projection, incidence angle correction for both polarization channels, and SAR mosaicking. After preprocessing, a segmentation is performed on the SAR mosaics, and single-channel and dual-channel features are computed for each SAR segment. Finally, the SAR concentration is estimated based on these segment-wise features. The algorithm is similar to that introduced in Karvonen 2014. The ice concentration is computed daily using a daily RADARSAT-2 SAR mosaic as input, and it thus gives the concentration estimate at each Baltic Sea location based on the most recent SAR data at that location. The algorithm has been run in an operational test mode since January 2014. We present an evaluation of the SAR-based concentration estimates for the Baltic ice season 2014 by comparing the SAR results with gridded Finnish Ice Service ice charts and ice concentration estimates from a radiometer algorithm (AMSR-2 Bootstrap algorithm results). References: J. Karvonen, Baltic Sea Ice Concentration Estimation Based on C-Band Dual-Polarized SAR Data, IEEE Transactions on Geoscience and Remote Sensing, in press, DOI: 10.1109/TGRS.2013.2290331, 2014.

  16. Contrast features of breast cancer in frequency-domain laser scanning mammography

    NASA Astrophysics Data System (ADS)

    Moesta, K. Thomas; Fantini, Sergio; Jess, Helge; Totkas, Susan; Franceschini, Maria-Angela; Kaschke, Michael; Schlag, Peter M.

    1998-04-01

    Frequency-domain optical mammography has been advocated to improve contrast, and thus cancer detectability, in breast transillumination. To the best of our knowledge, this report provides the first systematic clinical results of a frequency-domain laser scanning mammograph (FLM). The instrument provides monochromatic light at 690 and 810 nm, whose intensity is modulated at 110.0008 MHz. The breast is scanned by stepwise positioning of source and detector, and the amplitude and phase for both wavelengths are measured by a photomultiplier tube using heterodyne detection. Images are formed representing amplitude or phase data on linear gray scales. Furthermore, various algorithms combining more than one signal were tried. Twenty visible cancers out of 25 cancers in the first 59 investigations were analyzed for their quantitative contrast with respect to the whole breast or to defined reference areas. Contrast definitions refer to the signal itself or to the signal noise, or were based on nonparametric comparison. The amplitude signal provides better contrast than the phase signal. Ratio images between red and IR amplitudes gave variable results; in some cases the tumor contrast was canceled. The algorithms to determine μa and μs′ from amplitude and phase data did not significantly improve objective contrast. The N algorithm, which uses the phase signal to flatten the amplitude signal, did significantly improve contrast according to contrast definitions 1 and 2; however, it did not improve nonparametric contrast. Thus, with the current instrumentation, the phase signal is helpful to correct for the complex and variable geometry of the breast. However, an independent informational content for tumor differentiation could not be determined. The flat-field algorithm did greatly enhance optical contrast in comparison with amplitude or amplitude-ratio images. Further evaluation of FLM will have to be based on the N-algorithm images.

  17. Three-step sequential positioning algorithm during sonographic evaluation for appendicitis increases appendiceal visualization rate and reduces CT use.

    PubMed

    Chang, Stephanie T; Jeffrey, R Brooke; Olcott, Eric W

    2014-11-01

    The purpose of this article is to examine the rates of appendiceal visualization by sonography, imaging-based diagnoses of appendicitis, and CT use after appendiceal sonography, before and after the introduction of a sonographic algorithm involving sequential changes in patient positioning. We used a search engine to retrospectively identify patients who underwent graded-compression sonography for suspected appendicitis during 6-month periods before (period 1; 419 patients) and after (period 2; 486 patients) implementation of a new three-step positional sonographic algorithm. The new algorithm included initial conventional supine scanning and, as long as the appendix remained nonvisualized, left posterior oblique scanning and then "second-look" supine scanning. Abdominal CT within 7 days after sonography was recorded. Between periods 1 and 2, appendiceal visualization on sonography increased from 31.0% to 52.5% (p < 0.001), postsonography CT use decreased from 31.3% to 17.7% (p < 0.001), and the proportion of imaging-based diagnoses of appendicitis made by sonography increased from 63.8% to 85.7% (p = 0.002). The incidence of appendicitis diagnosed by imaging (either sonography or CT) remained similar at 16.5% and 17.3%, respectively (p = 0.790). Sensitivity and overall accuracy were 57.8% (95% CI, 44.8-70.1%) and 93.0% (95% CI, 90.1-95.3%), respectively, in period 1 and 76.5% (95% CI, 65.8-85.2%) and 95.4% (95% CI, 93.1-97.1%), respectively, in period 2. Similar findings were observed for adults and children. Implementation of an ultrasound algorithm with sequential positioning significantly improved the appendiceal visualization rate and the proportion of imaging-based diagnoses of appendicitis made by ultrasound, enabling a concomitant decrease in abdominal CT use in both children and adults.

  18. SU-F-T-407: Artifact Reduction with Dual Energy Or IMAR: Who’s Winning?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elder, E; Schreibmann, E; Dhabaan, A

    2016-06-15

    Purpose: The purpose of this abstract was to evaluate the performance of commercial strategies for artifact reduction in radiation oncology settings. The iterative metal artifact reduction (Siemens iMAR) algorithm and monoenergetic virtual datasets reconstructed from dual energy scans are compared side-by-side in their ability to image in the presence of metal inserts. Methods: A CIRS ATOM Dosimetry Verification Phantom was scanned with and without a metal insert on a SOMATOM Definition AS dual energy scanner. Images with the metal insert were reconstructed with (a) a traditional single-energy CT scan with the iMAR option implemented, using different artifact reduction settings, and (b) a monoenergetic scan calculated from dual energy scans by recovering differences in the energy dependence of the attenuation coefficients of different materials and then creating a virtual monoenergetic scan from these coefficients. The iMAR and monoenergetic scans were then compared with the metal-free scan to assess changes in HU numbers and noise within a region around the metal insert. Results: Both the iMAR and dual energy scans reduced artifacts produced by the metal insert. However, the iMAR results depend on the selected algorithm settings, with a mean HU difference ranging from 0.65 to 90.40 for the different options. The mean difference without the iMAR correction was 38.74. When using the dual energy scan, the mean difference was 4.53, which, however, is attributed to increased noise rather than artifacts, as the dual energy scan had the lowest skewness (2.52) compared to the iMAR scans (ranging from 3.90 to 4.88) and the lowest kurtosis (5.72 for dual energy, versus a range of 18.19 to 27.36 for iMAR). Conclusion: Both approaches accurately recovered HU numbers; however, the dual energy method produced smaller residual artifacts.

  19. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    The segmentation algorithm is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the objects, we propose a projection-based layout segmentation algorithm. The algorithm first partitions the text image into several columns; then, by scanning the projection of each column, the text image is divided into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of projection methods, while avoiding the effect of arc-shaped image information on page segmentation, and can accurately segment text images with complex layouts.
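
    A minimal version of the projection step: sum the binary image along one axis and cut wherever the profile falls to zero. Applying this first across columns and then within each column reproduces the two-level splitting described above; the zero-gap criterion is a simplification of whatever gap threshold the authors actually use.

        import numpy as np

        def split_runs(profile):
            """Return (start, end) index pairs where the projection is nonzero."""
            regions, start = [], None
            for i, v in enumerate(profile):
                if v > 0 and start is None:
                    start = i
                elif v == 0 and start is not None:
                    regions.append((start, i)); start = None
            if start is not None:
                regions.append((start, len(profile)))
            return regions

        def segment(page):
            """page: 2D binary array (1 = ink). Split into columns, then rows within each."""
            blocks = []
            for c0, c1 in split_runs(page.sum(axis=0)):        # vertical projection
                column = page[:, c0:c1]
                for r0, r1 in split_runs(column.sum(axis=1)):  # horizontal projection
                    blocks.append((r0, r1, c0, c1))
            return blocks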

  20. Automatic lesion tracking for a PET/CT based computer aided cancer therapy monitoring system

    NASA Astrophysics Data System (ADS)

    Opfer, Roland; Brenner, Winfried; Carlsen, Ingwer; Renisch, Steffen; Sabczynski, Jörg; Wiemker, Rafael

    2008-03-01

    Response assessment of cancer therapy is a crucial component of a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem. It can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans, we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the baseline PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We have validated our method using data from 7 patients. Each patient underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. As a result, we found that the automatic detection of the corresponding lesions yielded SUV measurements nearly identical to the manually measured SUVs. Between the 38 maximum SUVs derived from manually and automatically detected lesions, we observed a correlation of 0.9994 and an average error of 0.4 SUV units.

  1. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose non-contrast, non-ECG-gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
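
    For context, the Agatston score weights each lesion's area by a factor determined by its peak attenuation; the sketch below follows the conventional weight bands, with the detection threshold raised to 160 HU as in the paper's low-dose setting. Lesion labeling is assumed to have been done upstream, and details such as a minimum-area criterion are omitted.

        import numpy as np

        def agatston_weight(peak_hu):
            # conventional density weighting bands
            if peak_hu >= 400: return 4
            if peak_hu >= 300: return 3
            if peak_hu >= 200: return 2
            return 1

        def agatston_score(hu_slices, label_slices, pixel_area_mm2, threshold=160):
            """hu_slices: 2D HU arrays; label_slices: matching lesion-id arrays (0 = none)."""
            score = 0.0
            for hu, lab in zip(hu_slices, label_slices):
                for lesion in np.unique(lab[lab > 0]):
                    mask = (lab == lesion) & (hu >= threshold)
                    if mask.any():
                        area = mask.sum() * pixel_area_mm2
                        score += area * agatston_weight(hu[mask].max())
            return score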

  2. Reliability and accuracy of sleep apnea scans in novel cardiac resynchronization therapy devices: an independent report of two cases.

    PubMed

    Fox, Henrik; Nölker, Georg; Gutleben, Klaus-Jürgen; Bitter, Thomas; Horstkotte, Dieter; Oldenburg, Olaf

    2014-03-01

    Pacemaker apnea scan algorithms are able to screen for sleep apnea. We investigated whether these systems were able to accurately detect sleep-disordered breathing (SDB) in two patients from an outpatient clinic. The first patient suffered from ischemic heart failure and severe central sleep apnea (CSA) and underwent adaptive servoventilation therapy (ASV). The second patient suffered from dilated cardiomyopathy and moderate obstructive sleep apnea (OSA). Pacemaker read-outs did not match polysomnography (PSG) recordings well and overestimated the apnea-hypopnea index. However, ASV therapy-induced SDB improvements were adequately recognized by the apnea scan of the Boston Scientific INVIVE® cardiac resynchronization therapy pacemaker. Detection of obstructive respiratory events using impedance-based technology may underestimate the number of events, as ineffective breathing efforts induce impedance changes without significant airflow. By contrast, in the second case, the apnea scan overestimated the number of total events and of obstructive events, perhaps owing to a very sensitive but less specific hypopnea definition and detection within the diagnostic algorithm of the device. These two cases show that a pacemaker apnea scan is able to reflect SDB, but it falls well short of PSG precision. The device scan revealed the decline of SDB through ASV therapy for CSA in one patient, but not for OSA in the second case. To achieve reliable monitoring of SDB, further technical developments and clinical studies are necessary.

  3. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hao; Folkerts, Michael; Jiang, Steve B., E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, a 0.204 mm average difference and 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase. Conclusions: High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
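
    Forward-backward splitting itself is a generic scheme for minimizing f(x) + g(x): take a gradient step on the smooth term f, then apply the proximal operator of g. In the paper the two halves become image reconstruction and deformable registration; the self-contained sketch below shows the structure on the textbook L1-regularized least-squares problem instead, with illustrative data.

        import numpy as np

        def fbs_l1(A, b, lam=0.1, n_iter=300):
            """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x - step * (A.T @ (A @ x - b))  # forward: gradient step on the smooth term
                x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # backward: prox of lam*||.||_1
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]
        print(fbs_l1(A, A @ x_true).round(2).nonzero()[0])  # approximately recovers the sparse support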

  4. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy.

    PubMed

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-10-01

    Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.

  5. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy

    PubMed Central

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-01-01

    Purpose: Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4–6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. Methods: The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Results: Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3–8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm. With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. Conclusions: 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts. PMID:21992381

  6. Mode conversion between Alfvén wave eigenmodes in axially inhomogeneous two-ion-species plasmas

    NASA Astrophysics Data System (ADS)

    Roberts, D. R.; Hershkowitz, N.; Tataronis, J. A.

    1990-04-01

    The uniform cylindrical plasma model of Litwin and Hershkowitz [Phys. Fluids 30, 1323 (1987)] is shown to predict mode conversion between the lowest radial order m=+1 fast magnetosonic surface and slow ion-cyclotron global eigenmodes of the Alfvén wave at the light-ion species Alfvén resonance of a cold two-ion plasma. A hydrogen (h)-deuterium (d) plasma is examined in experiments. The fast mode is efficiently excited by a rotating field antenna array at ω ∼ Ωh in the central cell of the Phaedrus-B tandem mirror [Phys. Rev. Lett. 51, 1955 (1983)]. Radially scanned magnetic probes observe the propagating eigenmode wave fields within a shallow central cell magnetic gradient in which the conversion zone is axially localized according to nd/nh. A low radial-order slow ion-cyclotron mode, observed in the vicinity of the conversion zone, gives evidence for the predicted mode conversion.

  7. Optimization of process factors for self-healing vanadium-based conversion coating on AZ31 magnesium alloy

    NASA Astrophysics Data System (ADS)

    Li, Kun; Liu, Junyao; Lei, Ting; Xiao, Tao

    2015-10-01

    A self-healing vanadium-based conversion coating was prepared on AZ31 magnesium alloy. The optimum operating conditions, including vanadia solution concentration, pH, and treatment temperature, for obtaining the most corrosion-protective vanadia coatings and improved localized corrosion resistance of the magnesium substrate were determined by an orthogonal experiment design. The surface morphology and composition of the resultant conversion coatings were investigated by scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). The self-healing behavior of the coating was investigated by a cross-cut immersion test and electrochemical impedance spectroscopy (EIS) measurements in 3.5% NaCl solution.

  8. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel; Zaidi, Habib

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R² values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  9. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with R(2) values of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  10. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, and at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]), 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed by using filtered back projection (FBP) algorithms and an IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced radiation dose levels (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
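
    The abstract does not state its exact formulas, so the following sketch uses the conventional ROI-based definitions of image noise, SNR, and CNR common in phantom CT studies; the phantom and ROI values are invented for illustration.

```python
import numpy as np

def roi_stats(image, mask):
    vals = image[mask]
    return vals.mean(), vals.std()

def snr(image, roi):
    """Signal-to-noise ratio: ROI mean over ROI standard deviation."""
    mean, sd = roi_stats(image, roi)
    return mean / sd

def cnr(image, roi_obj, roi_bkg):
    """Contrast-to-noise ratio: ROI contrast over background noise."""
    mean_obj, _ = roi_stats(image, roi_obj)
    mean_bkg, sd_bkg = roi_stats(image, roi_bkg)
    return abs(mean_obj - mean_bkg) / sd_bkg

# Toy example: a 50 HU insert in a noisy water background (sigma = 10 HU).
img = np.random.normal(0.0, 10.0, (128, 128))
img[40:60, 40:60] += 50.0
obj = np.zeros(img.shape, dtype=bool); obj[45:55, 45:55] = True
bkg = np.zeros(img.shape, dtype=bool); bkg[90:110, 90:110] = True
noise = roi_stats(img, bkg)[1]
print(f"noise={noise:.1f} HU  SNR={snr(img, obj):.1f}  CNR={cnr(img, obj, bkg):.1f}")
```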

  11. Comparison of the MPP with other supercomputers for LANDSAT data processing

    NASA Technical Reports Server (NTRS)

    Ozga, Martin

    1987-01-01

    The massively parallel processor is compared to the CRAY X-MP and the CYBER-205 for LANDSAT data processing. The maximum likelihood classification algorithm is the basis for comparison since this algorithm is simple to implement and vectorizes very well. The algorithm was implemented on all three machines and tested by classifying the same full scene of LANDSAT multispectral scan data. Timings are compared as well as features of the machines and available software.

  12. Semiannual Report, April 1, 1989 through September 30, 1989 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-02-01

    noise. Tobias B. Orloff Work began on developing a high quality rendering algorithm based on the radiosity method. The algorithm is similar to...previous progressive radiosity algorithms except for the following improvements: 1. At each iteration vertex radiosities are computed using a modified scan...line approach, thus eliminating the quadratic cost associated with a ray tracing computation of vertex radiosities. 2. At each iteration the scene is

  13. Model-free iterative control of repetitive dynamics for high-speed scanning in atomic force microscopy.

    PubMed

    Li, Yang; Bechhoefer, John

    2009-01-01

    We introduce an algorithm for calculating, offline or in real time and with no explicit system characterization, the feedforward input required for repetitive motions of a system. The algorithm is based on the secant method of numerical analysis and gives accurate motion at frequencies limited only by the signal-to-noise ratio and the actuator power and range. We illustrate the secant-solver algorithm on a stage used for atomic force microscopy.
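
    As a rough illustration of the idea (not necessarily the paper's exact formulation), a secant-style update can be applied element-wise to a repetitive waveform with no model of the system; the toy plant below is invented for the example.

```python
import numpy as np

def secant_feedforward(plant, r, n_iter=20):
    """Learn a feedforward input u so that plant(u) tracks the reference r,
    using an element-wise secant update on the tracking error. No explicit
    system characterization is required, in the spirit of the abstract."""
    u_prev = np.zeros_like(r)
    u = r.copy()                       # first guess: feed the reference through
    y_prev = plant(u_prev)
    for _ in range(n_iter):
        y = plant(u)
        denom = y - y_prev
        # Secant slope estimate du/dy, guarded against tiny denominators.
        slope = np.divide(u - u_prev, denom,
                          out=np.zeros_like(r), where=np.abs(denom) > 1e-9)
        slope = np.clip(slope, -10.0, 10.0)
        u, u_prev, y_prev = u + (r - y) * slope, u, y
    return u

# Toy "stage": an attenuating, smoothing response unknown to the algorithm.
def plant(u):
    return 0.8 * np.convolve(u, [0.2, 0.6, 0.2], mode="same")

t = np.linspace(0.0, 1.0, 200)
r = np.sin(2 * np.pi * 5 * t)          # desired repetitive motion
u = secant_feedforward(plant, r)
print(f"rms tracking error: {np.sqrt(np.mean((plant(u) - r) ** 2)):.2e}")
```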

  14. TU-F-CAMPUS-I-01: Investigation of the Effective Dose From Bolus Tracking Acquisitions at Different Anatomical Locations in the Chest for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nowik, P; Bujila, R; Merzan, D

    2015-06-15

    Purpose: Stationary table acquisitions (bolus tracking) in X-ray Computed Tomography (CT) can result in dose length products (DLP) comparable to spiral scans. It is today unclear whether or not the effective dose (E) for bolus tracking can be approximated using target region specific conversion factors (E/DLP). The purpose of this study was to investigate how E depends on the anatomical location of the bolus tracking in relation to chest CT scans with the same DLP. Methods: Effective doses were approximated for the ICRP 110 adult Reference Male (AM) and adult Reference Female (AF) computational voxel phantoms using software for CT dose approximations (pre-simulated MC data). The effective dose was first approximated for a chest CT scan using spiral technique and a CTDIvol (32 cm) of 6 mGy. The effective dose from the spiral scan was then compared to E approximated for contiguous bolus tracking acquisitions (1 cm separation), with a total collimation of 1 cm, over different locations of the chest of the voxel phantoms. The number of rotations used for the bolus tracking acquisitions was adjusted to yield the same DLP (32 cm) as the spiral scan. Results: Depending on the anatomical location of the bolus tracking, E ranged by factors of 1.3 to 6.8 for the AM phantom and 1.4 to 3.3 for the AF phantom, compared to the effective dose of the spiral scans. The greatest E for the bolus tracking acquisitions was observed for anatomical locations coinciding with breast tissue. This can be expected as breast tissue has a high tissue weighting factor in the calculation of E. Conclusion: For chest CT scans, the effective dose from bolus tracking is highly dependent on the anatomical location where the scan is administered and will not always be accurately represented using target region specific conversion factors.

  15. Improvements to the swath-level near-surface atmospheric state parameter retrievals within the NRL Ocean Surface Flux System (NFLUX)

    NASA Astrophysics Data System (ADS)

    May, J. C.; Rowley, C. D.; Meyer, H.

    2017-12-01

    The Naval Research Laboratory (NRL) Ocean Surface Flux System (NFLUX) is an end-to-end data processing and assimilation system used to provide near-real-time satellite-based surface heat flux fields over the global ocean. The first component of NFLUX produces near-real-time swath-level estimates of surface state parameters and downwelling radiative fluxes. The focus here is on the satellite swath-level state parameter retrievals, namely surface air temperature, surface specific humidity, and surface scalar wind speed over the ocean. Swath-level state parameter retrievals are produced from satellite sensor data records (SDRs) from four passive microwave sensors onboard 10 platforms: the Special Sensor Microwave Imager/Sounder (SSMIS) sensor onboard the DMSP F16, F17, and F18 platforms; the Advanced Microwave Sounding Unit-A (AMSU-A) sensor onboard the NOAA-15, NOAA-18, NOAA-19, Metop-A, and Metop-B platforms; the Advanced Technology Microwave Sounder (ATMS) sensor onboard the S-NPP platform; and the Advanced Microwave Scanning Radiometer 2 (AMSR2) sensor onboard the GCOM-W1 platform. The satellite SDRs are translated into state parameter estimates using multiple polynomial regression algorithms. The coefficients of the algorithms are obtained using a bootstrapping technique with all available brightness temperature channels for a given sensor, in addition to an SST field. For each retrieved parameter for each sensor-platform combination, unique algorithms are developed for ascending and descending orbits, as well as clear versus cloudy conditions. Each of the sensors produces surface air temperature and surface specific humidity retrievals. The SSMIS and AMSR2 sensors also produce surface scalar wind speed retrievals. Improvement is seen in the SSMIS retrievals when separate algorithms are used for the even and odd scans, with the odd scans performing better than the even scans. Currently, NFLUX treats all SSMIS scans as even scans. Additional improvement in all of the surface retrievals comes from using a 3-hourly SST field, as opposed to a daily SST field.

  16. Determination of the position of nucleus cochlear implant electrodes in the inner ear.

    PubMed

    Skinner, M W; Ketten, D R; Vannier, M W; Gates, G A; Yoffie, R L; Kalender, W A

    1994-09-01

    Accurate determination of intracochlear electrode position in patients with cochlear implants could provide a basis for detecting migration of the implant and could aid in the selection of stimulation parameters for sound processor programming. New computer algorithms for submillimeter resolution and 3-D reconstruction from spiral computed tomographic (CT) scans now make it possible to accurately determine the position of implanted electrodes within the cochlear canal. The accuracy of these algorithms was tested using an electrode array placed in a phantom model. Measurements of electrode length and interelectrode distance from spiral CT scan reconstructions were in close agreement with those from stereo microscopy. Although apparent electrode width was increased on CT scans due to partial volume averaging, a correction factor was developed for measurements from conventional radiographs and an expanded CT absorption value scale added to detect the presence of platinum electrodes and wires. The length of the cochlear canal was calculated from preoperative spiral CT scans for one patient, and the length of insertion of the electrode array was calculated from her postoperative spiral CT scans. The cross-sectional position of electrodes in relation to the outer bony wall and modiolus was measured and plotted as a function of distance with the electrode width correction applied.

  17. On the consistency among different approaches for nuclear track scanning and data processing

    NASA Astrophysics Data System (ADS)

    Inozemtsev, K. O.; Kushin, V. V.; Kodaira, S.; Shurshakov, V. A.

    2018-04-01

    The article describes various approaches to space radiation track measurement using the CR-39™ detector (Tastrak). The results of comparing different methods for track scanning and data processing are presented. Basic algorithms for the determination of track parameters are described. Each approach involves an individual set of measured track parameters. For two of the sets, track scanning in the plane of the detector surface is sufficient (2-D measurement); the third set requires scanning in an additional projection (3-D measurement). An experimental comparison of the considered techniques was made with the use of accelerated heavy ions (Ar, Fe, and Kr).

  18. The Combination of RSA And Block Chiper Algorithms To Maintain Message Authentication

    NASA Astrophysics Data System (ADS)

    Yanti Tarigan, Sepri; Sartika Ginting, Dewi; Lumban Gaol, Melva; Lorensi Sitompul, Kristin

    2017-12-01

    The RSA algorithm is a public-key algorithm based on prime numbers and is still in use today. Its strength lies in the exponentiation process and in the difficulty of factoring a large number into its two prime factors, which remains computationally infeasible. The RSA scheme itself adopts the block cipher scheme: prior to encryption, the plaintext is divided into several blocks of the same length, where the plaintext and ciphertext values are integers between 1 and n, n is typically 1024 bits, and the block length itself is less than or equal to log2(n)+1. With the combination of the RSA algorithm and a block cipher, it is expected that the authentication of the plaintext is secure. A message is first encrypted with the RSA algorithm and then encrypted again using the block cipher. Conversely, the ciphertext is first decrypted with the block cipher and then decrypted again with the RSA algorithm. This paper suggests a combination of the RSA algorithm and a block cipher to secure data.
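
    The layering described above can be sketched with textbook-sized RSA parameters and a trivial XOR stand-in for the block cipher; this illustrates the composition only and is in no way secure.

```python
# Toy sketch: encrypt with RSA first, then with a "block cipher"; decrypt in
# the reverse order. The XOR step is a placeholder for a real block cipher.
def rsa_keygen(p=61, q=53, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)               # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def rsa_apply(blocks, key):
    k, n = key
    return [pow(b, k, n) for b in blocks]

def xor_cipher(blocks, key):          # stand-in for a real block cipher
    return [b ^ key for b in blocks]  # XOR is its own inverse

pub, priv = rsa_keygen()
block_key = 0x2F5
plaintext = [ord(c) for c in "HELLO"]         # one small block per character

# Sender: RSA, then block cipher.
ciphertext = xor_cipher(rsa_apply(plaintext, pub), block_key)
# Receiver: block cipher first, then RSA.
recovered = rsa_apply(xor_cipher(ciphertext, block_key), priv)
print("".join(map(chr, recovered)))           # -> HELLO
```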

  19. Skylab S-191 spectrometer single spectral scan analysis program. [user manual

    NASA Technical Reports Server (NTRS)

    Downes, E. L.

    1974-01-01

    Documentation and user information for the S-191 single spectral scan analysis program are reported. A breakdown of the computational algorithms is supplied, followed by the program listing and examples of sample output. A copy of the flow chart which describes the driver routine in the body of the main program segment is included.

  20. New Algorithm to Enable Construction and Display of 3D Structures from Scanning Probe Microscopy Images Acquired Layer-by-Layer.

    PubMed

    Deng, William Nanqiao; Wang, Shuo; Ventrici de Souza, Joao; Kuhl, Tonya L; Liu, Gang-Yu

    2018-06-25

    Scanning probe microscopy (SPM), such as atomic force microscopy (AFM), is widely known for high-resolution imaging of surface structures and nanolithography in two dimensions (2D), providing important physical insights into surface science and material science. This work reports a new algorithm to enable construction and display of layer-by-layer 3D structures from SPM images. The algorithm enables alignment of SPM images acquired during layer-by-layer deposition and removal of redundant features, and faithfully constructs the deposited 3D structures. The display uses a "see-through" strategy to enable the structure of each layer to be visible. The results demonstrate high spatial accuracy as well as algorithm versatility; users can set parameters for reconstruction and display as per image quality and research needs. To the best of our knowledge, this method represents the first report of enabling SPM technology for 3D image construction and display. The detailed algorithm is provided to facilitate usage of the same approach in any SPM software. These new capabilities support wide applications of SPM that require 3D image reconstruction and display, such as 3D nanoprinting and 3D additive and subtractive manufacturing and imaging.

  1. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated, and the results were obtained by compressing and decompressing the test signals. The proposed algorithm was compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
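
    A minimal sketch of the thresholding and significance-map stages is given below, assuming the PyWavelets package for the DWT; for brevity a single global energy-packing threshold is used here, whereas the paper thresholds the three coefficient groups separately.

```python
import numpy as np
import pywt  # PyWavelets

def epe_threshold(coeffs, epe=0.99):
    """Smallest-magnitude cutoff keeping a fraction `epe` of the total energy."""
    mags = np.sort(np.abs(coeffs))[::-1]          # descending magnitudes
    cum = np.cumsum(mags ** 2) / np.sum(mags ** 2)
    k = int(np.searchsorted(cum, epe))            # first index reaching epe
    return mags[min(k, len(mags) - 1)]

def run_length_encode(bits):
    """(value, run length) pairs for a binary sequence."""
    runs, current, count = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

# Toy "ECG": a periodic signal plus noise, transformed and thresholded.
signal = np.cos(np.linspace(0, 8 * np.pi, 1024)) + 0.01 * np.random.randn(1024)
coeffs = np.concatenate(pywt.wavedec(signal, "db4", level=5))
threshold = epe_threshold(coeffs, epe=0.99)
sig_map = (np.abs(coeffs) >= threshold).astype(int)   # binary significance map
runs = run_length_encode(sig_map.tolist())
print(f"{sig_map.sum()} significant of {sig_map.size} coefficients, "
      f"{len(runs)} RLE runs")
```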

  2. Quantification of choroidal neovascularization vessel length using optical coherence tomography angiography

    NASA Astrophysics Data System (ADS)

    Gao, Simon S.; Liu, Li; Bailey, Steven T.; Flaxel, Christina J.; Huang, David; Li, Dengwang; Jia, Yali

    2016-07-01

    Quantification of choroidal neovascularization (CNV) as visualized by optical coherence tomography angiography (OCTA) may have importance clinically when diagnosing or tracking disease. Here, we present an automated algorithm to quantify the vessel skeleton of CNV as vessel length. Initial segmentation of the CNV on en face angiograms was achieved using saliency-based detection and thresholding. A level set method was then used to refine vessel edges. Finally, a skeleton algorithm was applied to identify vessel centerlines. The algorithm was tested on nine OCTA scans from participants with CNV and comparisons of the algorithm's output to manual delineation showed good agreement.
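
    The final skeletonization step can be sketched as follows, with a plain Otsu threshold standing in for the paper's saliency detection and level-set refinement; the pixel spacing is invented for the example.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def vessel_length(en_face, pixel_mm):
    """Total centerline length: skeleton pixel count times pixel spacing."""
    binary = en_face > threshold_otsu(en_face)
    skeleton = skeletonize(binary)          # 1-pixel-wide centerlines
    return skeleton.sum() * pixel_mm

# Toy angiogram: one bright horizontal "vessel" on a dark background.
img = 0.05 * np.random.rand(64, 64)
img[30:34, 8:56] = 1.0
print(f"vessel length ~ {vessel_length(img, pixel_mm=0.012):.2f} mm")
```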

  3. Improved artificial bee colony algorithm for vehicle routing problem with time windows

    PubMed Central

    Yan, Qianqian; Zhang, Mengjie; Yang, Yunong

    2017-01-01

    This paper investigates a well-known complex combinatorial problem known as the vehicle routing problem with time windows (VRPTW). Unlike the standard vehicle routing problem, each customer in the VRPTW is served within a given time constraint. This paper solves the VRPTW using an improved artificial bee colony (IABC) algorithm. The performance of this algorithm is improved by a local optimization based on a crossover operation and a scanning strategy. Finally, the effectiveness of the IABC is evaluated on some well-known benchmarks. The results demonstrate the power of the IABC algorithm in solving the VRPTW. PMID:28961252

  4. Automatic image analysis and spot classification for detection of pathogenic Escherichia coli on glass slide DNA microarrays

    USDA-ARS?s Scientific Manuscript database

    A computer algorithm was created to inspect scanned images from DNA microarray slides developed to rapidly detect and genotype E. coli O157 virulent strains. The algorithm computes centroid locations for signal and background pixels in RGB space and defines a plane perpendicular to the line connect...

  5. Synthetic Incoherence via Scanned Gaussian Beams

    PubMed Central

    Levine, Zachary H.

    2006-01-01

    Tomography, in most formulations, requires an incoherent signal. For a conventional transmission electron microscope, the coherence of the beam often results in diffraction effects that limit the ability to perform a 3D reconstruction from a tilt series with conventional tomographic reconstruction algorithms. In this paper, an analytic solution is given to a scanned Gaussian beam, which reduces the beam coherence to be effectively incoherent for medium-size (of order 100 voxels thick) tomographic applications. The scanned Gaussian beam leads to more incoherence than hollow-cone illumination. PMID:27274945

  6. CT cardiac imaging: evolution from 2D to 3D backprojection

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke

    2004-04-01

    The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms by approximating a cone beam geometry into a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, the multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat cone beam artifacts caused by large cone angle. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To get the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm approaching the edge of a CT detector. A helical body phantom is used to evaluate the ECG-gated cone beam reconstruction algorithm's z-coverage and capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 × 1.25 mm and normalized pitch 0.275 and 0.3, respectively, are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, the images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart from the perspective of cone beam artifact suppression and z-coverage while the temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time, and to improve patient comfort and x-ray dose efficiency. Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.

  7. Automated circumferential construction of first-order aqueous humor outflow pathways using spectral-domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Alex S.; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M.; Weinreb, Robert N.

    2017-06-01

    The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm's canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian Ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images, with a cross multiplication tool developed to initiate SC/CC detection, automated through a fuzzy hidden Markov Chain approach. Automatic segmentation of SC and initial CCs was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, only <1% of images (5114 total B-scans) were ungradable. The automatic segmentation algorithm performed well, detecting SC 98.3% of the time with <0.1% false positive detection compared to expert grader consensus. CC was detected 84.2% of the time with 1.4% false positive detection. The 3-D representation of AHO pathways demonstrated variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated detection of AHO angle structures in the living human eye with reconstruction was possible.

  8. Reproducibility of radiomics for deciphering tumor phenotype with imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H.

    2016-03-01

    Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative image features derived from routine radiologic imaging to improve cancer diagnosis, prognosis, prediction and response to therapy. Although radiomic features must be reproducible to qualify as biomarkers for clinical care, little is known about how routine imaging acquisition techniques/parameters affect reproducibility. To begin to fill this knowledge gap, we assessed the reproducibility of a comprehensive, commonly-used set of radiomic features using a unique, same-day repeat computed tomography data set from lung cancer patients. Each scan was reconstructed at 6 imaging settings, varying slice thicknesses (1.25 mm, 2.5 mm and 5 mm) and reconstruction algorithms (sharp, smooth). Reproducibility was assessed using the repeat scans reconstructed at identical imaging setting (6 settings in total). In separate analyses, we explored differences in radiomic features due to different imaging parameters by assessing the agreement of these radiomic features extracted from the repeat scans reconstructed at the same slice thickness but different algorithms (3 settings in total). Our data suggest that radiomic features are reproducible over a wide range of imaging settings. However, smooth and sharp reconstruction algorithms should not be used interchangeably. These findings will raise awareness of the importance of properly setting imaging acquisition parameters in radiomics/radiogenomics research.
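
    The abstract does not name its agreement statistic; the concordance correlation coefficient is a common choice for repeat-scan radiomics studies and is sketched here under that assumption.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Toy example: one radiomic feature from same-day test and retest scans.
rng = np.random.default_rng(0)
test = rng.normal(100.0, 10.0, 50)
retest = test + rng.normal(0.0, 2.0, 50)   # small repeat-scan variation
print(f"CCC = {ccc(test, retest):.3f}")    # near 1 -> reproducible feature
```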

  9. Fast local reconstruction by selective backprojection for low dose in dental computed tomography

    NASA Astrophysics Data System (ADS)

    Yan, Bin; Deng, Lin; Han, Yu; Zhang, Feng; Wang, Xian-Chao; Li, Lei

    2014-10-01

    The high radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce the radiation dose by reconstructing images from truncated data in a short scan. In dental CT, it can reduce the radiation dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other regions by using truncated projections. However, the limit of integration for backprojection varies per PI-line, resulting in low calculation efficiency and poor parallel performance. Recently, a tent BPF has been proposed to improve the calculation efficiency by rearranging the projection data. However, a memory-consuming data rebinning process is included. Accordingly, the selective BPF (S-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection. The finite Hilbert inverse is then applied to each PI-line segment. S-BPF avoids the influence of the variable limit of integration by selective backprojection without additional time or memory cost. A simulation experiment and a real experiment demonstrated the higher reconstruction efficiency of S-BPF.

  10. Comparison of x ray computed tomography number to proton relative linear stopping power conversion functions using a standard phantom.

    PubMed

    Moyers, M F

    2014-06-01

    Adequate evaluation of the results from multi-institutional trials involving light ion beam treatments requires consideration of the planning margins applied to both targets and organs at risk. A major uncertainty that affects the size of these margins is the conversion of x ray computed tomography numbers (XCTNs) to relative linear stopping powers (RLSPs). Various facilities engaged in multi-institutional clinical trials involving proton beams have been applying significantly different margins in their patient planning. This study was performed to determine the variance in the conversion functions used at proton facilities in the U.S.A. wishing to participate in National Cancer Institute sponsored clinical trials. A simplified method of determining the conversion function was developed using a standard phantom containing only water and aluminum. The new method was based on the premise that all scanners have their XCTNs for air and water calibrated daily to constant values but that the XCTNs for high density/high atomic number materials are variable with different scanning conditions. The standard phantom was taken to 10 different proton facilities and scanned with the local protocols resulting in 14 derived conversion functions which were compared to the conversion functions used at the local facilities. For tissues within ±300 XCTN of water, all facility functions produced converted RLSP values within ±6% of the values produced by the standard function and within 8% of the values from any other facility's function. For XCTNs corresponding to lung tissue, converted RLSP values differed by as great as ±8% from the standard and up to 16% from the values of other facilities. For XCTNs corresponding to low-density immobilization foam, the maximum to minimum values differed by as much as 40%. The new method greatly simplifies determination of the conversion function, reduces ambiguity, and in the future could promote standardization between facilities. Although it was not possible from these experiments to determine which conversion function is most appropriate, the variation between facilities suggests that the margins used in some facilities to account for the uncertainty in converting XCTNs to RLSPs may be too small.
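
    Applying such a conversion function reduces to a piecewise-linear lookup from XCTN to RLSP; the calibration points below are invented for illustration, since each facility derives its own from phantom scans.

```python
import numpy as np

# (XCTN, RLSP) calibration pairs -- air, lung-like, water, bone-like.
# These numbers are invented; real ones come from a facility's phantom scans.
xctn_pts = np.array([-1000.0, -700.0, 0.0, 1200.0])
rlsp_pts = np.array([0.001, 0.30, 1.00, 1.70])

def xctn_to_rlsp(xctn):
    """Piecewise-linear conversion, clamped at the calibration endpoints."""
    return np.interp(xctn, xctn_pts, rlsp_pts)

ct_slice = np.array([[-1000, -650, 40], [0, 300, 1100]])
print(xctn_to_rlsp(ct_slice))
```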

  11. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.

  12. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.

  13. Lead-free inverted planar formamidinium tin triiodide perovskite solar cells achieving power conversion efficiencies up to 6.22%

    DOE PAGES

    Liao, Weiqiang; Zhao, Dewei; Yu, Yue; ...

    2016-08-29

    Efficient lead (Pb)-free inverted planar formamidinium tin triiodide (FASnI 3) perovskite solar cells (PVSCs) are demonstrated. Our FASnI 3 PVSCs achieved average power conversion efficiencies (PCEs) of 5.41% ± 0.46% and a maximum PCE of 6.22% under forward voltage scan. Here, the PVSCs exhibit small photocurrent–voltage hysteresis and high reproducibility. The champion cell shows a steady-state efficiency of ≈6.00% for over 100 s.

  14. Predictive Factors for Visual Field Conversion: Comparison of Scanning Laser Polarimetry and Optical Coherence Tomography.

    PubMed

    Diekmann, Theresa; Schrems-Hoesl, Laura M; Mardin, Christian Y; Laemmer, Robert; Horn, Folkert K; Kruse, Friedrich E; Schrems, Wolfgang A

    2018-02-01

    The purpose of this study was to compare the ability of scanning laser polarimetry (SLP) and spectral-domain optical coherence tomography (SD-OCT) to predict future visual field conversion of subjects with ocular hypertension and early glaucoma. All patients were recruited from the Erlangen glaucoma registry and examined using standard automated perimetry, a 24-hour intraocular pressure profile, and optic disc photography. Peripapillary retinal nerve fiber layer thickness (RNFL) measurements were obtained by SLP (GDx-VCC) and SD-OCT (Spectralis OCT). Positive and negative predictive values (PPV, NPV) were calculated for morphologic parameters of SLP and SD-OCT. Kaplan-Meier survival curves were plotted and log-rank tests were performed to compare the survival distributions. Contingency tables and Venn diagrams were calculated to compare the predictive ability. The study included 207 patients: 75 with ocular hypertension, 85 with early glaucoma, and 47 controls. Median follow-up was 4.5 years. A total of 29 patients (14.0%) developed visual field conversion during follow-up. SLP temporal-inferior RNFL [0.667; 95% confidence interval (CI), 0.281-0.935] and SD-OCT temporal-inferior RNFL (0.571; 95% CI, 0.317-0.802) achieved the highest PPV; the nerve fiber indicator (0.923; 95% CI, 0.876-0.957) and SD-OCT mean (0.898; 95% CI, 0.847-0.937) achieved the highest NPV of all investigated parameters. The Kaplan-Meier curves confirmed significantly higher survival for subjects within normal limits of measurements of both devices (P<0.001). Venn diagrams tested with McNemar test statistics showed no significant difference for PPV (P=0.219) or NPV (P=0.678). Both GDx-VCC and SD-OCT demonstrate comparable results in predicting future visual field conversion if taking typical scans for GDx-VCC. In addition, the likelihood ratios suggest that GDx-VCC's nerve fiber indicator <30 may be the most useful parameter to confirm future nonconversion. (http://www.ClinicalTrials.gov number, NCT00494923; Erlangen Glaucoma Registry.)

  15. Accelerated defect visualization of microelectronic systems using binary search with fixed pitch-catch distance laser ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Park, Byeongjin; Sohn, Hoon

    2018-04-01

    The practicality of laser ultrasonic scanning is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated defect visualization technique is developed to visualize defects with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is also fixed during scanning to maintain a high signal-to-noise ratio in the measured ultrasonic responses. The approximate defect boundary is identified by examining the interactions between ultrasonic waves and the defect observed at scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach so that the interactions between ultrasonic waves and the defect can be better identified in the spatial ultrasonic domain. Then, the area inside the identified defect boundary is visualized as the defect. The performance of the proposed defect visualization technique is validated through an experiment on a semiconductor chip. The proposed technique accelerates the defect visualization process in three respects: (1) the number of measurements necessary for defect visualization is dramatically reduced by the binary search algorithm; (2) the amount of averaging necessary to achieve a high signal-to-noise ratio is reduced by keeping the wave propagation distance short; and (3) the defect can be identified at a lower spatial resolution than that required by full-field wave propagation imaging.
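
    The binary-search idea can be sketched in one dimension: bisect between a known-intact position and a known-defective one until the boundary is localized to the scan resolution. The measurement function below stands in for one pitch-catch laser ultrasonic acquisition.

```python
def find_boundary(has_defect_response, lo, hi, resolution):
    """Locate the defect edge between lo (intact) and hi (defective) to within
    `resolution`, using O(log((hi - lo)/resolution)) measurements instead of a
    full raster scan."""
    measurements = 0
    while hi - lo > resolution:
        mid = 0.5 * (lo + hi)
        measurements += 1
        if has_defect_response(mid):   # one pitch-catch acquisition at `mid`
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi), measurements

true_edge = 7.3   # mm; hidden from the algorithm
edge, n = find_boundary(lambda x: x >= true_edge, lo=0.0, hi=20.0,
                        resolution=0.05)
print(f"edge ~ {edge:.2f} mm after {n} measurements (vs 400 raster points)")
```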

  16. Apparatus for controlling the scan width of a scanning laser beam

    DOEpatents

    Johnson, Gary W.

    1996-01-01

    Swept-wavelength lasers are often used in absorption spectroscopy applications. In experiments where high accuracy is required, it is desirable to continuously monitor and control the range of wavelengths scanned (the scan width). A system has been demonstrated whereby the scan width of a swept ring-dye laser, or semiconductor diode laser, can be measured and controlled in real-time with a resolution better than 0.1%. Scan linearity, or conformity to a nonlinear scan waveform, can be measured and controlled. The system of the invention consists of a Fabry-Perot interferometer, three CAMAC interface modules, and a microcomputer running a simple analysis and proportional-integral control algorithm. With additional modules, multiple lasers can be simultaneously controlled. The invention also includes an embodiment implemented on an ordinary PC with a multifunction plug-in board.

  17. Apparatus for controlling the scan width of a scanning laser beam

    DOEpatents

    Johnson, G.W.

    1996-10-22

    Swept-wavelength lasers are often used in absorption spectroscopy applications. In experiments where high accuracy is required, it is desirable to continuously monitor and control the range of wavelengths scanned (the scan width). A system has been demonstrated whereby the scan width of a swept ring-dye laser, or semiconductor diode laser, can be measured and controlled in real-time with a resolution better than 0.1%. Scan linearity, or conformity to a nonlinear scan waveform, can be measured and controlled. The system of the invention consists of a Fabry-Perot interferometer, three CAMAC interface modules, and a microcomputer running a simple analysis and proportional-integral control algorithm. With additional modules, multiple lasers can be simultaneously controlled. The invention also includes an embodiment implemented on an ordinary PC with a multifunction plug-in board. 8 figs.
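
    The proportional-integral scan-width control loop described in these two records can be sketched as follows; the laser response model and the gains are invented for illustration.

```python
def pi_scan_width_control(measure_width, setpoint, kp=0.05, ki=0.02,
                          n_sweeps=40):
    """Trim the scan-drive amplitude until the measured scan width (e.g. from
    Fabry-Perot fringe counts) settles at the setpoint."""
    drive, integral = 1.0, 0.0
    for _ in range(n_sweeps):
        error = setpoint - measure_width(drive)
        integral += error
        drive += kp * error + ki * integral
    return drive

# Toy laser: scan width in GHz responds slightly nonlinearly to the drive.
laser = lambda drive: 9.5 * drive ** 1.1
drive = pi_scan_width_control(laser, setpoint=10.0)
print(f"drive = {drive:.3f}, width = {laser(drive):.3f} GHz")
```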

  18. Assay development and case history of a 32K-biased library high-content MK2-EGFP translocation screen to identify p38 mitogen-activated protein kinase inhibitors on the ArrayScan 3.1 imaging platform.

    PubMed

    Trask, Oscar J; Baker, Audrey; Williams, Rhonda Gates; Nickischer, Debra; Kandasamy, Ramani; Laethem, Carmen; Johnston, Patricia A; Johnston, Paul A

    2006-01-01

    This chapter describes the conversion and assay development of a 96-well MK2-EGFP translocation assay into a higher density 384-well format high-content assay to be screened on the ArrayScan 3.1 imaging platform. The assay takes advantage of the well-substantiated hypothesis that mitogen-activated protein kinase-activated protein kinase 2 (MK2) is a substrate of the p38 MAP kinase and that p38-induced phosphorylation of MK2 induces a nucleus-to-cytoplasm translocation. This chapter also presents a case history of the performance of the MK2-EGFP translocation assay, run as a "high-content" screen of a 32K kinase-biased library to identify p38 inhibitors. The assay performed very well and a number of putative p38 inhibitor hits were identified. Through the use of multiparameter data provided by the nuclear translocation algorithm and by checking images, a number of compounds were identified that were potential artifacts due to interference with the imaging format. These included fluorescent compounds, or compounds that dramatically reduced cell numbers due to cytotoxicity or by disrupting cell adherence. A total of 145 compounds produced IC(50) values <50.0 μM in the MK2-EGFP translocation assay, and a cross-target query of the Lilly-RTP HTS database confirmed their inhibitory activity against in vitro kinase targets, including p38α. Compounds were confirmed structurally by LCMS analysis and profiled in cell-based imaging assays for MAPK signaling pathway selectivity. Three of the hit scaffolds identified in the MK2-EGFP translocation HCS run on the ArrayScan were selected for a p38α inhibitor hit-to-lead structure-activity relationship (SAR) chemistry effort.

  19. Change detection of medical images using dictionary learning techniques and PCA

    NASA Astrophysics Data System (ADS)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-03-01

    Automatic change detection methods for identifying the changes between serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those for brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from the baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of the L2 norm over the L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes in MR images and compare our results with those provided in the recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms previous methods. It detects clinical changes while ignoring changes due to the patient's position and other acquisition artifacts.

  20. ScanImage: flexible software for operating laser scanning microscopes.

    PubMed

    Pologruto, Thomas A; Sabatini, Bernardo L; Svoboda, Karel

    2003-05-17

    Laser scanning microscopy is a powerful tool for analyzing the structure and function of biological specimens. Although numerous commercial laser scanning microscopes exist, some of the more interesting and challenging applications demand custom design. A major impediment to custom design is the difficulty of building custom data acquisition hardware and writing the complex software required to run the laser scanning microscope. We describe a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware. Data acquisition and control of laser scanning are achieved through standard data acquisition boards. The entire burden of signal integration and image processing is placed on the CPU of the computer. We quantitate the effectiveness of our data acquisition and signal conditioning algorithm under a variety of conditions. We implement our approach in an open source software package (ScanImage) and describe its functionality. We present ScanImage, software to run a flexible laser scanning microscope that allows easy custom design.

  1. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for the estimation of fractal dimension from gray scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for the estimation of fractal dimension. For that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of six various methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy on the same samples. The analysis of results shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions could be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved to be less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning, and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy.
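
    Plain box counting, the simplest of the estimators compared above, is easy to sketch: cover a binarized image with boxes of shrinking size s, count the occupied boxes N(s), and estimate the dimension as the slope of log N(s) versus log(1/s). (The paper's differential box counting extends this idea to gray-scale intensities.)

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Slope of log N(s) vs log(1/s) over the given box sizes."""
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied s-by-s boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should give a dimension close to 2.
img = np.zeros((256, 256), dtype=bool)
img[32:224, 32:224] = True
print(f"estimated dimension: {box_counting_dimension(img):.2f}")
```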

  2. Learning-based scan plane identification from fetal head ultrasound images

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoming; Annangi, Pavan; Gupta, Mithun; Yu, Bing; Padfield, Dirk; Banerjee, Jyotirmoy; Krishnan, Kajoli

    2012-03-01

    Acquisition of a clinically acceptable scan plane is a prerequisite for ultrasonic measurement of anatomical features from B-mode images. In obstetric ultrasound, measurement of gestational age predictors, such as biparietal diameter and head circumference, is performed at the level of the thalami and cavum septi pellucidi. In an accurate scan plane, the head can be modeled as an ellipse, the thalami look like a butterfly, the cavum appears like an empty box, and the falx is a straight line along the major axis of a symmetric ellipse inclined either parallel to or at small angles to the probe surface. Arriving at the correct probe placement on the mother's belly to obtain an accurate scan plane is a task of considerable challenge, especially for a new user of ultrasound. In this work, we present a novel automated learning-based algorithm to identify an acceptable fetal head scan plane. We divide the problem into cranium detection and a template matching to capture the composite "butterfly" structure present inside the head, which mimics the visual cues used by an expert. The algorithm uses state-of-the-art Active Appearance Models techniques from the image processing and computer vision literature and ties them to the presence or absence of the inclusions within the head to automatically compute a score representing the goodness of a scan plane. This automated technique can potentially be used to train and aid new users of ultrasound.

  3. Automated search method for AFM and profilers

    NASA Astrophysics Data System (ADS)

    Ray, Michael; Martin, Yves C.

    2001-08-01

    New automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires the user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole measurement. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high-speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.

  4. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    PubMed

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied to consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of the treatment algorithms of the participating centers, which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for the successful collection of decision trees and summarizes important aspects at each point of the analysis.

  5. Hierarchical planning for a surface mounting machine placement.

    PubMed

    Zeng, You-jiao; Ma, Deng-ze; Jin, Ye; Yan, Jun-qi

    2004-11-01

    For a surface mounting machine (SMM) in a printed circuit board (PCB) assembly line, four problems arise: CAD data conversion, nozzle selection, feeder assignment, and placement sequence determination. A hierarchical planning approach to these problems that maximizes the throughput rate of an SMM is presented here. To minimize set-up time, a CAD data conversion system was first applied that could automatically generate the data for machine placement from CAD design data files. Then an effective nozzle selection approach was implemented to minimize the time spent changing nozzles. Next, to minimize picking time, an algorithm for feeder assignment was used to allow multiple components to be picked simultaneously as often as possible. Finally, to shorten pick-and-place time, a heuristic algorithm determined the optimal component placement sequence for the chosen feeder positions. Experiments were conducted on a four-head SMM, and the experimental results were used to analyse the assembly line performance.
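
    The placement-sequencing step is, at heart, a travelling-salesman-style ordering problem. Since the abstract does not spell out the heuristic used, the sketch below shows a generic nearest-neighbour ordering as a minimal stand-in; the pad coordinates are illustrative.

        import math

        def placement_sequence(placements, start=(0.0, 0.0)):
            """Greedily order PCB placements by nearest-neighbour distance."""
            remaining = list(placements)
            order, current = [], start
            while remaining:
                nxt = min(remaining, key=lambda p: math.dist(current, p))
                order.append(nxt)
                remaining.remove(nxt)
                current = nxt
            return order

        pads = [(10, 5), (2, 3), (8, 9), (1, 1)]
        print(placement_sequence(pads))   # -> [(1, 1), (2, 3), (10, 5), (8, 9)]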

  6. AMSR2 Soil Moisture Product Validation

    NASA Technical Reports Server (NTRS)

    Bindlish, R.; Jackson, T.; Cosh, M.; Koike, T.; Fuiji, X.; de Jeu, R.; Chan, S.; Asanuma, J.; Berg, A.; Bosch, D.

    2017-01-01

    The Advanced Microwave Scanning Radiometer 2 (AMSR2) is part of the Global Change Observation Mission-Water (GCOM-W) mission. AMSR2 fills the void left by the loss of the Advanced Microwave Scanning Radiometer Earth Observing System (AMSR-E) after almost 10 years. Both missions provide brightness temperature observations that are used to retrieve soil moisture. Merging AMSR-E and AMSR2 will help build a consistent long-term dataset. Before tackling the integration of AMSR-E and AMSR2, it is necessary to conduct a thorough validation and assessment of the AMSR2 soil moisture products. This study focuses on validation of the AMSR2 soil moisture products by comparison with in situ reference data from a set of core validation sites. Three products that rely on different algorithms were evaluated: the JAXA Soil Moisture Algorithm (JAXA), the Land Parameter Retrieval Model (LPRM), and the Single Channel Algorithm (SCA). Results indicate that, overall, the SCA has the best performance based upon the metrics considered.

  7. Comparison of distribution of lung aeration measured with EIT and CT in spontaneously breathing, awake patients.

    PubMed

    Radke, Oliver C; Schneider, Thomas; Braune, Anja; Pirracchio, Romain; Fischer, Felix; Koch, Thea

    2016-09-28

    Both Electrical Impedance Tomography (EIT) and Computed Tomography (CT) allow the estimation of the lung area. We compared two algorithms for the detection of the lung area per quadrant from the EIT images with the lung areas derived from the CT images. 39 outpatients who were scheduled for an elective CT scan of the thorax were included in the study. For each patient we recorded EIT images immediately before the CT scan. The lung area per quadrant was estimated from both CT and EIT data, using two different algorithms for the EIT data. Data showed considerable variation during spontaneous breathing of the patients. Overall correlation between EIT and CT was poor (0.58-0.77); the correlation between the two EIT algorithms was better (0.90-0.92). Bland-Altman analysis revealed an absence of bias, but wide limits of agreement. Lung area estimation from CT and EIT differs significantly, most probably because of the fundamental difference in image generation.
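
    For readers less familiar with the statistics used here, a minimal Bland-Altman computation of bias and 95% limits of agreement for paired area estimates might look like the sketch below; the sample values are invented for illustration.

        import numpy as np

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between paired measurements."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        eit = [102.0, 95.5, 110.2, 99.8]   # hypothetical areas, cm^2
        ct = [100.0, 98.0, 108.0, 104.5]
        bias, loa = bland_altman(eit, ct)
        print(f"bias={bias:.2f}, limits of agreement=({loa[0]:.2f}, {loa[1]:.2f})")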

  8. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE

    PubMed Central

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C.

    2010-01-01

    This paper presents a novel classification via aggregated regression algorithm – dubbed CAVIAR – and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature – the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain – and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS. PMID:21151847

  9. CAVIAR: CLASSIFICATION VIA AGGREGATED REGRESSION AND ITS APPLICATION IN CLASSIFYING OASIS BRAIN DATABASE.

    PubMed

    Chen, Ting; Rangarajan, Anand; Vemuri, Baba C

    2010-04-14

    This paper presents a novel classification via aggregated regression algorithm - dubbed CAVIAR - and its application to the OASIS MRI brain image database. The CAVIAR algorithm simultaneously combines a set of weak learners based on the assumption that the weight combination for the final strong hypothesis in CAVIAR depends on both the weak learners and the training data. A regularization scheme using the nearest neighbor method is imposed in the testing stage to avoid overfitting. A closed form solution to the cost function is derived for this algorithm. We use a novel feature - the histogram of the deformation field between the MRI brain scan and the atlas which captures the structural changes in the scan with respect to the atlas brain - and this allows us to automatically discriminate between various classes within OASIS [1] using CAVIAR. We empirically show that CAVIAR significantly increases the performance of the weak classifiers by showcasing the performance of our technique on OASIS.

  10. Multi-Target Tracking via Mixed Integer Optimization

    DTIC Science & Technology

    2016-05-13

    solving these two problems separately, however few algorithms attempt to solve these simultaneously and even fewer utilize optimization. In this paper we...introduce a new mixed integer optimization (MIO) model which solves the data association and trajectory estimation problems simultaneously by minimizing...Kalman filter [5], which updates the trajectory estimates before the algorithm progresses forward to the next scan. This process repeats sequentially

  11. Automatic Mexico Gulf Oil Spill Detection from Radarsat-2 SAR Satellite Data Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Marghany, Maged

    2016-10-01

    In this work, a genetic algorithm is exploited for automatic detection of oil spills of small and large size. The procedure is applied to arrays of RADARSAT-2 SAR ScanSAR Narrow single-beam data obtained in the Gulf of Mexico. The study shows that the genetic algorithm automatically segmented the dark-spot patches related to small and large oil spill pixels. This conclusion is confirmed by the receiver operating characteristic (ROC) curve and documented ground data. The ROC curve indicates that the existence of oil slick footprints can be identified with an area of 90% between the ROC curve and the no-discrimination line, which is greater than that of other surrounding environmental features. Small oil spills represented 30% of the discriminated oil spill pixels in the ROC curve. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills of either small or large size, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill pattern detection and surveying in the Gulf of Mexico.
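
    The ROC evaluation step reduces to scoring per-pixel labels against a continuous discriminant. A minimal sketch with synthetic labels and scikit-learn's roc_auc_score (not the authors' tooling) follows.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        truth = rng.integers(0, 2, 1000)                  # 1 = oil-spill pixel
        scores = truth * 0.8 + rng.normal(0, 0.4, 1000)   # synthetic discriminant
        print(f"AUC = {roc_auc_score(truth, scores):.2f}")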

  12. Automated hierarchical time gain compensation for in-vivo ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Hemmsen, Martin C.; Martins, Bo; Brandt, Andreas H.; Hansen, Kristoffer L.; Nielsen, Michael B.; Jensen, Jørgen A.

    2015-03-01

    Time gain compensation (TGC) is essential to ensure the optimal image quality of clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC compensation becomes challenging. This paper presents an automated hierarchical TGC (AHTGC) algorithm that accurately adapts to the large attenuation variation between different types of tissues and structures. The algorithm relies on estimates of tissue attenuation, scattering strength, and noise level to gain a more quantitative understanding of the underlying tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences, each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists in terms of image quality. A Wilcoxon signed-rank test was used to evaluate whether the radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive (p-value: 2.34 × 10⁻¹³) and estimated to be 1.01 (95% CI: 0.85; 1.16), favoring the data processed with the proposed AHTGC algorithm.
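
    The reader study is a standard paired, nonparametric comparison. A minimal sketch using scipy's Wilcoxon signed-rank test on synthetic per-sequence VAS differences (stand-ins for the 44 sequences) follows.

        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(1)
        vas = rng.normal(1.0, 0.5, 44)   # synthetic per-sequence VAS differences
        stat, p = wilcoxon(vas)          # H0: differences symmetric about zero
        print(f"W={stat:.1f}, p={p:.2e}, mean VAS={vas.mean():.2f}")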

  13. [Application of rational ant colony optimization to improve the reproducibility degree of laser three-dimensional copy].

    PubMed

    Cui, Xiao-Yan; Huo, Zhong-Gang; Xin, Zhong-Hua; Tian, Xiao; Zhang, Xiao-Dong

    2013-07-01

    Three-dimensional (3D) copying of artificial ears and the printing of pistols are pushing laser three-dimensional copying technology into a new era. Laser three-dimensional scanning is a young field of laser application and plays an irreplaceable part in three-dimensional copying; its accuracy is the highest among all present copying techniques. Reproducibility degree denotes the geometric agreement of the copied object with the original and is the most important index of quality in laser three-dimensional copying. In the present paper, the error of laser three-dimensional copying was analyzed, with the conclusion that processing of the laser-scanned point cloud is the key step in reducing the error and increasing the reproducibility degree. The main innovation of this paper is as follows: on the basis of traditional ant colony optimization, the rational ant colony optimization algorithm proposed by the authors was applied to laser three-dimensional copying as a new algorithm and put into practice. Compared with the customary algorithm, rational ant colony optimization shows distinct advantages in the data processing of laser three-dimensional copying, reducing the error and increasing the reproducibility degree of the copy.

  14. Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms

    NASA Astrophysics Data System (ADS)

    Samanta, A.; Todd, L. A.

    A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focused on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time-series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time-series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among them. A comprehensive evaluation of algorithms for the environmental application of tomography requires, before field implementation, a battery of test concentration data that models reality and tests the limits of the algorithms.
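
    Of the four algorithms compared, ART1 is the simplest to state: a Kaczmarz-style sweep over the path-integral equations. The sketch below is a generic unweighted ART iteration on a toy system, not the authors' implementation; MLEM and MART differ in applying multiplicative rather than additive corrections.

        import numpy as np

        def art1(A, b, n_iters=50, relax=0.5):
            """Rows of A are beam-path weights, b the open-path measurements,
            x the reconstructed pixel concentrations."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iters):
                for i in range(A.shape[0]):
                    ai = A[i]
                    denom = ai @ ai
                    if denom > 0:
                        x += relax * (b[i] - ai @ x) / denom * ai
                np.clip(x, 0, None, out=x)   # concentrations are non-negative
            return x

        A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
        b = A @ np.array([2.0, 0.5, 1.0])    # synthetic "true" field
        print(art1(A, b).round(2))           # -> approximately [2. 0.5 1.]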

  15. Automated neurovascular tracing and analysis of the knife-edge scanning microscope Rat Nissl data set using a computing cluster.

    PubMed

    Sungjun Lim; Nowak, Michael R; Yoonsuck Choe

    2016-08-01

    We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of Rat Nissl-stained data. In this paper, we report the results of our method using Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with average precision, recall, and F2-score of 0.978, 0.892, and 0.902, respectively. The calculated anatomical statistics broadly conform to previously reported values. The results obtainable with our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.

  16. Modeling and minimizing interference from corneal birefringence in retinal birefringence scanning for foveal fixation detection

    PubMed Central

    Irsch, Kristina; Gramatikov, Boris; Wu, Yi-Kai; Guyton, David

    2011-01-01

    Utilizing the measured corneal birefringence from a data set of 150 eyes of 75 human subjects, an algorithm and related computer program, based on Müller-Stokes matrix calculus, were developed in MATLAB for assessing the influence of corneal birefringence on retinal birefringence scanning (RBS) and for converging upon an optical/mechanical design using wave plates (“wave-plate-enhanced RBS”) that allows foveal fixation detection essentially independently of corneal birefringence. The RBS computer model, and in particular the optimization algorithm, were verified with experimental human data using an available monocular RBS-based eye fixation monitor. Fixation detection using wave-plate-enhanced RBS is adaptable to less cooperative subjects, including young children at risk for developing amblyopia. PMID:21750772

  17. Three-dimensional monochromatic x-ray computed tomography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao

    1998-08-01

    We describe a technique of 3D computed tomography (3D CT) using monochromatic x rays generated by synchrotron radiation, which performs a direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development, we propose a practical scanning orbit of the x-ray source to obtain complete 3D information on an object, and its corresponding 3D image reconstruction algorithm. The validity and usefulness of the proposed scanning orbit and reconstruction algorithm were confirmed by computer simulation studies. Based on these investigations, we have developed a prototype 3D monochromatic x-ray CT using synchrotron radiation, which provides exact 3D reconstruction and material-selective imaging by using the K-edge energy subtraction technique.

  18. Automation of film densitometry for application in personal monitoring.

    PubMed

    Taheri, M; Movafeghi, A; Rastkhah, N

    2011-03-01

    In this research work, a semi-automatic densitometry system has been developed for large-scale monitoring services that use film badge dosemeters. The system consists of a charge-coupled device (CCD)-based scanner that can scan optical densities (ODs) up to 4.2, a computer vision algorithm to improve the quality of the digitised films, and an analyser program to calculate the necessary information, e.g. the mean OD of a region of interest and radiation doses. For calibration of the system, two reference films were used. The Microtek scanner International Color Consortium (ICC) profiler is applied to determine the colour attributes of the scanner accurately, and a reference density step tablet from the Bundesanstalt für Materialforschung und -prüfung (BAM) is used for calibrating the automatic conversion of gray-level values to OD values in the range of 0.2-4.0 OD. The system contributes to more objective and reliable results: a set of 20 films can be digitised at once and their relative doses calculated in less than about 4 min, avoiding the disadvantages of manual processing and enhancing the accuracy of dosimetry.

  19. Sparse-view proton computed tomography using modulated proton beams.

    PubMed

    Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong

    2015-02-01

    Proton imaging that uses a modulated proton beam and an intensity detector allows a relatively fast image acquisition compared to the imaging approach based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The model of a geometric straight ray assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density; the bone, for example, was reconstructed within 1% error. EM-based algorithms produced increased image noise and RMSE as the iterations reached about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although the images still need to be improved for practical application to treatment planning, proton CT imaging by use of modulated beams in sparse-view sampling has demonstrated its feasibility.
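
    The film-to-water-equivalent-thickness conversion described above is a one-dimensional empirical mapping. A minimal sketch with hypothetical calibration pairs and linear interpolation in numpy follows; a real conversion curve must be measured for the specific film and beam.

        import numpy as np

        # Hypothetical calibration pairs: net optical density -> WET in mm.
        cal_od = np.array([0.0, 0.2, 0.5, 1.0, 1.5])
        cal_wet = np.array([0.0, 20.0, 55.0, 120.0, 190.0])

        def od_to_wet(od):
            """Map digitized film response to water-equivalent thickness."""
            return np.interp(od, cal_od, cal_wet)

        print(od_to_wet(np.array([0.1, 0.75, 1.2])))   # -> [ 10.   87.5 148. ]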

  20. [Diagnosis of septic loosening of hip prosthesis with LeukoScan. SPECT scan with 99mTc-labeled monoclonal antibodies].

    PubMed

    Kaisidis, A; Megas, P; Apostolopoulos, D; Spiridonidis, T; Koumoundourou, D; Zouboulis, P; Lambiris, E; Vassilakos, P

    2005-05-01

    Diagnosis of septic loosening of hip endoprosthesis with antigranulocyte scintigraphy (AGS) was analysed. Twenty-one hip prostheses were studied using laboratory tests and, in cases of elevated values, three-phase bone scan (BS) and AGS. Elective SPECT/CT scans were performed. Histologic and microbiologic exams verified the diagnosis. The AGS analysis revealed a sensitivity, specificity and accuracy of 1, while the positive and negative predictive values were also 1. BS showed a sensitivity of 1 and a specificity of 0.33. In three cases, SPECT/CT scans corroborated the AGS interpretation. This diagnostic algorithm proved effective in the detection of septic loosening of hip prostheses. AGS can be avoided without risk of infection being overlooked.

  1. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber-based dual-pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented on a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
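
    As a sketch of the optimization loop only, the toy genetic algorithm below evolves the three pulse parameters under truncation selection and Gaussian mutation. The fitness function is a placeholder, since the real objective requires a full supercontinuum (generalized nonlinear Schrödinger) simulation, and the parameter bounds are invented.

        import numpy as np

        rng = np.random.default_rng(42)
        LO = np.array([800.0, 50.0, 1.0])      # nm, fs, kW (illustrative bounds)
        HI = np.array([1100.0, 500.0, 50.0])

        def fitness(p):
            """Toy stand-in for the match to two target spectral peaks."""
            target = np.array([950.0, 200.0, 20.0])
            return -np.sum(((p - target) / (HI - LO)) ** 2)

        pop = rng.uniform(LO, HI, size=(30, 3))
        for _ in range(100):
            f = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(f)[-10:]]                   # truncation selection
            kids = parents[rng.integers(0, 10, 30)]              # clone parents
            kids += rng.normal(0, 0.02, kids.shape) * (HI - LO)  # Gaussian mutation
            pop = np.clip(kids, LO, HI)
        print(pop[np.argmax([fitness(p) for p in pop])].round(1))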

  2. A new fast and fully automated software based algorithm for extracting respiratory signal from raw PET data and its comparison to other methods.

    PubMed

    Kesner, Adam Leon; Kuntner, Claudia

    2010-10-01

    Respiratory gating in PET is an approach used to minimize the negative effects of respiratory motion on spatial resolution. It is based on an initial determination of a patient's respiratory movements during a scan, typically using hardware-based systems. In recent years, several fully automated data-based algorithms have been presented for extracting a respiratory signal directly from PET data, providing a very practical strategy for implementing gating in the clinic. In this work, a new method is presented for extracting a respiratory signal from raw PET sinogram data and compared to previously presented automated techniques. The acquisition of the respiratory signal from PET data in the newly proposed method is based on rebinning the sinogram data into smaller data structures and then analyzing the time-activity behavior in the elements of these structures. From this analysis, a 1D respiratory trace is produced, analogous to a hardware-derived respiratory trace. To assess the accuracy of this fully automated method, respiratory signal was extracted from a collection of 22 clinical FDG-PET scans using this method and compared to signal derived from several other software-based methods as well as signal derived from a hardware system. The method presented required approximately 9 min of processing time for each 10 min scan (using a single 2.67 GHz processor), which in theory can be accomplished while the scan is being acquired, therefore allowing real-time respiratory signal acquisition. Using the mean correlation between the software-based and hardware-based respiratory traces, the optimal parameters were determined for the presented algorithm. The mean/median/range of correlations for the set of scans when using the optimal parameters was found to be 0.58/0.68/0.07-0.86. The speed of this method was within the range of real time, while the accuracy surpassed the most accurate of the previously presented algorithms. PET data inherently contain information about patient motion; information that is not currently being utilized. We have shown that a respiratory signal can be extracted from raw PET data in potentially real time and in a fully automated manner. This signal correlates well with hardware-based signal for a large percentage of scans, and avoids the efforts and complications associated with hardware. The proposed method to extract a respiratory signal can be implemented on existing scanners and, if properly integrated, can be applied without changes to routine clinical procedures.
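
    The general idea of data-driven gating (rebin the raw data into coarse elements, then track their time activity) can be caricatured in a few lines. The sketch below is not the authors' method: it simply weights each element's mean-subtracted time course by its temporal variance, sums them into one trace, and checks the trace against a known synthetic breathing signal.

        import numpy as np

        def respiratory_trace(frames):
            """frames: (T, H, W) array of short-time rebinned count images."""
            T = frames.shape[0]
            ts = frames.reshape(T, -1).astype(float)   # time x element
            ts -= ts.mean(axis=0)                      # remove each element's DC
            w = ts.var(axis=0)                         # motion-sensitive weights
            trace = ts @ (w / (w.sum() + 1e-12))
            return trace / (np.abs(trace).max() + 1e-12)

        rng = np.random.default_rng(3)
        t = np.arange(120) * 0.5                       # 0.5 s bins, 1 min scan
        breathing = np.sin(2 * np.pi * t / 4.0)        # ~15 breaths/min
        frames = rng.poisson(50, (120, 8, 8)).astype(float)
        frames[:, 2:5, 2:5] += 20 * breathing[:, None, None]
        r = np.corrcoef(respiratory_trace(frames), breathing)[0, 1]
        print(f"correlation with true signal: {r:.2f}")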

  3. Conversion of NO with a catalytic packed-bed dielectric barrier discharge reactor

    NASA Astrophysics Data System (ADS)

    Xu, CAO; Weixuan, ZHAO; Renxi, ZHANG; Huiqi, HOU; Shanping, CHEN; Ruina, ZHANG

    2017-11-01

    This paper discusses the conversion of nitric oxide (NO) with a low-temperature plasma induced by a catalytic packed-bed dielectric barrier discharge (DBD) reactor. Alumina (Al2O3), glass (SiO2) and zirconium oxide (ZrO2), three different spherical packing materials of the same size, were each placed in the DBD reactor. The NO conversion under varying input voltage and specific energy density, and the effects of catalysts (titanium dioxide (TiO2) and manganese oxide (MnOx) coated on Al2O3) on NO conversion, were investigated. The experimental results showed that NO conversion was greatly enhanced in the presence of packing materials in the reactor, and the catalytic packed bed of MnOx/Al2O3 performed better than that of TiO2/Al2O3. The surface and crystal structures of the materials and catalysts were characterized through scanning electron microscopy analysis. The final products were clearly observed with a Fourier transform infrared spectrometer, providing a better understanding of NO conversion.

  4. Scan-based volume animation driven by locally adaptive articulated registrations.

    PubMed

    Rhee, Taehyun; Lewis, J P; Neumann, Ulrich; Nayak, Krishna S

    2011-03-01

    This paper describes a complete system to create anatomically accurate example-based volume deformation and animation of articulated body regions, starting from multiple in vivo volume scans of a specific individual. In order to solve the correspondence problem across volume scans, a template volume is registered to each sample. The wide range of pose variations is first approximated by volume blend deformation (VBD), providing proper initialization of the articulated subject in different poses. A novel registration method is presented to efficiently reduce the computation cost while avoiding strong local minima inherent in complex articulated body volume registration. The algorithm highly constrains the degrees of freedom and search space involved in the nonlinear optimization, using hierarchical volume structures and locally constrained deformation based on the biharmonic clamped spline. Our registration step establishes a correspondence across scans, allowing a data-driven deformation approach in the volume domain. The results provide an occlusion-free person-specific 3D human body model, asymptotically accurate inner tissue deformations, and realistic volume animation of articulated movements driven by standard joint control estimated from the actual skeleton. Our approach also addresses the practical issues arising in using scans from living subjects. The robustness of our algorithms is tested by their applications on the hand, probably the most complex articulated region in the body, and the knee, a frequent subject area for medical imaging due to injuries.

  5. A versatile chemical conversion synthesis of Cu2S nanotubes and the photovoltaic activities for dye-sensitized solar cell

    PubMed Central

    2014-01-01

    A versatile, low-temperature, and low-cost chemical conversion synthesis has been developed to prepare copper sulfide (Cu2S) nanotubes. The successful chemical conversion from ZnS nanotubes to Cu2S ones benefits from the large difference in solubility between ZnS and Cu2S. The morphology, structure, and composition of the yielded products have been examined by field-emission scanning electron microscopy, transmission electron microscopy, and X-ray diffraction measurements. We have further successfully employed the obtained Cu2S nanotubes as counter electrodes in dye-sensitized solar cells. The light-to-electricity conversion results show that the Cu2S nanostructures exhibit high photovoltaic conversion efficiency due to the increased surface area and the good electrocatalytical activity of Cu2S. The present chemical route provides a simple way to synthesize Cu2S nanotubes with a high surface area for nanodevice applications. PMID:25246878

  6. Mode conversion between Alfven wave eigenmodes in axially inhomogeneous two-ion-species plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, D.R.; Hershkowitz, N.; Tataronis, J.A.

    The uniform cylindrical plasma model of Litwin and Hershkowitz (Phys. Fluids 30, 1323 (1987)) is shown to predict mode conversion between the lowest radial order m = +1 fast magnetosonic surface and slow ion-cyclotron global eigenmodes of the Alfven wave at the light-ion-species Alfven resonance of a cold two-ion plasma. A hydrogen (h)-deuterium (d) plasma is examined in experiments. The fast mode is efficiently excited by a rotating field antenna array at ω ∼ Ω_h in the central cell of the Phaedrus-B tandem mirror (Phys. Rev. Lett. 51, 1955 (1983)). Radially scanned magnetic probes observe the propagating eigenmode wave fields within a shallow central cell magnetic gradient in which the conversion zone is axially localized according to n_d/n_h. A low radial-order slow ion-cyclotron mode, observed in the vicinity of the conversion zone, gives evidence for the predicted mode conversion.

  7. A stochastic framework for spot-scanning particle therapy.

    PubMed

    Robini, Marc; Yuemin Zhu; Wanyu Liu; Magnin, Isabelle

    2016-08-01

    In spot-scanning particle therapy, inverse treatment planning is usually limited to finding the optimal beam fluences given the beam trajectories and energies. We address the much more challenging problem of jointly optimizing the beam fluences, trajectories and energies. For this purpose, we design a simulated annealing algorithm with an exploration mechanism that balances the conflicting demands of a small mixing time at high temperatures and a reasonable acceptance rate at low temperatures. Numerical experiments substantiate the relevance of our approach and open new horizons to spot-scanning particle therapy.
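
    A minimal simulated-annealing skeleton capturing the trade-off described above, with a proposal scale that shrinks with temperature so that moves are large while hot and small while cold, is sketched below on a toy one-dimensional objective.

        import math
        import random

        def anneal(x0, objective, t0=1.0, t_end=1e-3, steps=20000):
            x, fx, t = x0, objective(x0), t0
            cooling = (t_end / t0) ** (1.0 / steps)   # geometric schedule
            for _ in range(steps):
                cand = x + random.gauss(0.0, 1.0) * math.sqrt(t)  # scaled proposal
                fc = objective(cand)
                # Metropolis acceptance: always take improvements, sometimes worse
                if fc < fx or random.random() < math.exp((fx - fc) / t):
                    x, fx = cand, fc
                t *= cooling
            return x, fx

        print(anneal(5.0, lambda v: (v - 2.0) ** 2 + math.sin(5 * v)))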

  8. Pattern Discovery and Change Detection of Online Music Query Streams

    NASA Astrophysics Data System (ADS)

    Li, Hua-Fu

    In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of the proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support threshold in only a single pass based on the concept of bit-sequence representation, taking advantage of the bitwise "left-shift" and "and" operations of the representation. Experiments show that the proposed algorithm only scans the music query stream once, runs significantly faster, and consumes less memory than existing algorithms such as SWFI-stream and Moment.
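
    The bit-sequence idea can be made concrete: each item keeps one presence bit per transaction in the window, sliding the window is a shift, and the support of an itemset is the popcount of the AND of its members' sequences. The sketch below shows only this representation, not the temporal-pattern mining of FTP-stream itself.

        W = 8                      # window size in transactions
        MASK = (1 << W) - 1

        def slide(bits, present):
            """Shift in the newest transaction's presence bit."""
            return ((bits << 1) | int(present)) & MASK

        def support(*bit_seqs):
            """Count window transactions containing every given item."""
            acc = MASK
            for b in bit_seqs:
                acc &= b
            return bin(acc).count("1")

        a = b = 0
        for t in [{"a"}, {"a", "b"}, {"b"}, {"a", "b"}, {"a"}]:
            a, b = slide(a, "a" in t), slide(b, "b" in t)
        print(support(a), support(b), support(a, b))   # -> 4 3 2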

  9. Image recognition of clipped stigma traces in rice seeds

    NASA Astrophysics Data System (ADS)

    Cheng, F.; Ying, YB

    2005-11-01

    The objective of this research is to develop an algorithm to recognize clipped stigma traces in rice seeds using image processing. First, the micro-configuration of clipped stigma traces was observed with a scanning electron microscope. Then images of rice seeds were acquired with a color machine vision system. A digital image-processing algorithm based on morphological operations and the Hough transform was developed to inspect the occurrence of clipped stigma traces. Five varieties (Jinyou402, Shanyou10, Zhongyou207, Jiayou and you3207) were evaluated. The algorithm was implemented on all image sets using a Matlab 6.5 procedure. The results showed that the algorithm achieved an average accuracy of 96% and proved insensitive to the different rice seed varieties.
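
    A skeletal version of such a pipeline in OpenCV, with Otsu binarization, a morphological clean-up, and a probabilistic Hough transform probing for short line segments, is sketched below; the image is synthetic and the thresholds and kernel size are illustrative, not the paper's values.

        import cv2
        import numpy as np

        # Synthetic stand-in for a seed image: dark background with a short
        # bright streak playing the role of a clipped stigma trace.
        img = np.zeros((64, 64), np.uint8)
        cv2.line(img, (20, 30), (35, 38), 255, 1)

        _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        clean = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)  # keep thin streak
        lines = cv2.HoughLinesP(clean, 1, np.pi / 180, threshold=10,
                                minLineLength=8, maxLineGap=3)
        print("trace detected" if lines is not None else "no trace found")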

  10. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  11. Lung texture in serial thoracic CT scans: Assessment of change introduced by image registration1

    PubMed Central

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Labby, Zacariah E.; Pelizzari, Charles A.; Straus, Christopher; Sensakovic, William F.; Ludwig, Michelle; Armato, Samuel G.

    2012-01-01

    Purpose: The aim of this study was to quantify the effect of four image registration methods on lung texture features extracted from serial computed tomography (CT) scans obtained from healthy human subjects. Methods: Two chest CT scans acquired at different time points were collected retrospectively for each of 27 patients. Following automated lung segmentation, each follow-up CT scan was registered to the baseline scan using four algorithms: (1) rigid, (2) affine, (3) B-splines deformable, and (4) demons deformable. The registration accuracy for each scan pair was evaluated by measuring the Euclidean distance between 150 identified landmarks. On average, 1432 spatially matched 32 × 32-pixel region-of-interest (ROI) pairs were automatically extracted from each scan pair. First-order, fractal, Fourier, Laws’ filter, and gray-level co-occurrence matrix texture features were calculated in each ROI, for a total of 140 features. Agreement between baseline and follow-up scan ROI feature values was assessed by Bland–Altman analysis for each feature; the range spanned by the 95% limits of agreement of feature value differences was calculated and normalized by the average feature value to obtain the normalized range of agreement (nRoA). Features with small nRoA were considered “registration-stable.” The normalized bias for each feature was calculated from the feature value differences between baseline and follow-up scans averaged across all ROIs in every patient. Because patients had “normal” chest CT scans, minimal change in texture feature values between scan pairs was anticipated, with the expectation of small bias and narrow limits of agreement. Results: Registration with demons reduced the Euclidean distance between landmarks such that only 9% of landmarks were separated by ≥1 mm, compared with rigid (98%), affine (95%), and B-splines (90%). Ninety-nine of the 140 (71%) features analyzed yielded nRoA > 50% for all registration methods, indicating that the majority of feature values were perturbed following registration. Nineteen of the features (14%) had nRoA < 15% following demons registration, indicating relative feature value stability. Student's t-tests showed that the nRoA of these 19 features was significantly larger when rigid, affine, or B-splines registration methods were used compared with demons registration. Demons registration yielded greater normalized bias in feature value change than B-splines registration, though this difference was not significant (p = 0.15). Conclusions: Demons registration provided higher spatial accuracy between matched anatomic landmarks in serial CT scans than rigid, affine, or B-splines algorithms. Texture feature changes calculated in healthy lung tissue from serial CT scans were smaller following demons registration compared with all other algorithms. Though registration altered the values of the majority of texture features, 19 features remained relatively stable after demons registration, indicating their potential for detecting pathologic change in serial CT scans. Combined use of accurate deformable registration using demons and texture analysis may allow for quantitative evaluation of local changes in lung tissue due to disease progression or treatment response. PMID:22894392
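
    As an illustration of one feature family listed above, the sketch below computes a few gray-level co-occurrence matrix (GLCM) statistics on a synthetic 32 × 32 ROI with scikit-image; the remaining first-order, fractal, Fourier, and Laws' features are not reproduced.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(7)
        roi = rng.integers(0, 64, (32, 32), dtype=np.uint8)  # stand-in lung ROI

        glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                            levels=64, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            print(prop, float(graycoprops(glcm, prop).mean()))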

  12. AdaBoost-based algorithm for network intrusion detection.

    PubMed

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
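
    The core construction, boosting over decision stumps, is easy to reproduce with scikit-learn, albeit on synthetic data rather than the intrusion benchmark used in the paper; the "estimator=" keyword below assumes scikit-learn 1.2 or later.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        # Decision stumps (depth-1 trees) as the weak classifiers.
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                                 n_estimators=100, random_state=0)
        clf.fit(Xtr, ytr)
        print(f"test accuracy: {clf.score(Xte, yte):.3f}")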

  13. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
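
    As a quick illustration of one named pattern, an inclusive prefix scan (running sum) is specified by the sequential loop below; a parallel implementation computes the same result tree-wise in O(log n) steps.

        def inclusive_scan(xs):
            """Sequential specification of an inclusive prefix-sum scan."""
            out, acc = [], 0
            for x in xs:
                acc += x
                out.append(acc)
            return out

        print(inclusive_scan([3, 1, 4, 1, 5]))   # -> [3, 4, 8, 9, 14]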

  14. The Role of miRNAs in the Progression of Prostate Cancer from Androgen-Dependent to Androgen-Independent Stages

    DTIC Science & Technology

    2012-09-01

    regulated by miR-99a/let7c/125b-2 cluster. Using bioinformatic prediction algorithm TargetScan, we identified 7 genes that are commonly targeted by miR-99a...HPeak, a Hidden Markov Model (HMM)-based peak identifying algorithm (http://www.sph.umich.edu/csg/qin/HPeak/). Seven AR binding sites were reported by...and ARBS2 by ALGGEN-PROMO, a matrix algorithm for predicting transcription factor binding sites based on TRANSFAC (http://alggen.lsi.upc.es/cgi-bin

  15. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  16. Automated choroidal neovascularization detection algorithm for optical coherence tomography angiography.

    PubMed

    Liu, Li; Gao, Simon S; Bailey, Steven T; Huang, David; Li, Dengwang; Jia, Yali

    2015-09-01

    Optical coherence tomography angiography has recently been used to visualize choroidal neovascularization (CNV) in participants with age-related macular degeneration. Identification and quantification of CNV area is important clinically for disease assessment. An automated algorithm for CNV area detection is presented in this article. It relies on denoising and a saliency detection model to overcome issues such as projection artifacts and the heterogeneity of CNV. Qualitative and quantitative evaluations were performed on scans of 7 participants. Results from the algorithm agreed well with manual delineation of CNV area.

  17. Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications

    DTIC Science & Technology

    2006-03-01

    know the probability of that from Lemma 2. Using the union bound, we know that for any query q, the probability that i-am-feeling-lucky search algorithm...and each point in a d-dimensional space, a naive k-NN search needs to do a linear scan of T for every single query q, and thus the computational time...algorithm based on partition trees with priority search , and give an expected query time O((1/)d log n). But the constant in the O((1/)d log n

  18. Improved motion compensation in 3D-CT using respiratory-correlated segment reconstruction: diagnostic and radiotherapy applications.

    PubMed

    Mori, S; Endo, M; Kohno, R; Minohara, S

    2006-09-01

    Conventional respiratory-gated CT and four-dimensional CT (4DCT) are disadvantaged by their low temporal resolution, which results in the inclusion of anatomic motion-induced artefacts. These represent a significant source of error both in radiotherapy treatment planning for the thorax and upper abdomen and in diagnostic procedures. In particular, temporal resolution and image quality are vitally important to accurate diagnosis and to minimizing the planning target volume margin due to respiratory motion. To improve both temporal resolution and signal-to-noise ratio (SNR), we developed a respiratory-correlated segment reconstruction method (RS) and adapted it to the Feldkamp-Davis-Kress algorithm (FDK) with a 256-multidetector-row CT (256MDCT). The 256MDCT scans approximately 100 mm in the craniocaudal direction with a 0.5 mm slice thickness in one rotation. Data acquisition for the RS-FDK relies on the assistance of a respiratory sensing system operating in cine scan mode (continuous axial scan with the table stationary). We evaluated the RS-FDK for volume accuracy and image noise in a phantom study with the 256MDCT and compared the results with those for a full scan (FS-FDK), which is usually employed in conventional 4DCT, and a half scan (HS-FDK). Results showed that the RS-FDK gave a more accurate volume than the others and had the same SNR as the FS-FDK. In a subsequent animal study, we demonstrated a practical sorting process for projection data which was unaffected by variations in respiratory period, and found that the RS-FDK gave the clearest visualization of the margins of the liver and pulmonary vessels among the three algorithms. In summary, the RS-FDK algorithm provides multi-phase images with higher temporal resolution and better SNR. This method should prove useful when combined with new radiotherapeutic and diagnostic techniques.

  19. Detecting active pelvic arterial haemorrhage on admission following serious pelvic fracture in multiple trauma patients.

    PubMed

    Brun, Julien; Guillot, Stéphanie; Bouzat, Pierre; Broux, Christophe; Thony, Frédéric; Genty, Céline; Heylbroeck, Christophe; Albaladejo, Pierre; Arvieux, Catherine; Tonetti, Jérôme; Payen, Jean-Francois

    2014-01-01

    The early diagnosis of pelvic arterial haemorrhage is challenging for initiating treatment by transcatheter arterial embolization (TAE) in multiple trauma patients. We use an institutional algorithm focusing on haemodynamic status on admission and on a whole-body CT scan in stabilized patients to screen patients requiring TAE. This study aimed to assess the effectiveness of this approach. This retrospective cohort study included 106 multiple trauma patients admitted to the emergency room with serious pelvic fracture [pelvic abbreviated injury scale (AIS) score of 3 or more]. Of the 106 patients, 27 (25%) underwent pelvic angiography leading to TAE for active arterial haemorrhage in 24. The TAE procedure was successful within 3 h of arrival in 18 patients. In accordance with the algorithm, 10 patients were directly admitted to the angiography unit (n=8) and/or operating room (n=2) for uncontrolled haemorrhagic shock on admission. Of the remaining 96 stabilized patients, 20 had contrast media extravasation on pelvic CT scan, which prompted pelvic angiography in 16 patients, leading to TAE in 14. One patient underwent pelvic angiography despite showing no contrast media extravasation on the pelvic CT scan. All 17 stabilized patients who underwent pelvic angiography presented a more severely compromised haemodynamic status on admission, and they required more blood products during their initial management than the 79 patients who did not undergo pelvic angiography. The incidence of unstable pelvic fractures was, however, comparable between the two groups. Overall, haemodynamic instability and contrast media extravasation on the CT scan identified 26 of the 27 patients who required subsequent pelvic angiography, leading to TAE in 24. An algorithm focusing on haemodynamic status on arrival and on a whole-body CT scan in stabilized patients may be effective at triaging multiple trauma patients with serious pelvic fractures.

  20. Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis

    PubMed Central

    Wang, Shijun; Yao, Jianhua; Liu, Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.

    2009-01-01

    Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice—once supine and once prone—to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in a polyp location between supine and prone scans by 67.6%, from 46.27±52.97 mm to 14.98±11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for the colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline. PMID:20095272

  1. Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Shijun; Yao Jianhua; Liu Jiamin

    Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice--once supine and once prone--to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in a polyp location between supine and prone scans by 67.6%, from 46.27±52.97 mm to 14.98±11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for the colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline.
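
    COW itself (segment-wise linear stretching under a correlation objective) is involved, but the dynamic-time-warping baseline it is compared against can be sketched compactly. The code below aligns two synthetic 1D centerline feature profiles, standing in for, e.g., curvature sampled along each scan's centerline.

        import numpy as np

        def dtw(a, b):
            """Classic O(nm) dynamic time warping distance between 1D profiles."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        t = np.linspace(0, 2 * np.pi, 80)
        supine = np.sin(t)
        prone = np.sin(t ** 1.1)   # mildly warped copy of the same profile
        print(f"DTW distance: {dtw(supine, prone):.2f}")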

  2. NAVIS-An UGV Indoor Positioning System Using Laser Scan Matching for Large-Area Real-Time Applications

    PubMed Central

    Tang, Jian; Chen, Yuwei; Jaakkola, Anttoni; Liu, Jinbing; Hyyppä, Juha; Hyyppä, Hannu

    2014-01-01

    Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). While there are critical implementation problems, such as the ability to estimate the position by sensing the unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, further development work is necessary. Unfortunately, most of the existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application in large areas and over lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) a brute-force search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy, so it can be part of a large-area localization/mapping application. The NAVIS platform can reach an updating rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment, and can therefore be utilized in a real-time application. PMID:24999715

  3. NAVIS-An UGV indoor positioning system using laser scan matching for large-area real-time applications.

    PubMed

    Tang, Jian; Chen, Yuwei; Jaakkola, Anttoni; Liu, Jinbing; Hyyppä, Juha; Hyyppä, Hannu

    2014-07-04

    Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). While there are critical implementation problems, such as the ability to estimate the position by sensing the unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, further development work is necessary. Unfortunately, most of the existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application in large areas and over lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) a brute-force search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy, so it can be part of a large-area localization/mapping application. The NAVIS platform can reach an updating rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment, and can therefore be utilized in a real-time application.
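
    The brute-search scan-matching idea at the core of IMLE can be caricatured as follows: transform the scan by each candidate pose, sum the likelihood-map values under the transformed points, and keep the best pose. The sketch below omits the ICP preprocessing, multi-resolution layers, and patch division; the grid and scan are toy data.

        import numpy as np

        def score_pose(grid, pts, dx, dy, dtheta):
            """Sum of likelihood-grid values under the transformed scan points."""
            c, s = np.cos(dtheta), np.sin(dtheta)
            x = pts[:, 0] * c - pts[:, 1] * s + dx
            y = pts[:, 0] * s + pts[:, 1] * c + dy
            ix, iy = x.astype(int), y.astype(int)
            ok = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
            return grid[iy[ok], ix[ok]].sum()

        def brute_match(grid, pts, span=3):
            best = (-np.inf, None)
            for th in np.radians([-5.0, 0.0, 5.0]):
                for dx in range(-span, span + 1):
                    for dy in range(-span, span + 1):
                        sc = score_pose(grid, pts, dx, dy, th)
                        if sc > best[0]:
                            best = (sc, (dx, dy, th))
            return best

        grid = np.zeros((50, 50))
        grid[20:30, 25] = 1.0                                  # a "wall" likelihood
        scan = np.array([[25.0, y] for y in range(22, 28)])    # scan of that wall
        print(brute_match(grid, scan + np.array([2.0, 0.0])))  # expect dx = -2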

  4. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    NASA Astrophysics Data System (ADS)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous and anthropomorphic phantoms. The computational performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  5. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    PubMed

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of the distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within ±3% and the distal fall-off to within 2 mm. In an anthropomorphic phantom, the gamma index (dose tolerance = 3%, distance-to-agreement = 3 mm) was greater than 90% for six out of seven planes using the RS-MC, and three out of seven for the RS-PBA. The RS-MC algorithm demonstrated improved dosimetric accuracy over the RS-PBA in homogeneous, heterogeneous and anthropomorphic phantoms. The computational performance of the RS-MC was similar to the RS-PBA algorithm. For complex disease sites like breast, head and neck, and lung cancer, the RS-MC algorithm will provide significantly more accurate treatment planning.

  6. NOTE: A BPF-type algorithm for CT with a curved PI detector

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-01

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  7. A BPF-type algorithm for CT with a curved PI detector.

    PubMed

    Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping

    2006-08-21

    Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.

  8. FDG-PET and CSF biomarker accuracy in prediction of conversion to different dementias in a large multicentre MCI cohort.

    PubMed

    Caminiti, Silvia Paola; Ballarini, Tommaso; Sala, Arianna; Cerami, Chiara; Presotto, Luca; Santangelo, Roberto; Fallanca, Federico; Vanoli, Emilia Giovanna; Gianolli, Luigi; Iannaccone, Sandro; Magnani, Giuseppe; Perani, Daniela

    2018-01-01

    In this multicentre study in clinical settings, we assessed the accuracy of optimized procedures for FDG-PET brain metabolism and CSF classifications in predicting or excluding conversion to Alzheimer's disease (AD) dementia and non-AD dementias. We included 80 MCI subjects with neurological and neuropsychological assessments, an FDG-PET scan and CSF measures at entry, all with clinical follow-up. FDG-PET data were analysed with a validated voxel-based SPM method. The resulting single-subject SPM maps were classified by five imaging experts according to the disease-specific patterns, as "typical-AD", "atypical-AD" (i.e. posterior cortical atrophy, asymmetric logopenic AD variant, frontal-AD variant), "non-AD" (i.e. behavioural variant FTD, corticobasal degeneration, semantic variant FTD, dementia with Lewy bodies) or "negative" patterns. To perform the statistical analyses, the individual patterns were grouped either as "AD dementia vs. non-AD dementia (all diseases)" or as "FTD vs. non-FTD (all diseases)". Aβ42 and total and phosphorylated Tau CSF levels were classified dichotomously and using the Erlangen Score algorithm. Multivariate logistic models tested the prognostic accuracy of the FDG-PET-SPM and dichotomous CSF classifications. The accuracy of the Erlangen Score alone and of the Erlangen Score aided by the FDG-PET SPM classification was also evaluated. The multivariate logistic model identified the FDG-PET "AD" SPM classification (Expβ = 19.35, 95% C.I. 4.8-77.8, p < 0.001) and CSF Aβ42 (Expβ = 6.5, 95% C.I. 1.64-25.43, p < 0.05) as the best predictors of conversion from MCI to AD dementia. The "FTD" SPM pattern significantly predicted conversion to FTD dementias at follow-up (Expβ = 14, 95% C.I. 3.1-63, p < 0.001). Overall, the FDG-PET-SPM classification was the most accurate biomarker, able to correctly differentiate the MCI subjects who converted to either AD or FTD dementias from those who remained stable or reverted to normal cognition (Expβ = 17.9, 95% C.I. 4.55-70.46, p < 0.001). Our results support the relevant role of FDG-PET-SPM classification in predicting progression to different dementia conditions in the prodromal MCI phase, and in the exclusion of progression, outperforming CSF biomarkers.
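
    The prognostic modelling step, a multivariate logistic regression on dichotomous classifications, can be reproduced in outline. The sketch below runs on simulated data (the cohort itself is not public); the effect sizes in the generator are invented, and only the general workflow mirrors the abstract:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 80                                 # cohort size matching the study
    pet_ad = rng.integers(0, 2, n)         # dichotomous FDG-PET SPM "AD" pattern
    csf_abeta = rng.integers(0, 2, n)      # dichotomous pathological CSF Abeta42
    logit = -2.0 + 2.9 * pet_ad + 1.9 * csf_abeta   # assumed true effects
    converted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([pet_ad, csf_abeta]))
    fit = sm.Logit(converted, X).fit(disp=0)
    print(np.exp(fit.params))      # odds ratios, the "Expβ" quoted in the abstract
    print(np.exp(fit.conf_int()))  # their 95% confidence intervals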

  9. WE-AB-303-08: Direct Lung Tumor Tracking Using Short Imaging Arcs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Huang, C; Keall, P

    2015-06-15

    Purpose: Most current tumor tracking technologies rely on implanted markers, which suffer from the potential toxicity of marker placement and mis-targeting due to marker migration. Several markerless tracking methods have been proposed: these are either indirect methods or have difficulties tracking lung tumors in most clinical cases due to overlapping anatomies in 2D projection images. We propose a direct lung tumor tracking algorithm robust to overlapping anatomies using short imaging arcs. Methods: The proposed algorithm tracks the tumor based on kV projections acquired within the latest six-degree imaging arc. To account for respiratory motion, an external motion surrogate is used to select projections of the same phase within the latest arc. For each arc, the pre-treatment 4D cone-beam CT (CBCT) with tumor contours is used to estimate and remove the contribution to the integral attenuation from surrounding anatomies. The position of the tumor model extracted from the 4D CBCT of the same phase is then optimized to match the processed projections using the conjugate gradient method. The algorithm was retrospectively validated on two kV scans of a lung cancer patient with implanted fiducial markers. This patient was selected because the tumor is attached to the mediastinum, representing a challenging case for markerless tracking methods. The tracking results were converted to expected marker positions and compared with marker trajectories obtained via direct marker segmentation (ground truth). Results: The root-mean-squared errors of tracking were 0.8 mm and 0.9 mm in the superior-inferior direction for the two scans. Tracking error was found to be below 2 and 3 mm for 90% and 98% of the time, respectively. Conclusions: A direct lung tumor tracking algorithm robust to overlapping anatomies was proposed and validated on two scans of a lung cancer patient. Sub-millimeter tracking accuracy was observed, indicating the potential of this algorithm for real-time guidance applications.

  10. Comparison of Two Detection Combination Algorithms for Phased Array Radars

    DTIC Science & Technology

    2015-07-01

    Data were generated by a simulator of a multi-function radar (MFR) and the combination algorithms are evaluated with the recorded simulation data. ... An electronically scanned phased array Multi-Function Radar (MFR) is a type of radar whose transmitter and receiver functions are composed of numerous small transmit/receive modules. An MFR can perform many functions previously performed by individual, dedicated radars for search, tracking and ...

  11. ScanSAR interferometric processing using existing standard InSAR software for measuring large scale land deformation

    NASA Astrophysics Data System (ADS)

    Liang, Cunren; Zeng, Qiming; Jia, Jianying; Jiao, Jian; Cui, Xi'ai

    2013-02-01

    Scanning synthetic aperture radar (ScanSAR) mode is an efficient way to map large scale geophysical phenomena at low cost. The work presented in this paper is dedicated to ScanSAR interferometric processing and its implementation by making full use of existing standard interferometric synthetic aperture radar (InSAR) software. We first discuss the properties of the ScanSAR signal and its phase-preserved focusing using the full aperture algorithm in terms of interferometry. Then a complete interferometric processing flow is proposed. The standard ScanSAR product is decoded subswath by subswath with burst gaps padded with zero-pulses, followed by a Doppler centroid frequency estimation for each subswath and a polynomial fit of all of the subswaths for the whole scene. The burst synchronization of the interferometric pair is then calculated, and only the synchronized pulses are kept for further interferometric processing. After the complex conjugate multiplication of the interferometric pair, the residual non-integer pulse repetition interval (PRI) part between adjacent bursts caused by zero padding is compensated by resampling using a sinc kernel. The subswath interferograms are then mosaicked, in which a method is proposed to remove the subswath discontinuities in the overlap area. Then the following interferometric processing goes back to the traditional stripmap processing flow. A processor written with C and Fortran languages and controlled by Perl scripts is developed to implement these algorithms and processing flow based on the JPL/Caltech Repeat Orbit Interferometry PACkage (ROI_PAC). Finally, we use the processor to process ScanSAR data from the Envisat and ALOS satellites and obtain large scale deformation maps in the radar line-of-sight (LOS) direction.
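
    The compensation of the residual non-integer PRI gap by sinc-kernel resampling amounts to a fractional-sample delay of the azimuth signal. A minimal sketch of such a resampler is shown below; the kernel half-length, the Hamming window, and the function name are assumptions rather than details taken from the processor itself:

    import numpy as np

    def sinc_resample(signal, frac_shift, half_len=8):
        # Fractional-sample shift of a (complex) azimuth signal using a
        # truncated, windowed sinc interpolation kernel; frac_shift in samples.
        taps = np.arange(-half_len, half_len + 1)
        kernel = np.sinc(taps - frac_shift)
        kernel *= np.hamming(kernel.size)   # limit truncation ripple
        kernel /= kernel.sum()
        return np.convolve(signal, kernel, mode="same")

    # Example: shift a complex exponential by 0.37 of a sample interval.
    t = np.arange(256)
    sig = np.exp(2j * np.pi * 0.01 * t)
    shifted = sinc_resample(sig, 0.37)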

  12. Application of the LDM algorithm to identify small lung nodules on low-dose MSCT scans

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Ginsberg, Michelle S.; Lefkowitz, Robert A.; Jiang, Li; Cooper, Cathleen; Schwartz, Lawrence H.

    2004-05-01

    In this work, we present a computer-aided detection (CAD) algorithm for small lung nodules on low-dose MSCT images. With this technique, identification of potential lung nodules is carried out with a local density maximum (LDM) algorithm, followed by reduction of false positives from the nodule candidates using task-specific 2-D/3-D features along with a knowledge-based nodule inclusion/exclusion strategy. Twenty-eight MSCT scans (40/80 mAs, 120 kVp, 5 mm collimation/2.5 mm reconstruction) from our lung cancer screening program that included at least one lung nodule were selected for this study. Two radiologists independently interpreted these cases. Subsequently, a consensus reading by both radiologists and CAD was generated to define a "gold standard". In total, 165 nodules were considered as the "gold standard" (average: 5.9 nodules/case; range: 1-22 nodules/case). The two radiologists detected 146 nodules (88.5%) and CAD detected 100 nodules (60.6%) with 8.7 false positives/case. CAD detected an additional 19 nodules (6 nodules >3 mm and 13 nodules <3 mm) that had been missed by both radiologists. Preliminary results show that the CAD is capable of detecting small lung nodules with an acceptable number of false positives on low-dose MSCT scans and that it can detect nodules that are otherwise missed by radiologists, though the majority are small nodules (<3 mm).
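
    The candidate-detection step can be imitated in a few lines of SciPy: a voxel is a local density maximum if it equals the maximum of its neighborhood and exceeds an intensity threshold. This is a toy stand-in for the LDM stage only; the neighborhood radius and HU threshold below are illustrative, and the false-positive reduction stage is not shown:

    import numpy as np
    from scipy import ndimage

    def local_density_maxima(volume, radius=2, threshold=-600):
        # Voxels that are the maximum within their (2r+1)^3 neighborhood
        # and brighter than a threshold become nodule candidates.
        size = 2 * radius + 1
        is_max = ndimage.maximum_filter(volume, size=size) == volume
        return np.argwhere(is_max & (volume > threshold))

    # Example on a toy 3-D "CT" volume with one bright, nodule-like voxel:
    vol = np.random.default_rng(1).normal(-800, 50, (40, 64, 64))
    vol[20, 32, 32] = -200
    print(local_density_maxima(vol, radius=3, threshold=-400))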

  13. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  14. Computer-aided screening system for cervical precancerous cells based on field emission scanning electron microscopy and energy dispersive x-ray images and spectra

    NASA Astrophysics Data System (ADS)

    Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi

    2016-10-01

    The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize a material by its elemental properties has inspired this research, which has developed an FE-SEM/EDX-based cervical cancer screening system. The developed computer-aided screening system consisted of two parts: automatic feature extraction and classification. For feature extraction, an algorithm was introduced that extracts discriminant features from the FE-SEM/EDX images and spectra of cervical cells. The system automatically extracted two types of features based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features were extracted from the FE-SEM/EDX image using a gray-level co-occurrence matrix technique, while the FE-SEM/EDX spectral features were calculated from peak heights and the corrected areas under the peaks. A discriminant analysis technique was employed to predict the cervical precancerous stage into three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity performances were 98.2%, 99.0%, and 98.0%, respectively.
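
    The textural part of the feature extraction, gray-level co-occurrence matrix (GLCM) statistics, is available off the shelf in scikit-image. A minimal sketch on a synthetic image follows; the distances, angles, and choice of four Haralick-style properties are assumptions, not the paper's exact configuration (the functions were spelled greycomatrix/greycoprops before scikit-image 0.19):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    img = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop).ravel()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)   # one value per (distance, angle) pair and property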

  15. Automated circumferential construction of first-order aqueous humor outflow pathways using spectral-domain optical coherence tomography.

    PubMed

    Huang, Alex S; Belghith, Akram; Dastiridou, Anna; Chopra, Vikas; Zangwill, Linda M; Weinreb, Robert N

    2017-06-01

    The purpose was to create a three-dimensional (3-D) model of circumferential aqueous humor outflow (AHO) in a living human eye with an automated detection algorithm for Schlemm’s canal (SC) and first-order collector channels (CC) applied to spectral-domain optical coherence tomography (SD-OCT). Anterior segment SD-OCT scans from a subject were acquired circumferentially around the limbus. A Bayesian ridge method was used to approximate the location of the SC on infrared confocal laser scanning ophthalmoscopic images, with a cross-multiplication tool developed to initialize SC/CC detection, which was automated through a fuzzy hidden Markov chain approach. Automatic segmentation of the SC and initial CCs was manually confirmed by two masked graders. Outflow pathways detected by the segmentation algorithm were reconstructed into a 3-D representation of AHO. Overall, <1% of images (5114 total B-scans) were ungradable. The automatic segmentation algorithm performed well, detecting the SC 98.3% of the time with <0.1% false-positive detections compared to expert grader consensus. CCs were detected 84.2% of the time with 1.4% false-positive detections. The 3-D representation of AHO pathways demonstrated a variably thicker and thinner SC with some clear CC roots. Circumferential (360 deg), automated, and validated AHO detection of angle structures in the living human eye with reconstruction was possible.

  16. The model of encryption algorithm based on non-positional polynomial notations and constructed on an SP-network

    NASA Astrophysics Data System (ADS)

    Kapalova, N.; Haumen, A.

    2018-05-01

    This paper addresses the structure and properties of a cryptographic information protection algorithm model based on non-positional polynomial notations (NPNs) and constructed on an SP-network. The main task of the research is to increase the cryptographic strength of the algorithm. In the paper, the transformation resulting in the improvement of the cryptographic strength of the algorithm is described in detail. The proposed model is based on an SP-network; the reason for using an SP-network in this model is the conversion properties of such networks. In the encryption process, transformations based on S-boxes and P-boxes are used, and these transformations are known to withstand cryptanalysis. In addition, the proposed model uses transformations that satisfy the requirements of the "avalanche effect". As a result of this work, a computer program that implements the encryption algorithm model based on the SP-network has been developed.
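
    For readers unfamiliar with SP-networks, the round structure referred to here (key mixing, S-box substitution, then a bit permutation) can be illustrated generically. The toy 16-bit round below is a textbook sketch, not the NPN-based algorithm of the paper; the S-box, permutation table, and round keys are arbitrary examples:

    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]      # example 4-bit S-box
    PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]  # bit P-box

    def sp_round(block, round_key):
        # One substitution-permutation round on a 16-bit block.
        block ^= round_key                                # key mixing
        nibbles = [(block >> (4 * i)) & 0xF for i in range(4)]
        block = sum(SBOX[n] << (4 * i) for i, n in enumerate(nibbles))  # S-boxes
        out = 0
        for src, dst in enumerate(PERM):                  # bit permutation
            out |= ((block >> src) & 1) << dst
        return out

    state = 0x1234
    for k in (0xA5A5, 0x0F0F, 0x3C3C):                    # toy round keys
        state = sp_round(state, k)
    print(hex(state))

    Flipping a single input bit changes one S-box nibble, and the P-box then spreads those changed bits into four different S-boxes of the next round, which is the diffusion mechanism behind the avalanche effect mentioned above.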

  17. Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.

    PubMed

    Wang, Charlie C L; Manocha, Dinesh

    2013-01-01

    We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
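
    The elementary operation underlying the method, clipping a polygon against the plane of a BSP node, follows the classic Sutherland-Hodgman scheme. The sketch below shows only this plane-clipping kernel, in plain floating point, without the paper's point-based representation or its degeneracy-resistant logical operations:

    import numpy as np

    def clip_polygon(poly, n, d):
        # Keep the part of the polygon in the half-space dot(n, x) + d >= 0.
        out = []
        for i in range(len(poly)):
            p, q = poly[i], poly[(i + 1) % len(poly)]
            dp, dq = np.dot(n, p) + d, np.dot(n, q) + d
            if dq >= 0:                          # q is inside
                if dp < 0:                       # edge enters: add crossing
                    out.append(p + (dp / (dp - dq)) * (q - p))
                out.append(q)
            elif dp >= 0:                        # edge leaves: add crossing
                out.append(p + (dp / (dp - dq)) * (q - p))
        return out

    # Example: clip a unit square to x >= 0.5.
    square = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]]
    print(clip_polygon(square, np.array([1.0, 0.0, 0.0]), -0.5))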

  18. Broadband Gerchberg-Saxton algorithm for freeform diffractive spectral filter design.

    PubMed

    Vorndran, Shelby; Russo, Juan M; Wu, Yuechen; Pelaez, Silvana Ayala; Kostuk, Raymond K

    2015-11-30

    A multi-wavelength extension of the Gerchberg-Saxton (GS) algorithm is developed to design and optimize a surface-relief Diffractive Optical Element (DOE). The DOE simultaneously diffracts distinct wavelength bands into separate target regions. A description of the algorithm is provided, and parameters that affect filter performance are examined. Performance is based on the spectral power collected within specified regions on a receiver plane. The modified GS algorithm is used to design spectrum-splitting optics for CdSe and Si photovoltaic (PV) cells. The DOE has an average optical efficiency of 87.5% over the spectral bands of interest (400-710 nm and 710-1100 nm). The simulated PV conversion efficiency is 37.7%, which is 29.3% higher than the efficiency of the better-performing PV cell without spectrum-splitting optics.
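
    The single-wavelength GS iteration that the paper extends to multiple bands alternates between the DOE plane and the receiver plane, imposing the known amplitude in each while keeping the computed phase. A minimal sketch, with the far field modeled by a plain FFT and an arbitrary target pattern (all sizes and the target region are illustrative):

    import numpy as np

    def gerchberg_saxton(src_amp, tgt_amp, n_iter=200, seed=0):
        # Alternate between source and target planes, keeping the phase and
        # replacing the amplitude with the known one in each plane.
        rng = np.random.default_rng(seed)
        field = src_amp * np.exp(2j * np.pi * rng.random(src_amp.shape))
        for _ in range(n_iter):
            far = np.fft.fft2(field)
            far = tgt_amp * np.exp(1j * np.angle(far))     # impose target amplitude
            near = np.fft.ifft2(far)
            field = src_amp * np.exp(1j * np.angle(near))  # impose source amplitude
        return np.angle(field)                             # DOE phase profile

    # Example: steer a uniform beam into one rectangular region of the receiver.
    src = np.ones((64, 64))
    tgt = np.zeros((64, 64)); tgt[16:48, 8:24] = 1.0
    doe_phase = gerchberg_saxton(src, tgt)

    The multi-wavelength variant of the paper would evaluate the propagation at each design wavelength and combine the per-band constraints into one surface-relief profile; that step is not reproduced here.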

  19. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    PubMed

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of reconstruction algorithm. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in the PDRP expression among different algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. The PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  1. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.

  2. Lead-Free Inverted Planar Formamidinium Tin Triiodide Perovskite Solar Cells Achieving Power Conversion Efficiencies up to 6.22%.

    PubMed

    Liao, Weiqiang; Zhao, Dewei; Yu, Yue; Grice, Corey R; Wang, Changlei; Cimaroli, Alexander J; Schulz, Philip; Meng, Weiwei; Zhu, Kai; Xiong, Ren-Gen; Yan, Yanfa

    2016-11-01

    Efficient lead (Pb)-free inverted planar formamidinium tin triiodide (FASnI3) perovskite solar cells (PVSCs) are demonstrated. Our FASnI3 PVSCs achieved average power conversion efficiencies (PCEs) of 5.41% ± 0.46% and a maximum PCE of 6.22% under forward voltage scan. The PVSCs exhibit small photocurrent-voltage hysteresis and high reproducibility. The champion cell shows a steady-state efficiency of ≈6.00% for over 100 s. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Efficient sintering of nanocrystalline titanium dioxide films for dye solar cells via raster scanning laser

    NASA Astrophysics Data System (ADS)

    Mincuzzi, Girolamo; Vesce, Luigi; Reale, Andrea; Di Carlo, Aldo; Brown, Thomas M.

    2009-09-01

    By identifying the right combination of laser parameters, in particular the integrated laser fluence Φ, we fabricated dye solar cells (DSCs) with UV laser-sintered TiO2 films exhibiting a power conversion efficiency η = 5.2%, the highest reported for laser-sintered devices. η is dramatically affected by Φ and a clear trend is reported. Significantly, DSCs fabricated by raster scanning the laser beam to sinter the TiO2 films are made as efficient as those with oven-sintered ones. These results, confirmed on three batches of cells, demonstrate the remarkable potential (noncontact, local, low cost, rapid, selective, and scalable) of scanning laser processing applied to DSC technology.

  4. Determining the 3-D structure and motion of objects using a scanning laser range sensor

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Smith, Philip W.

    1993-01-01

    In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single-beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay between sampling successive image pixels. If an estimate of the motion of the object can be determined, this distortion can be eliminated, but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.

  5. Calcium scoring with dual-energy CT in men and women: an anthropomorphic phantom study

    NASA Astrophysics Data System (ADS)

    Li, Qin; Liu, Songtao; Myers, Kyle; Gavrielides, Marios A.; Zeng, Rongping; Sahiner, Berkman; Petrick, Nicholas

    2016-03-01

    This work aimed to quantify and compare the potential impact of gender differences on coronary artery calcium scoring with dual-energy CT. An anthropomorphic thorax phantom with four synthetic heart vessels (diameter 3-4.5 mm; female/male left main and left circumflex arteries) was scanned with and without female breast plates. Ten repeat scans were acquired in both single- and dual-energy modes and reconstructed at six reconstruction settings: two slice thicknesses (3 mm, 0.6 mm) and three reconstruction algorithms (FBP, IR3, IR5). Agatston and calcium volume scores were estimated from the reconstructed data using a segmentation-based approach. The total calcium score (summation over the four vessels) and male/female calcium scores (summation over male/female vessels scanned in the phantom without/with breast plates) were calculated accordingly. Both Agatston and calcium volume scores were found to be comparable between single- and dual-energy scans (Pearson r = 0.99, p < 0.05). The total calcium scores were larger for the thinner slice thickness. Among the scores obtained from the three reconstruction algorithms, FBP yielded the highest and IR5 the lowest scores. The total calcium scores from the phantom without breast plates were significantly larger than those from the phantom with breast plates, and the difference increased with stronger denoising in the iterative algorithm and with thicker slices. Both gender-based anatomical differences and vessel size impacted the calcium scores. The calcium volume scores tended to be underestimated when the vessels were smaller. These findings are valuable for understanding inconsistencies between women and men in calcium scoring, and for standardizing imaging protocols for improved gender-specific calcium scoring.
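
    For reference, segmentation-based scoring of this kind builds on the textbook Agatston definition: threshold each slice at 130 HU, weight each lesion's area by its peak attenuation, and sum over lesions and slices. The sketch below implements that textbook definition only; the 1 mm² minimum-lesion filter is a common convention, not a detail of this study:

    import numpy as np
    from scipy import ndimage

    def agatston_score(slices_hu, pixel_area_mm2):
        # Textbook Agatston scoring over a stack of 2-D axial slices in HU.
        def weight(peak_hu):
            return 4 if peak_hu >= 400 else 3 if peak_hu >= 300 else \
                   2 if peak_hu >= 200 else 1
        total = 0.0
        for sl in slices_hu:
            labels, n = ndimage.label(sl >= 130)          # candidate lesions
            for lesion in range(1, n + 1):
                m = labels == lesion
                area = m.sum() * pixel_area_mm2
                if area < 1.0:                            # skip sub-mm^2 specks
                    continue
                total += area * weight(sl[m].max())
        return total

    # Example: one 3x3-pixel lesion at 450 HU, 0.25 mm^2 pixels -> 2.25 * 4 = 9.
    slc = np.zeros((64, 64)); slc[30:33, 30:33] = 450.0
    print(agatston_score([slc], pixel_area_mm2=0.25))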

  6. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    PubMed

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second stage adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. The final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentations using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.

  7. [Evaluation of three methods for constructing craniofacial mid-sagittal plane based on the cone beam computed tomography].

    PubMed

    Wang, S W; Li, M; Yang, H F; Zhao, Y J; Wang, Y; Liu, Y

    2016-04-18

    To compare the accuracy of the iterative closest point (ICP) algorithm, the Procrustes analysis (PA) algorithm, and a landmark-based method for constructing the mid-sagittal plane (MSP) from cone beam computed tomography (CBCT), and to provide a theoretical basis for establishing a coordinate system for CBCT images and for symmetry analysis. Ten patients were selected and scanned by CBCT before orthodontic treatment. The scan data were imported into Mimics 10.0 to reconstruct three-dimensional skulls, and the MSP of each skull was generated by the ICP algorithm, the PA algorithm and the landmark-based method. MSP extraction by the ICP or PA algorithm involved three steps. First, the 3D skull was processed in the reverse engineering software Geomagic Studio 2012 to obtain a mirror skull. Then, the original and mirror skulls were registered, by the ICP algorithm in Geomagic Studio 2012 or by the PA algorithm in NX Imageware 11.0. Finally, the registered data were united into a new dataset used to calculate the MSP of the original data in Geomagic Studio 2012. The landmark-based mid-sagittal plane was determined by sella (S), nasion (N) and basion (Ba), as in the traditional method, in the software InVivoDental 5.0. The distances from 9 pairs of symmetric anatomical landmarks to the three sagittal planes were measured, and their absolute values were compared. The one-way ANOVA test was used to analyze the differences among the 3 MSPs, with pairwise comparison performed by the LSD method. MSPs calculated by the three methods were all usable for clinical analysis, as judged from the frontal view. However, there were significant differences among the distances from the 9 pairs of symmetric anatomical landmarks to the MSPs (F=10.932, P=0.001). The LSD test showed no significant difference between the ICP algorithm and the landmark-based method (P=0.11), while there was a significant difference between the PA algorithm and the landmark-based method (P=0.01). Mid-sagittal planes of 3D skulls can be generated based on the ICP or PA algorithm. There was no significant difference between the ICP algorithm and the landmark-based method. For subjects with no evident asymmetry, the ICP algorithm is feasible for clinical analysis.

  8. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    PubMed

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, thus giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan time owing to the limitations of the k-space sampling scheme and image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution and shorten the scan time and provide high-quality reconstructed images.
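
    The golden-ratio Cartesian scheme can be stated in a few lines: successive phase-encoding indices advance by the golden-ratio conjugate of the k-space extent, so any contiguous run of acquisitions covers k-space quasi-uniformly and can be regrouped retrospectively into frames of arbitrary temporal width. A sketch (function name and sizes are illustrative, not from the paper):

    import numpy as np

    def golden_ratio_lines(n_pe, n_acq):
        # Phase-encoding line order: each acquisition advances by the
        # golden-ratio conjugate (~0.618) of the full phase-encode range.
        g = (np.sqrt(5) - 1) / 2
        return np.floor(n_pe * ((np.arange(n_acq) * g) % 1.0)).astype(int)

    # Any window of consecutive acquisitions is spread nearly evenly:
    print(golden_ratio_lines(256, 12))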

  9. Autonomous intelligent assembly systems LDRD 105746 final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2013-04-01

    This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires the integration of a large number of different software technologies, including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although the project was ultimately unsuccessful in achieving the target brick-stacking task autonomously, numerous important component technologies were nevertheless developed. These include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes the project and these developed technologies.

  10. Enhanced capture rate for haze defects in production wafer inspection

    NASA Astrophysics Data System (ADS)

    Auerbach, Ditza; Shulman, Adi; Rozentsvige, Moshe

    2010-03-01

    Photomask degradation via haze defect formation is an increasingly troublesome yield problem in the semiconductor fab. Wafer inspection is often utilized to detect haze defects because it can be a by-product of process-control wafer inspection; furthermore, the detection of haze on the wafer is effectively enhanced by the multitude of distinct fields being scanned. In this paper, we demonstrate a novel application for enhancing the wafer inspection tool's sensitivity to haze defects even further. In particular, we present results of bright-field wafer inspection using the new application on several photo layers suffering from haze defects. One way in which the enhanced sensitivity can be achieved in inspection tools is by using a double scan of the wafer: one regular scan with the normal recipe and another high-sensitivity scan from which only the repeater defects are extracted (the non-repeater defects consist largely of noise which is difficult to filter). Our solution essentially combines the double scan into a single high-sensitivity scan whose processing is carried out along two parallel routes (see Fig. 1). Along one route, potential defects follow the standard recipe thresholds to produce a defect map at the nominal sensitivity. Along the alternate route, potential defects are used to extract only field-repeater defects, which are identified using an optimal repeater algorithm that eliminates "false repeaters". At the end of the scan, the two defect maps are merged into one, with optical scan images available for all the merged defects. It is important to note that there is no throughput penalty; in addition, the repeater sensitivity is increased relative to a double scan, due to a novel runtime algorithm implementation whose memory requirements are minimized, thus enabling a much larger number of potential defects to be searched for repeaters. We evaluated the new application on photo wafers which contained both random and haze defects. The evaluation procedure involved scanning with three different recipe types: (1) Standard Inspection: a nominal recipe with a low false-alarm rate was used to scan the wafer, and repeaters were extracted from the final defect map; (2) Haze Monitoring Application: recipe sensitivity was enhanced and run on a single field column from which only repeating defects were extracted; (3) Enhanced Repeater Extractor: defect processing included the two parallel routes, a nominal recipe for the random defects and the new highly sensitive repeater-extraction algorithm. The results showed that the new application (recipe #3) had the highest capture rate on haze defects and detected new repeater defects not found by the first two recipes. In addition, the recipe was much simpler to set up, since repeaters are filtered separately from random defects. We expect that in the future, with the advent of mask-less lithography and EUV lithography, the monitoring of field- and die-repeating defects on the wafer will become a necessity for process control in the semiconductor fab.
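
    The repeater-extraction route can be illustrated with a simple binning scheme: fold every defect coordinate back into field-relative coordinates and flag locations that recur across several distinct fields. The sketch below is a conceptual stand-in for the optimized runtime algorithm described above; the tolerance grid and minimum field count are assumed parameters, and plain grid binning can split repeats that straddle a bin boundary, which a production implementation must handle:

    from collections import defaultdict

    def field_repeaters(defects, field_w, field_h, tol=0.5, min_fields=3):
        # Bin each defect by its position modulo the field size; coordinates
        # recurring in several distinct fields are flagged as repeaters.
        buckets = defaultdict(set)
        for x, y in defects:
            field = (int(x // field_w), int(y // field_h))
            key = (round((x % field_w) / tol), round((y % field_h) / tol))
            buckets[key].add(field)
        return [k for k, fields in buckets.items() if len(fields) >= min_fields]

    # Example: the same within-field location seen in three different fields.
    defects = [(1.0, 2.0), (11.0, 2.1), (21.1, 1.9), (5.0, 7.0)]
    print(field_repeaters(defects, field_w=10.0, field_h=10.0))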

  11. Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr

    2016-03-01

    The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.

  12. [Preliminary application of an improved Demons deformable registration algorithm in tumor radiotherapy].

    PubMed

    Zhou, Lu; Zhen, Xin; Lu, Wenting; Dou, Jianhong; Zhou, Linghong

    2012-01-01

    To validate the efficiency of an improved Demons deformable registration algorithm and evaluate its application in the registration of treatment images and planning images in image-guided radiotherapy (IGRT). Based on Brox's gradient constancy assumption and Malis's efficient second-order minimization algorithm, a grey-value gradient similarity term was added into the original energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function for automatic determination of the iteration number. The proposed algorithm was validated using mathematically deformed images, physically deformed phantom images and clinical tumor images. Compared with the original Additive Demons algorithm, the improved Demons algorithm achieved a higher precision and a faster convergence speed. Due to the influence of different scanning conditions in fractionated radiotherapy, the density ranges of the treatment image and the planning image may differ. The improved Demons algorithm can achieve faster and more accurate registration for radiotherapy.
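
    For orientation, the classic (Thirion) demons displacement update that such improvements build on is u = (m - f) grad(f) / (|grad(f)|^2 + (m - f)^2), evaluated per pixel. The sketch below implements one such baseline update step only; the paper's gradient-constancy term, second-order minimization, and L-BFGS optimization are not reproduced:

    import numpy as np

    def demons_step(fixed, moving, eps=1e-9):
        # One classic demons update: a per-pixel displacement driving the
        # moving image toward the fixed image along the fixed-image gradient.
        gy, gx = np.gradient(fixed)
        diff = moving - fixed
        denom = gx**2 + gy**2 + diff**2 + eps
        return diff * gx / denom, diff * gy / denom   # (ux, uy)

    # Toy use: two slightly shifted squares.
    f = np.zeros((64, 64)); f[20:40, 20:40] = 1.0
    m = np.zeros((64, 64)); m[22:42, 22:42] = 1.0
    ux, uy = demons_step(f, m)
    # A full registration loop would Gaussian-smooth (ux, uy), warp the moving
    # image with the accumulated field, and iterate to convergence.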

  13. Change descriptors for determining nodule malignancy in national lung screening trial CT screening images

    NASA Astrophysics Data System (ADS)

    Geiger, Benjamin; Hawkins, Samuel; Hall, Lawrence O.; Goldgof, Dmitry B.; Balagurunathan, Yoganand; Gatenby, Robert A.; Gillies, Robert J.

    2016-03-01

    Pulmonary nodules are effectively diagnosed in CT scans, but determining their malignancy has been a challenge. The rate of change of the volume of a pulmonary nodule is known to be a prognostic factor for cancer development. In this study, we propose that other changes in imaging characteristics are similarly informative. We examined the combination of image features across multiple CT scans, taken from the National Lung Screening Trial, with individual scans of the same patient separated by approximately one year. By subtracting the values of existing features in multiple scans for the same patient, we were able to improve the ability of existing classification algorithms to determine whether a nodule will become malignant. We trained each classifier on 83 nodules determined to be malignant by biopsy and 172 nodules determined to be benign by their clinical stability through two years of no change; classifiers were tested on 77 malignant and 144 benign nodules, using a set of features that in a test-retest experiment were shown to be stable. An accuracy of 83.71% and AUC of 0.814 were achieved with the Random Forests classifier on a subset of features determined to be stable via test-retest reproducibility analysis, further reduced with the Correlation-based Feature Selection algorithm.

  14. Remote assessment of ocean color for interpretation of satellite visible imagery: A review

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.; Morel, A. Y.

    1983-01-01

    An assessment is presented of the state-of-the-art of remote (satellite-based) sensing of color variations in the ocean due to phytoplankton, as performed by the Coastal Zone Color Scanner (CZCS). Attention is given to physical problems associated with ocean color remote sensing, algorithms for the correction of atmospheric effects, in-water constituent retrieval algorithms, and application of the algorithms to CZCS imagery. The applicability of CZCS to both near-coast and mid-ocean waters is considered, and it is concluded that while differences between the two environments are complex, universal algorithms can be used for the case of mid-ocean waters, and site-specific algorithms are adequate for CZCS imaging of the near-coast oceanic environment. A short description of the CZCS and some sample photographs are provided in an appendix.

  15. A new algorithm for agile satellite-based acquisition operations

    NASA Astrophysics Data System (ADS)

    Bunkheila, Federico; Ortore, Emiliano; Circi, Christian

    2016-06-01

    Taking advantage of the high manoeuvrability and the accurate pointing of the so-called agile satellites, an algorithm which allows efficient management of the operations concerning optical acquisitions is described. Fundamentally, this algorithm can be subdivided into two parts: in the first one the algorithm operates a geometric classification of the areas of interest and a partitioning of these areas into stripes which develop along the optimal scan directions; in the second one it computes the succession of the time windows in which the acquisition operations of the areas of interest are feasible, taking into consideration the potential restrictions associated with these operations and with the geometric and stereoscopic constraints. The results and the performances of the proposed algorithm have been determined and discussed considering the case of the Periodic Sun-Synchronous Orbits.

  16. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the detection algorithm were excess-green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
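
    The excess-green stage of such pipelines is compact enough to sketch directly: normalize the RGB channels, form ExG = 2g - r - b, and threshold. The fixed threshold below is a placeholder for the statistically computed, adaptively adjusted value the abstract describes:

    import numpy as np

    def excess_green_mask(rgb, thresh=0.05):
        # ExG vegetation index on chromaticity-normalized channels.
        rgb = rgb.astype(float)
        s = rgb.sum(axis=2, keepdims=True) + 1e-9
        r, g, b = np.moveaxis(rgb / s, 2, 0)
        return (2 * g - r - b) > thresh   # True where vegetation is likely

    img = np.random.default_rng(4).integers(0, 256, (120, 160, 3))
    mask = excess_green_mask(img)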

  17. The introduction of capillary structures in 4D simulated vascular tree for ART 3.5D algorithm further validation

    NASA Astrophysics Data System (ADS)

    Barra, Beatrice; El Hadji, Sara; De Momi, Elena; Ferrigno, Giancarlo; Cardinale, Francesco; Baselli, Giuseppe

    2017-03-01

    Several neurosurgical procedures, such as Arteriovenous Malformation (AVM) and aneurysm embolizations and StereoElectroEncephaloGraphy (SEEG), require accurate reconstruction of the cerebral vascular tree, as well as the classification of arteries and veins, in order to increase the safety of the intervention. Segmentation of arteries and veins from 4D CT perfusion scans has already been proposed in different studies. Nonetheless, such procedures require long acquisition protocols, and the radiation dose given to the patient is not negligible. Hence, space is open to approaches attempting to recover the dynamic information from standard Contrast Enhanced Cone Beam Computed Tomography (CE-CBCT) scans. The algorithm proposed by our team, called ART 3.5D, is a novel algorithm based on the post-processing of both the angiogram and the raw data of a standard Digital Subtraction Angiography from a CBCT (DSA-CBCT), allowing artery and vein segmentation and labeling without requiring any additional radiation exposure for the patient or lowering the resolution. In addition, while previous versions of the algorithm considered only the distinction of arteries and veins, here capillary-phase simulation and identification are introduced, to provide further information useful for more precise vasculature segmentation.

  18. Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.

    2015-09-01

    Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue, and data are processed similarly to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing that the FFT, as a general spectral estimation algorithm, takes into account only the samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using both MATLAB simulations and analysis of experimental results generated from a home-built OCT system to simulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher-quality (i.e., higher-resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced imaging time.

  19. Processing of fetal heart rate through non-invasive adaptive system based on recursive least squares algorithm

    NASA Astrophysics Data System (ADS)

    Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir

    2017-10-01

    In this article, we describe an innovative non-invasive method of Fetal Phonocardiography (fPCG) using fiber-optic sensors and an adaptive algorithm for the measurement of fetal heart rate (fHR). Conventional PCG is based on non-invasive scanning of acoustic signals by means of a microphone placed on the thorax; in fPCG, the microphone is placed on the maternal abdomen. Our solution is based on patent-pending non-invasive scanning of acoustic signals by means of a fiber-optic interferometer. Fiber-optic sensors are resistant to technical artifacts such as electromagnetic interference (EMI), so they can be used in situations where it is impossible to use conventional electronic fetal monitoring (EFM) methods, e.g. during Magnetic Resonance Imaging (MRI) examinations or deliveries in water. The adaptive evaluation system is based on the Recursive Least Squares (RLS) algorithm. Based on real measurements performed on five volunteers with their written consent, we created a simplified dynamic model of the propagation of heartbeat sounds (HS) through the human body. This model allows us to verify the proposed RLS-based adaptive system. The functionality of the proposed non-invasive adaptive system was verified using objective parameters such as Sensitivity (S+) and Signal-to-Noise Ratio (SNR).
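
    The abstract names the RLS algorithm without giving its recursion; the standard form, sketched below under the assumption of a primary (abdominal) channel and a reference channel correlated with the interference, is:

      import numpy as np

      def rls_filter(d, x, order=8, lam=0.99, delta=100.0):
          """Standard recursive least squares adaptive filter.

          d     : primary signal (e.g., the abdominal fPCG recording)
          x     : reference input correlated with the interference
          lam   : forgetting factor; delta scales the initial P matrix
          Returns the a priori error signal, i.e. the cleaned output.
          """
          w = np.zeros(order)                    # filter weights
          P = np.eye(order) * delta              # inverse correlation matrix
          xbuf = np.zeros(order)
          e = np.zeros(len(d))
          for i in range(len(d)):
              xbuf = np.roll(xbuf, 1)
              xbuf[0] = x[i]
              k = P @ xbuf / (lam + xbuf @ P @ xbuf)   # gain vector
              e[i] = d[i] - w @ xbuf                   # a priori error
              w += k * e[i]                            # weight update
              P = (P - np.outer(k, xbuf @ P)) / lam    # covariance update
          return e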

  20. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is their much longer image acquisition time (hours per scan compared to seconds or minutes at a synchrotron), which limits their application to dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing image quality, e.g. by reducing the exposure time or the number of projections. To lower this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality from lower X-ray signal counts and fewer projections than conventional methods require. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithms was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal-to-noise ratio.
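
    The abstract does not say which iterative algorithm was optimized; as a generic example of the class, a SIRT-style reconstruction (an assumption here, not necessarily the authors' choice) can be sketched with a given system matrix A:

      import numpy as np

      def sirt(A, b, n_iter=50):
          """Simultaneous iterative reconstruction technique (SIRT).

          A : (n_rays, n_voxels) non-negative system matrix of the geometry
          b : measured projection data
          Update: x <- x + C A^T R (b - A x), with inverse row/column sums
          R and C as weights.
          """
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x += C * (A.T @ (R * (b - A @ x)))
              np.clip(x, 0.0, None, out=x)       # enforce non-negativity
          return x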

  1. Algorithms and Results of Eye Tissues Differentiation Based on RF Ultrasound

    PubMed Central

    Jurkonis, R.; Janušauskas, A.; Marozas, V.; Jegelevičius, D.; Daukantas, S.; Patašius, M.; Paunksnis, A.; Lukoševičius, A.

    2012-01-01

    Algorithms and software were developed for the analysis of B-scan ultrasonic signals acquired from a commercial diagnostic ultrasound system. The algorithms process raw ultrasonic signals in the backscattered spectrum domain, which is obtained using two time-frequency methods: the short-time Fourier and Hilbert-Huang transformations. The signals from selected regions of eye tissues are characterized by the following parameters: B-scan envelope amplitude, approximated spectral slope, approximated spectral intercept, mean instantaneous frequency, mean instantaneous bandwidth, and the parameters of the Nakagami distribution characterizing the Hilbert-Huang transformation output. The backscattered ultrasound signal parameters characterizing intraocular and orbit tissues were processed by a decision tree data mining algorithm. The pilot trial proved that the applied methods are able to correctly classify signals from corpus vitreum blood, extraocular muscle, and orbit tissues. In 26 cases of ocular tissue classification into the classes of corpus vitreum blood, extraocular muscle, and orbit tissue, one error occurred. In this pilot classification, the spectral intercept and the Nakagami parameter for the instantaneous frequency distribution of the 1st intrinsic mode function were found to be specific to corpus vitreum blood, orbit, and extraocular muscle tissues. We conclude that ultrasound data should be further collected in a clinical database to establish the background for a decision support system for noninvasive ocular tissue differentiation. PMID:22654643
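
    As an illustration of the final classification stage, the sketch below trains a depth-limited decision tree on a feature table whose columns follow the parameters listed in the abstract; the data here are random placeholders, not the study's measurements.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      X = rng.normal(size=(26, 6))          # placeholder feature values
      y = rng.integers(0, 3, size=26)       # 0 blood, 1 muscle, 2 orbit

      clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(clf, feature_names=[
          "envelope", "slope", "intercept", "mean_if", "mean_ib", "nakagami_m"]))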

  2. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    NASA Astrophysics Data System (ADS)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution would help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges from structures like the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. Then, to accurately estimate the kidney morphology, we use textural information in a machine learning framework based on Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases; performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm performs successfully on 80% of the cases.
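
    The texture stage can be illustrated with a toy two-rectangle Haar-like feature computed from an integral image and fed to scikit-learn's gradient boosting classifier; the patches, labels, and feature positions below are placeholders, not the paper's training data.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      def haar_feature(img, r, c, h, w):
          """Two-rectangle Haar-like feature: left half minus right half."""
          ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
          def rect(r0, c0, r1, c1):             # sum of img[r0:r1, c0:c1]
              return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
          mid = c + w // 2
          return rect(r, c, r + h, mid) - rect(r, mid, r + h, c + w)

      rng = np.random.default_rng(1)
      patches = rng.random((200, 32, 32))       # hypothetical image patches
      labels = rng.integers(0, 2, size=200)     # kidney vs. background
      X = np.array([[haar_feature(p, 0, 0, 32, 32),
                     haar_feature(p, 8, 8, 16, 16)] for p in patches])
      clf = GradientBoostingClassifier().fit(X, labels)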

  3. Robust crop and weed segmentation under uncontrolled outdoor illumination.

    PubMed

    Jeon, Hong Y; Tian, Lei F; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection processes included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A machine-vision-equipped field robot captured field images under outdoor illumination, and the image processing algorithm processed them automatically, without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of the crop plants among the identified plants and considered the rest to be weeds. However, the ANN identification rate for crop plants improved to 95.1% once the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA).
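
    The first two steps of the pipeline, excess green conversion followed by a statistically estimated threshold, can be sketched as follows; Otsu's criterion stands in here for the paper's threshold estimation, which is an assumption on our part.

      import numpy as np

      def vegetation_mask(rgb):
          """Normalized excess green index followed by an Otsu threshold."""
          rgb = rgb.astype(float)
          s = rgb.sum(axis=2, keepdims=True) + 1e-9
          r, g, b = np.moveaxis(rgb / s, 2, 0)   # chromatic coordinates
          exg = 2.0 * g - r - b                  # excess green index
          hist, edges = np.histogram(exg, bins=256)
          p = hist / hist.sum()
          centers = (edges[:-1] + edges[1:]) / 2.0
          w0, mu = np.cumsum(p), np.cumsum(p * centers)
          w1, mu_t = 1.0 - w0, mu[-1]
          valid = (w0 > 0) & (w1 > 0)
          between = np.zeros_like(w0)            # between-class variance
          between[valid] = (mu_t * w0 - mu)[valid] ** 2 / (w0 * w1)[valid]
          return exg > centers[np.argmax(between)]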

  4. Application of Environmental Scanning Electron Microscope-Nanomanipulation System on Spheroplast Yeast Cells Surface Observation.

    PubMed

    Rad, Maryam Alsadat; Ahmad, Mohd Ridzuan; Nakajima, Masahiro; Kojima, Seiji; Homma, Michio; Fukuda, Toshio

    2017-01-01

    The preparation and observation of spheroplast W303 cells using an Environmental Scanning Electron Microscope (ESEM) are described. The spheroplast conversion was successfully confirmed qualitatively, by evaluating the morphological change between normal W303 cells and spheroplast W303 cells, and quantitatively, by determining the spheroplast conversion percentage from the OD 800 absorbance data. As expected from the optical microscope observations, the normal cells had an oval shape whereas the spheroplast cells resembled spheres. This was confirmed in four different media: yeast peptone-dextrose (YPD), sterile water, sorbitol-EDTA-sodium citrate buffer (SCE), and sorbitol-Tris-HCl-CaCl2 (CaS). It was also observed that the SCE and CaS media yielded a higher number of spheroplast cells than the YPD and sterile water media. The OD 800 absorbance data also showed that the whole W303 cells were fully converted to spheroplast cells after about 15 minutes. The normal and spheroplast W303 cells were then observed under the ESEM. The normal cells showed a smooth cell surface, whereas the spheroplast cells had a bleb-like surface after losing their integrity upon removal of the cell wall.

  5. ScanRanker: Quality Assessment of Tandem Mass Spectra via Sequence Tagging

    PubMed Central

    Ma, Ze-Qiang; Chambers, Matthew C.; Ham, Amy-Joan L.; Cheek, Kristin L.; Whitwell, Corbin W.; Aerni, Hans-Rudolf; Schilling, Birgit; Miller, Aaron W.; Caprioli, Richard M.; Tabb, David L.

    2011-01-01

    In shotgun proteomics, protein identification by tandem mass spectrometry relies on bioinformatics tools. Despite recent improvements in identification algorithms, a significant number of high quality spectra remain unidentified for various reasons. Here we present ScanRanker, an open-source tool that evaluates the quality of tandem mass spectra via sequence tagging, with reliable performance on data from different instruments. The superior performance of ScanRanker enables it not only to find unassigned high quality spectra that evade identification through database search, but also to select spectra for de novo sequencing and cross-linking analysis. In addition, we demonstrate that the distribution of ScanRanker scores predicts the richness of identifiable spectra among multiple LC-MS/MS runs in an experiment, and that ScanRanker scores assist peptide assignment validation, increasing the number of confident spectrum identifications. The source code and executable versions of ScanRanker are available from http://fenchurch.mc.vanderbilt.edu. PMID:21520941

  6. Application of gamma imaging techniques for the characterisation of position sensitive gamma detectors

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Didierjean, F.; Duchêne, G.; Filliger, M.; Gerl, J.; Kojouharov, I.; Li, G.; Pietralla, N.; Schaffner, H.; Sigward, M.-H.

    2017-11-01

    A device to characterize position-sensitive germanium detectors has been implemented at GSI. The main component of this so-called scanning table is a gamma camera that is capable of producing online 2D images of the scanned detector by means of a PET technique. Compton imaging is employed to calibrate the gamma camera. The 2D data can be processed further offline to obtain depth information. Of main interest is the response of the scanned detector in terms of the digitized pulse shapes from the preamplifier. This is an important input for pulse-shape analysis algorithms such as those used by gamma-ray tracking arrays in gamma spectroscopy. To validate the scanning table, a comparison of its results with those of a second scanning table, implemented at the IPHC Strasbourg, is envisaged. For this purpose a pixelated germanium detector has been scanned.

  7. Automated Analysis of Barley Organs Using 3D Laser Scanning: An Approach for High Throughput Phenotyping

    PubMed Central

    Paulus, Stefan; Dupuis, Jan; Riedel, Sebastian; Kuhlmann, Heiner

    2014-01-01

    Due to the rise of laser scanning, the 3D geometry of plant architecture is easy to acquire. Nevertheless, automated interpretation and, finally, segmentation into functional groups are still difficult to achieve. Two barley plants were scanned in a time course, and the organs were separated by applying a histogram-based classification algorithm. The leaf organs were represented by meshing algorithms, while the stem organs were parameterized by a least-squares cylinder approximation. We introduced surface feature histograms, which achieved an accuracy of 96% for the separation of the barley organs, leaf and stem. This enables growth monitoring of barley plants in a time course. Its reliability was demonstrated by a comparison with manually fitted parameters, with a correlation of R2 = 0.99 for the leaf area and R2 = 0.98 for the cumulated stem height. A proof of concept has been given for its applicability to the detection of water stress in barley, where the extension growth of an irrigated and a non-irrigated plant was monitored. PMID:25029283
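
    The stem parameterization can be illustrated by a generic least-squares cylinder fit; the five-parameter form below (axis point, two direction components for a near-vertical axis, radius) is one common choice and an assumption, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import least_squares

      def fit_cylinder(pts):
          """Fit a near-vertical cylinder to stem points of shape (n, 3)."""
          def residuals(p):
              x0, y0, ax, ay, r = p
              d = np.array([ax, ay, 1.0])
              d /= np.linalg.norm(d)             # axis direction
              q = pts - np.array([x0, y0, 0.0])  # points relative to axis
              dist = np.linalg.norm(np.cross(q, d), axis=1)
              return dist - r                    # radial residuals
          p0 = np.array([pts[:, 0].mean(), pts[:, 1].mean(), 0.0, 0.0, 1.0])
          return least_squares(residuals, p0).x  # x0, y0, ax, ay, radius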

  8. Face recognition from unconstrained three-dimensional face images using multitask sparse representation

    NASA Astrophysics Data System (ADS)

    Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar

    2018-01-01

    We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded up robust features (SURF) algorithm to the shape index map of the depth representation, under real-world conditions, using only a single gallery sample per subject. First, the 3D scans are preprocessed; then SURF is applied to the shape index map to find interest points and their descriptors. Each 3D face scan is represented by its keypoint descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, the descriptors of a probe face scan are sparsely represented over the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach, which uses the SURF algorithm on the shape index map for face identification/authentication, is verified through an experimental investigation conducted on the Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves overall rank-one recognition rates of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
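
    The classification step follows the usual sparse representation scheme: code the probe descriptor over the gallery dictionary and assign the class whose atoms reconstruct it best. A single-descriptor sketch (the multitask extension pools residuals over all descriptors of a scan) might look like this:

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def src_classify(D, class_ids, y, k=10):
          """D: (dim, n_atoms) gallery dictionary; class_ids: (n_atoms,)
          subject label per atom; y: probe descriptor; k: sparsity level."""
          x = orthogonal_mp(D, y, n_nonzero_coefs=k)
          best, best_res = None, np.inf
          for c in np.unique(class_ids):
              xc = np.where(class_ids == c, x, 0.0)   # keep class-c atoms
              res = np.linalg.norm(y - D @ xc)        # class residual
              if res < best_res:
                  best, best_res = c, res
          return best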

  9. SAFER vehicle inspection: a multimodal robotic sensing platform

    NASA Astrophysics Data System (ADS)

    Page, David L.; Fougerolle, Yohan; Koschan, Andreas F.; Gribok, Andrei; Abidi, Mongi A.; Gorsich, David J.; Gerhart, Grant R.

    2004-09-01

    The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGVs). The targeted missions for these UGV systems include, but are not limited to, under-vehicle threat assessment, stand-off check-point inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this general overview, we focus on a specific problem in which we collect 3D range scans of under-vehicle carriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory behind these algorithms and present results from applying them to actual vehicle scans.

  10. An Automated Road Roughness Detection from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Angelats, E.

    2017-05-01

    Rough roads influence the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities to help them schedule maintenance and ensure maximum safety conditions for road users.
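
    The thresholding and filtering chain can be sketched with scikit-image; the choice of the lowest-intensity class as the roughness candidate and the structuring element size are assumptions for illustration.

      import numpy as np
      from skimage.filters import threshold_multiotsu
      from skimage.morphology import opening, disk

      def roughness_candidates(intensity_raster, classes=3):
          """Multi-level Otsu thresholding of the interpolated intensity
          raster, followed by a morphological opening to drop isolated
          pixels."""
          th = threshold_multiotsu(intensity_raster, classes=classes)
          segmented = np.digitize(intensity_raster, bins=th)
          candidate = segmented == 0          # assumed roughness class
          return opening(candidate, disk(3))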

  11. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. To provide a competitive acquisition alternative, however, SPEN still needs to exploit parallel imaging algorithms without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view, together with a new algorithm merging a super-resolved SPEN image reconstruction with SENSE multiple-receiver methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the spatial resolution, the SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case in single-scan imaging near tissue/air interfaces. PMID:24120293
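
    The SENSE side of the merged reconstruction reduces, per aliased pixel, to a small least-squares solve; a generic unfolding step (standard SENSE, not the authors' combined algorithm) looks like this:

      import numpy as np

      def sense_unfold(y, S):
          """Unfold one aliased pixel acquired with acceleration factor R.

          y : (n_coils,) complex coil measurements at the aliased pixel
          S : (n_coils, R) coil sensitivities at the R overlapped locations
          Returns the R recovered pixel values.
          """
          x, *_ = np.linalg.lstsq(S, y, rcond=None)
          return x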

  12. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03) when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully automated segmentation method applied to split detector AOSLO images.
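
    For reference, the agreement metric used above is Dice's coefficient; on binary detection masks it is computed as:

      import numpy as np

      def dice(a, b):
          """Dice's coefficient of two binary masks: 2|A∩B| / (|A| + |B|).
          (The paper scores matched cone detections; masks are used here
          only to illustrate the metric.)"""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())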

  13. Pulmonary parenchyma segmentation in thin CT image sequences with spectral clustering and geodesic active contour model based on similarity

    NASA Astrophysics Data System (ADS)

    He, Nana; Zhang, Xiaolong; Zhao, Juanjuan; Zhao, Huilan; Qiang, Yan

    2017-07-01

    While the popular thin-layer scanning technology of spiral CT has helped to improve the diagnosis of lung diseases, the large volumes of scan images produced by the technology also dramatically increase physicians' workload in lesion detection. Computer-aided diagnosis techniques such as lesion segmentation in thin CT sequences have been developed to address this issue, but it remains a challenge to achieve high segmentation efficiency and accuracy without much manual intervention. In this paper, we present our research on automated segmentation of the lung parenchyma with an improved geodesic active contour model based on similarity (GACBS). Combining a spectral clustering algorithm based on the Nyström method (SCN) with GACBS, our algorithm first extracts key image slices, then uses these slices to generate initial contours of the pulmonary parenchyma in un-segmented slices with an interpolation algorithm, and finally segments the lung parenchyma in those slices. Experimental results show that the segmentation results generated by our method are close to those of manual segmentation, with an average volume overlap ratio of 91.48%.
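
    The contour evolution stage can be illustrated with scikit-image's morphological geodesic active contour, used here only as a generic stand-in for the paper's similarity-based model (GACBS):

      import numpy as np
      from skimage.segmentation import (inverse_gaussian_gradient,
                                        morphological_geodesic_active_contour)

      def evolve_contour(ct_slice, init_mask, iterations=100):
          """Evolve an initial lung contour (e.g., interpolated from key
          slices) toward image edges."""
          g = inverse_gaussian_gradient(ct_slice.astype(float))
          return morphological_geodesic_active_contour(
              g, iterations, init_level_set=init_mask, smoothing=1, balloon=0)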

  14. Multitaper scan-free spectrum estimation using a rotational shear interferometer.

    PubMed

    Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J

    2006-05-01

    Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with an SNR of 70.1, at a significance level of 10^-4, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectrum estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature; this feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
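
    A basic multitaper spectrum estimate, the building block behind both the F-test detector and the spectrum estimator discussed here, averages periodograms of the data windowed by Slepian (DPSS) tapers; a generic sketch is:

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, fs, nw=4.0):
          """Average the periodograms of x windowed by the first 2*NW - 1
          orthonormal DPSS tapers (time-bandwidth product NW)."""
          n = len(x)
          tapers = dpss(n, nw, Kmax=int(2 * nw) - 1)     # (K, n) tapers
          spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
          return np.fft.rfftfreq(n, d=1.0 / fs), spectra.mean(axis=0)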

  15. Multitaper scan-free spectrum estimation using a rotational shear interferometer

    NASA Astrophysics Data System (ADS)

    Lepage, Kyle; Thomson, David J.; Kraut, Shawn; Brady, David J.

    2006-05-01

    Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9° from a source with an SNR of 70.1, at a significance level of 10^-4, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectrum estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature; this feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.

  16. Helios: a Multi-Purpose LIDAR Simulation Framework for Research, Planning and Training of Laser Scanning Operations with Airborne, Ground-Based Mobile and Stationary Platforms

    NASA Astrophysics Data System (ADS)

    Bechtold, S.; Höfle, B.

    2016-06-01

    In many technical domains of modern society, there is a growing demand for fast, precise and automatic acquisition of digital 3D models of a wide variety of physical objects and environments. Laser scanning is a popular and widely used technology to cover this demand, but it is also expensive and complex to use to its full potential. However, there might exist scenarios where the operation of a real laser scanner could be replaced by a computer simulation in order to save time and costs. Such scenarios include teaching and training of laser scanning, development of new scanner hardware and scanning methods, and generation of artificial scan data sets to support the development of point cloud processing and analysis algorithms. To test the feasibility of this idea, we have developed a highly flexible laser scanning simulation framework named the Heidelberg LiDAR Operations Simulator (HELIOS). HELIOS is implemented as a Java library and split into a core component and multiple extension modules. Extensible Markup Language (XML) is used to define scanner, platform and scene models and to configure the behaviour of the modules. Modules were developed and implemented for (1) loading of simulation assets and configuration (i.e., 3D scene models, scanner definitions, survey descriptions, etc.), (2) playback of XML survey descriptions, (3) TLS survey planning (i.e., automatic computation of recommended scanning positions) and (4) interactive real-time 3D visualization of simulated surveys. As a proof of concept, we show the results of two experiments: first, a survey planning test in a scene that was specifically created to evaluate the quality of the survey planning algorithm; second, a simulated TLS scan of a crop field in a precision farming scenario. The results show that HELIOS fulfills its design goals.

  17. Organ dose conversion coefficients for tube current modulated CT protocols for an adult population

    NASA Astrophysics Data System (ADS)

    Fu, Wanyi; Tian, Xiaoyu; Sahbaee, Pooyan; Zhang, Yakun; Segars, William Paul; Samei, Ehsan

    2016-03-01

    In computed tomography (CT), patient-specific organ dose can be estimated using a pre-calculated database of organ dose conversion coefficients (organ dose normalized by CTDIvol, the h factor), taking into account patient size and scan coverage. These conversion coefficients have previously been estimated for routine body protocol classes, grouped by scan coverage, across an adult population for fixed-tube-current CT. The coefficients do not, however, cover the widely utilized tube current (mA) modulation schemes, which significantly impact organ dose. This study aims to extend the database of h factors, and of the corresponding effective dose conversion coefficients normalized by dose length product (DLP) (k factors), to incorporate various tube current modulation strengths. Fifty-eight extended cardiac-torso (XCAT) phantoms were included in this study, representing the population anatomy variation found in clinical practice. Four mA profiles, representing weak to strong mA dependency on body attenuation, were generated for each phantom and protocol class. A validated Monte Carlo program was used to simulate the organ dose. The organ dose and effective dose were further normalized by CTDIvol and DLP to derive the h factors and k factors, respectively. The h factors and k factors were summarized in an exponential regression model as a function of body size. Such a population-based mathematical model can provide a comprehensive organ dose estimate given body size and CTDIvol. The model was integrated into version 2 of the iPhone app XCATdose, enhancing the first version, which was based upon fixed tube current. With the organ dose calculator, physicists, physicians, and patients can conveniently estimate organ dose.
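
    As an illustration of how such a database is used, the sketch below fits an assumed exponential form h(d) = a*exp(-b*d) to hypothetical (diameter, h factor) pairs and converts a CTDIvol into an organ dose; all numbers are placeholders, not the study's coefficients.

      import numpy as np
      from scipy.optimize import curve_fit

      def h_model(d, a, b):
          """Assumed exponential regression of the h factor vs. body size."""
          return a * np.exp(-b * d)

      d = np.array([20.0, 24.0, 28.0, 32.0, 36.0])   # effective diameters, cm
      h = np.array([1.8, 1.4, 1.1, 0.85, 0.65])      # hypothetical h factors
      (a, b), _ = curve_fit(h_model, d, h, p0=(3.0, 0.05))

      organ_dose = h_model(30.0, a, b) * 10.0        # h(d) * CTDIvol [mGy]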

  18. Image quality in low-dose coronary computed tomography angiography with a new high-definition CT scanner.

    PubMed

    Kazakauskaite, Egle; Husmann, Lars; Stehli, Julia; Fuchs, Tobias; Fiechter, Michael; Klaeser, Bernd; Ghadri, Jelena R; Gebhard, Catherine; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-02-01

    A new generation of high-definition computed tomography (HDCT) 64-slice devices, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution compared to standard-definition CT (SDCT) scanners. As higher resolution confers higher noise, we compared the image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluable; 4, non-evaluable). Effective radiation dose was calculated as the DLP multiplied by a conversion factor (0.014 mSv/(mGy·cm)). The mean image quality scores from HDCT and SDCT were comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not significantly differ between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners allow low-dose 64-slice CCTA with higher resolution than SDCT while maintaining image quality and an equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
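
    The dose calculation quoted above is a one-line conversion; for example, with a hypothetical DLP of 121 mGy·cm:

      # Effective dose E = DLP * k, with k = 0.014 mSv/(mGy*cm) for the chest
      dlp = 121.0                   # hypothetical dose length product, mGy*cm
      k = 0.014                     # conversion factor used in the study
      effective_dose = dlp * k      # about 1.7 mSv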

  19. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing, because point clouds scanned from multiple scan stations or by different platforms need to be transformed into a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for the automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS registration) and for alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS registration). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space of the GA. A new fitness function for evaluating candidate solutions, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration is 3-5 mm for TLS-TLS point clouds and 2-4 cm for TLS-MLS point clouds. A registration scheme integrating the existing well-known ICP algorithm with the GA is further proposed to accelerate the optimization; it decreases the optimization time by about 50%. PMID:28850100
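
    The GA search itself can be sketched in a few lines; the toy version below restricts the rotation to yaw (reflecting the quasi-horizontal constraint) and scores a transform by the negative mean nearest-neighbour distance, which is only a stand-in for the paper's Normalized Sum of Matching Scores fitness.

      import numpy as np
      from scipy.spatial import cKDTree

      def register_ga(src, dst, pop=60, gens=150, sigma=0.5):
          """Toy GA over rigid transforms (tx, ty, tz, yaw)."""
          tree = cKDTree(dst)

          def apply(p, pts):
              tx, ty, tz, yaw = p
              c, s = np.cos(yaw), np.sin(yaw)
              R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
              return pts @ R.T + np.array([tx, ty, tz])

          def fitness(p):
              d, _ = tree.query(apply(p, src))
              return -d.mean()                 # closer clouds score higher

          P = np.random.randn(pop, 4) * sigma  # initial population
          for _ in range(gens):
              scores = np.array([fitness(p) for p in P])
              elite = P[np.argsort(scores)[-pop // 2:]]
              children = elite + np.random.randn(*elite.shape) * sigma * 0.2
              P = np.vstack([elite, children]) # elitism + mutation
          return max(P, key=fitness)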

  20. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing, because point clouds scanned from multiple scan stations or by different platforms need to be transformed into a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for the automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS registration) and for alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS registration). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor during data acquisition are used as constraints to narrow the search space of the GA. A new fitness function for evaluating candidate solutions, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration is 3-5 mm for TLS-TLS point clouds and 2-4 cm for TLS-MLS point clouds. A registration scheme integrating the existing well-known ICP algorithm with the GA is further proposed to accelerate the optimization; it decreases the optimization time by about 50%.
