Sample records for initial image processing

  1. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review

    PubMed Central

    Sheridan, Heather; Reingold, Eyal M.

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise. PMID:29033865

  2. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  3. Digital imaging technology assessment: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.

  4. Processing Translational Motion Sequences.

    DTIC Science & Technology

    1982-10-01

    ...the initial ROADSIGN image using a ∇²G mask with a width of 5 pixels. The distinctiveness values were computed using features which were 5x5 pixel... the initial step size of the local search quite large... The following experiments were performed using the roadsign and industrial... the initial image of the sequence. The third experiment involves processing the roadsign image sequence using the features extracted at the positions...

  5. Automated imaging system for single molecules

    DOEpatents

    Schwartz, David Charles; Runnheim, Rodney; Forrest, Daniel

    2012-09-18

    There is provided a high throughput automated single molecule image collection and processing system that requires minimal initial user input. The unique features embodied in the present disclosure allow automated collection and initial processing of optical images of single molecules and their assemblies. Correct focus may be automatically maintained while images are collected. Uneven illumination in fluorescence microscopy is accounted for, and an overall robust imaging operation is provided yielding individual images prepared for further processing in external systems. Embodiments described herein are useful in studies of any macromolecules such as DNA, RNA, peptides and proteins. The automated image collection and processing system and method of same may be implemented and deployed over a computer network, and may be ergonomically optimized to facilitate user interaction.

  6. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Astrophysics Data System (ADS)

    Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.

    2016-12-01

    The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
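    The offset-estimation step described above rests on image correlation between collected and reference images. The details of the IPATS correlation method are not given here; as a generic stand-in, a minimal FFT-based phase-correlation sketch recovers an integer-pixel offset between two registered frames:

```python
import numpy as np

def phase_correlation_offset(reference, image):
    """Estimate the cyclic integer-pixel shift between two same-size images.

    Phase correlation: the normalized cross-power spectrum of two shifted
    images is a pure phase ramp whose inverse FFT peaks at the shift.
    """
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(image))
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the array midpoint correspond to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Synthetic check: shift a random image cyclically by (3, -5), then recover it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(3, -5), axis=(0, 1))
offset = phase_correlation_offset(shifted, ref)   # recovers (3, -5)
```

    Sub-pixel refinement (e.g. fitting the correlation peak) would be needed for INR-grade accuracy; the sketch stops at integer pixels.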

  7. Novel Diffusion-Weighted MRI for High-Grade Prostate Cancer Detection

    DTIC Science & Technology

    2016-10-01

    ...in image resolution and scale. This process is critical for evaluating new imaging modalities. Our initial findings illustrate the potential of the... eligible for analysis as determined by adequate pathologic processing and MR images deemed to be of adequate quality by the study team. The... histology samples have been requested from the UIC biorepository for digitization... All MR images have been collected and prepared for image processing...

  8. Imaging initial formation processes of nanobubbles at the graphite-water interface through high-speed atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Liao, Hsien-Shun; Yang, Chih-Wen; Ko, Hsien-Chen; Hwu, En-Te; Hwang, Ing-Shouh

    2018-03-01

    The initial formation process of nanobubbles at solid-water interfaces remains unclear because of the limitations of current imaging techniques. To directly observe the formation process, an astigmatic high-speed atomic force microscope (AFM) was modified to enable imaging in the liquid environment. By using a customized cantilever holder, the resonance of small cantilevers was effectively enhanced in water. The proposed high-speed imaging technique yielded highly dynamic quasi-two-dimensional (2D) gas structures (thickness: 20-30 nm) initially at the graphite-water interface. The 2D structures were laterally mobile mainly within certain areas, but occasionally a gas structure might extensively migrate and settle in a new area. The 2D structures were often confined by substrate step edges in one lateral dimension. Eventually, all quasi-2D gas structures were transformed into cap-shaped nanobubbles of higher heights and reduced lateral dimensions. These nanobubbles were immobile and remained stable under continuous AFM imaging. This study demonstrated that nanobubbles could be stably imaged at a scan rate of 100 lines per second (640 μm/s).

  9. Initial Results from Fitting Resolved Modes using HMI Intensity Observations

    NASA Astrophysics Data System (ADS)

    Korzennik, Sylvain G.

    2017-08-01

    The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state-of-the-art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.

  10. Addressing the coming radiology crisis-the Society for Computer Applications in Radiology transforming the radiological interpretation process (TRIP) initiative.

    PubMed

    Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot

    2004-12-01

    The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.

  11. Determination of differential cross sections and kinetic energy release of co-products from central sliced images in photo-initiated dynamic processes.

    PubMed

    Chen, Kuo-mei; Chen, Yu-wei

    2011-04-07

    For photo-initiated inelastic and reactive collisions, dynamic information can be extracted from central sliced images of state-selected Newton spheres of product species. An analysis framework has been established to determine differential cross sections and the kinetic energy release of co-products from experimental images. When one of the reactants exhibits a high recoil speed in a photo-initiated dynamic process, the present theory can be employed to analyze central sliced images from ion imaging or three-dimensional sliced fluorescence imaging experiments. It is demonstrated that the differential cross section of a scattering process can be determined from the central sliced image by a double Legendre moment analysis, for either a fixed or a continuously distributed recoil speed in the center-of-mass reference frame. Simultaneous equations which lead to the determination of the kinetic energy release of co-products can be established from the second-order Legendre moment of the experimental image, once the differential cross section is extracted. The intensity distribution of the central sliced image, along with its outer and inner ring sizes, provides all the clues to decipher the differential cross section and the kinetic energy release of co-products.
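    The Legendre moment analysis underlying this approach can be illustrated with a generic numerical projection of an angular distribution onto Legendre polynomials (a sketch only; it does not reproduce the paper's double-moment formalism or its treatment of sliced-image geometry):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(theta, intensity, n_max):
    """Project an angular distribution I(theta) onto Legendre polynomials:
    beta_n = (2n + 1)/2 * integral of I(theta) P_n(cos theta) sin(theta) dtheta."""
    x = np.cos(theta)
    moments = []
    for n in range(n_max + 1):
        Pn = legendre.legval(x, [0] * n + [1])   # evaluate P_n(cos theta)
        f = intensity * Pn * np.sin(theta)
        # trapezoid rule over the theta grid
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))
        moments.append((2 * n + 1) / 2 * integral)
    return np.array(moments)

# Check against a known distribution I = 1 + 0.5 * P2(cos theta):
# orthogonality gives beta ≈ [1, 0, 0.5, 0, 0]
theta = np.linspace(0.0, np.pi, 2001)
intensity = 1.0 + 0.5 * legendre.legval(np.cos(theta), [0, 0, 1])
beta = legendre_moments(theta, intensity, 4)
```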

  12. Developing an ANSI standard for image quality tools for the testing of active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Greca, Joseph; Yam, Kevin; Weatherall, James C.; Smith, Peter R.; Smith, Barry T.

    2017-05-01

    In 2016, the millimeter wave (MMW) imaging community initiated the formation of a standard for millimeter wave image quality metrics. This new standard, American National Standards Institute (ANSI) N42.59, will apply to active MMW systems for security screening of humans. The Electromagnetic Signatures of Explosives Laboratory at the Transportation Security Laboratory is supporting the ANSI standards process via the creation of initial prototypes for round-robin testing with MMW imaging system manufacturers and experts. Results obtained for these prototypes will be used to inform the community and lead to consensus objective standards amongst stakeholders. Images collected with laboratory systems are presented along with results of preliminary image analysis. Future directions for object design, data collection and image processing are discussed.

  13. Diversity of the Marine Cyanobacterium Trichodesmium: Characterization of the Woods Hole Culture Collection and Quantification of Field Populations

    DTIC Science & Technology

    2009-09-01

    ...using her beadbeater, Sonya Dyhrman for being my initial biology advisor, Heidi Sosik for her advice on image processing, the residents of Watson... Phycobiliprotein absorption spectra... Image processing for automated cell counts... digital camera and Axiovision 4.6.3 software. Images were measured, and cell metrics were determined using the MATLAB Image Processing Toolbox.

  14. Automation of Cassini Support Imaging Uplink Command Development

    NASA Technical Reports Server (NTRS)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  15. RayPlus: a Web-Based Platform for Medical Image Processing.

    PubMed

    Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo

    2017-04-01

    Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are developing medical image processing algorithms and systems in order to deliver better results to the clinical community, including accurate clinical parameters or processed images derived from the original images. In this paper, we propose a web-based platform to present and process medical images. By using Internet and novel database technologies, authorized users can easily access medical images and carry out their processing workflows with powerful server-side computing, without any installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. Integration of our system allows much flexibility and convenience for both research and clinical communities.

  16. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
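    The initialization idea, image features feeding a random-forest regressor that predicts landmark positions, can be sketched on synthetic data. The feature vectors, dimensions, and targets below are made up for illustration and are not the paper's actual descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: each "volume" is summarized by a 16-dim feature vector,
# and the target is a 3-D landmark position correlated with those features.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 16))
W = rng.normal(size=(16, 3))
y = X @ W + 0.1 * rng.normal(size=(200, 3))      # landmark (x, y, z) per case

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])                      # train on 150 labelled cases
pred = model.predict(X[150:])                    # initial landmark estimates
```

    In the paper's setting the prediction would seed the segmentation model's initial pose rather than be used as a final answer.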

  17. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
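    The landmark-to-transform step can be sketched with SciPy's RBFInterpolator, whose 'thin_plate_spline' kernel realizes a thin plate spline warp from point correspondences. The landmark coordinates below are hypothetical, and a pure translation is used as the target so the result can be checked:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark correspondences between two 3-D image volumes
src = np.array([[10., 20., 30.], [40., 25., 32.], [22., 50., 28.],
                [35., 44., 60.], [15., 33., 55.], [48., 12., 41.]])
dst = src + np.array([2.0, -1.5, 3.0])   # here: a pure translation, for checking

# One thin-plate-spline interpolant maps arbitrary src-space points to dst-space
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
moved = warp(np.array([[25.0, 30.0, 40.0]]))   # ≈ [[27.0, 28.5, 43.0]]
```

    Because the TPS model contains an affine part, a pure translation between the landmark sets is reproduced exactly at every query point; with real, noisy landmarks the warp smoothly interpolates the correspondences instead.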

  18. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process reduces the noise in the initial weight maps and preserves more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
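    The three parts can be sketched in NumPy: initial weights from gradient and well-exposedness measures, refinement with a guided filter (implemented here with plain box filters in the style of He et al.), and a per-pixel weighted sum. Window sizes and parameters are illustrative, not the paper's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` via local
    linear models q = a*guide + b fitted in box windows."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_exposures(images, sigma=0.2):
    """Weighted-sum fusion of grayscale exposures in [0, 1]."""
    weights = []
    for img in images:
        gy, gx = np.gradient(img)
        gradient = np.hypot(gx, gy)                        # detail measure
        well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
        w = gradient * well_exposed + 1e-12                # initial weight map
        weights.append(guided_filter(img, w))              # source image as guide
    weights = np.clip(np.stack(weights), 1e-12, None)
    weights /= weights.sum(axis=0)                         # normalize per pixel
    return np.sum(weights * np.stack(images), axis=0)

# Two synthetic exposures of a horizontal-ramp scene
base = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
fused = fuse_exposures([np.clip(base - 0.3, 0, 1), np.clip(base + 0.3, 0, 1)])
```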

  19. DIRECT IMAGE PROCESSING OF CORRODING SURFACES APPLIED TO FRICTION STIR WELDING.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, H. S., et al.

    An in situ process for visually locating corrosion is presented. The process visually displays image differences obtained by subtracting one digitized image from another. The difference image shows only where changes have taken place during the period between the recording of the two images. Changes are due to both corrosion attack of the surface and concentration changes of dissolved corrosion products in solution. Indicators added to the solution assist by decorating sites of corrosion as diffusion and convection of the dissolved products increase the size of the affected region. A study of the initial stages of corrosion of a friction stir welded Al alloy 7075 has been performed using this imaging technique. Pitting potential measurements suggest that there was an initial increased sensitivity to corrosion. The difference image technique demonstrated that this was due to re-formation of the passive film, which in Zn-containing Al alloys occurs preferentially along flow-protected regions. The most susceptible region of the weld was found to be where both limited deformation and thermal transients are produced during welding.
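    The core of the technique, subtracting one registered digitized image from another and keeping only where the surface changed, is simple to sketch. The threshold and the synthetic frame contents below are illustrative:

```python
import numpy as np

def difference_image(before, after, threshold=0.05):
    """Highlight where a surface changed between two registered frames.

    The signed difference preserves direction: darkening (e.g. corrosion
    attack) and brightening (e.g. indicator-decorated dissolved products)
    show up with opposite signs.
    """
    diff = after.astype(float) - before.astype(float)
    changed = np.abs(diff) > threshold   # boolean mask of active sites
    return diff, changed

# Synthetic frames: a single dark "pit" appears between the two recordings
before = np.full((32, 32), 0.8)
after = before.copy()
after[10:14, 10:14] -= 0.3
diff, mask = difference_image(before, after)   # mask flags the 4x4 pit
```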

  20. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image block residual technique and consistency verification are used to detect the in-focus areas and obtain a decision map. The map then guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878

  1. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, an image block residual technique and consistency verification are used to detect the in-focus areas and obtain a decision map. The map then guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
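    The SML focus measure at the heart of the high-frequency fusion rule can be sketched as follows. This is a spatial-domain illustration only: the paper applies SML-based rules to DTCWT coefficients and adds block-residual refinement, neither of which is reproduced here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=5):
    """Sum-Modified-Laplacian (SML) focus measure.

    The modified Laplacian sums absolute second differences in x and y;
    summing it over a window scores local sharpness.
    """
    ml = (np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
          + np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)))
    return uniform_filter(ml, window) * window ** 2    # windowed sum

def fuse_by_sml(img_a, img_b, window=5):
    """Pick, per pixel, the source with the higher SML (a crude decision map)."""
    mask = sum_modified_laplacian(img_a, window) >= sum_modified_laplacian(img_b, window)
    return np.where(mask, img_a, img_b)

# Two half-sharp sources: each one is "in focus" (checkerboard detail) on one
# side and flat on the other; fusion should recover detail on both sides.
cols = np.arange(32)[None, :]
checker = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
a = np.where(cols < 16, checker, 0.5)
b = np.where(cols >= 16, checker, 0.5)
fused = fuse_by_sml(a, b)
```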

  2. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Technical Reports Server (NTRS)

    Isaacson, Peter J.; DeLuccia, Frank J.; Reth, Alan D.; Igli, David A.; Carter, Delano R.

    2016-01-01

    Post-launch alignment errors for the Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) on GOES-R may be too large for the image navigation and registration (INR) processing algorithms to function without an initial adjustment to calibration parameters. We present an approach that leverages a combination of user-selected image-to-image tie points and image correlation algorithms to estimate this initial launch-induced offset and calculate adjustments to the Line of Sight Motion Compensation (LMC) parameters. We also present an approach to generate synthetic test images, to which shifts and rotations of known magnitude are applied. Results of applying the initial alignment tools to a subset of these synthetic test images are presented. The results for both ABI and GLM are within the specifications established for these tools, and indicate that application of these tools during the post-launch test (PLT) phase of GOES-R operations will enable the automated INR algorithms for both instruments to function as intended.

  3. Eye vergence responses during a visual memory task.

    PubMed

    Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans

    2017-02-08

    In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence, depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s, and thereafter were presented with a series of single images. One half were repeat images - that is, they belonged to the initial set - and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge while scanning the set of images and during the presentation of the single images. The convergence was stronger for remembered images than for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in attentional processing of visual information.

  4. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    PubMed

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. RapidMiner, an open-source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied for the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  5. BgCut: automatic ship detection from UAV images.

    PubMed

    Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on a certain area of real UAV aerial images taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied in the automated processing of industrial images for related research.

  6. BgCut: Automatic Ship Detection from UAV Images

    PubMed Central

    Zhang, Zhengning; Feng, Zhiyong

    2014-01-01

    Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library covering images taken in different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also produces good segmentation. Furthermore, the model can be applied to the automated processing of industrial images in related research. PMID:24977182

  7. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for generating high-level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. This paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy on very high resolution optical satellite data as well.
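
The "super-fine positioning" step described above can be sketched as template matching by normalized cross-correlation: slide a small reference chip over a search window around the coarse GCP and keep the offset with the highest score. This is an illustrative stand-in, not the STORM implementation; arrays are plain lists of lists, and a real chain would add sub-pixel interpolation around the correlation peak.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    n = len(a) * len(a[0])
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db) if da and db else 0.0

def refine_gcp(window, chip):
    """Best (dy, dx) placement of the reference chip inside the window."""
    ch, cw = len(chip), len(chip[0])
    best, best_off = -2.0, (0, 0)
    for dy in range(len(window) - ch + 1):
        for dx in range(len(window[0]) - cw + 1):
            patch = [row[dx:dx + cw] for row in window[dy:dy + ch]]
            score = ncc(patch, chip)
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off, best
```

An exact copy of the chip inside the window scores 1.0, so the refined offset snaps to the chip's true location.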

  8. Transport Imaging in the One Dimensional Limit

    DTIC Science & Technology

    2006-06-01

    Spatial luminescence from single bottom-up GaN and ZnO nanowires deposited by metal-initiated metal-organic CVD on Au and SiO2 substrates is imaged. CL...this thesis were deposited by metal-initiated metal-organic CVD on Au and SiO2 substrates. The process was carried out with different reagents in...are reported.

  9. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    PubMed

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation of the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two steps into one. The method has low computational complexity for different kinds of medical images and high segmentation precision. It comprises four steps. First, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Second, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Third, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourth, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, with overall metrics UM = 0.9845, CM = 0.8142, and TM = 0.0726. The algorithm has great potential for performing the pre-processing and initial segmentation steps in various medical images, which is a premise for assisting physicians in detecting and diagnosing clinical cases.

  10. Neural Correlates of Top-Down Letter Processing

    ERIC Educational Resources Information Center

    Liu, Jiangang; Li, Jun; Zhang, Hongchuan; Rieth, Cory A.; Huber, David E.; Li, Wu; Lee, Kang; Tian, Jie

    2010-01-01

    This fMRI study investigated top-down letter processing with an illusory letter detection task. Participants responded whether one of a number of different possible letters was present in a very noisy image. After initial training that became increasingly difficult, they continued to detect letters even though the images consisted of pure noise,…

  11. Fast perceptual image hash based on cascade algorithm

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya

    2017-09-01

    In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied to image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception and must be robust against distortions caused by compression, noise, common signal processing, and geometric modifications. The main disadvantage of perceptual hashing is its high computational cost. The proposed cascade algorithm initializes image retrieval with short hashes, and then applies a full hash to the results of that first pass. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and computation time.
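
The cascade idea can be illustrated with hypothetical mean-threshold hashes (not the authors' exact construction): a cheap short hash prunes the database first, and the expensive full hash is computed only for the surviving candidates.

```python
def short_hash(pixels):
    # Cheap 8-bit hash: the first 8 pixels compared to the global mean.
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels[:8])

def full_hash(pixels):
    # "Full" hash: every pixel compared to the global mean.
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cascade_retrieve(query, database, short_t=2, full_t=8):
    """Return indices of database images perceptually close to the query."""
    qs, qf = short_hash(query), full_hash(query)
    # Stage 1: cheap filter with the short hash.
    candidates = [i for i, img in enumerate(database)
                  if hamming(short_hash(img), qs) <= short_t]
    # Stage 2: full hash only on the survivors.
    return [i for i in candidates
            if hamming(full_hash(database[i]), qf) <= full_t]
```

A slightly noisy copy of the query survives both stages, while a dissimilar image is rejected by the short hash before the full hash is ever computed.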

  12. A web-based computer aided system for liver surgery planning: initial implementation on RayPlus

    NASA Astrophysics Data System (ADS)

    Luo, Ming; Yuan, Rong; Sun, Zhi; Li, Tianhong; Xie, Qingguo

    2016-03-01

    At present, computer aided systems for liver surgery design and risk evaluation are widely used in clinical practice all over the world. However, most systems are local applications that run on high-performance workstations, and the images have to be processed offline. Compared with local applications, a web-based system is accessible anywhere and on a range of devices, regardless of their processing power or operating system. RayPlus (http://rayplus.life.hust.edu.cn), a B/S platform for medical image processing, was developed to give a jump start on web-based medical image processing. In this paper, we implement a computer aided system for liver surgery planning on the architecture of RayPlus. The system consists of a series of processing steps applied to CT images, including filtering, segmentation, visualization, and analysis. Each step is packaged into an executable program and runs on the server side. CT images in DICOM format are processed step by step, enabling interactive modeling in the browser with zero installation and server-side computing. The system allows users to semi-automatically segment the liver, intrahepatic vessels, and tumor from the pre-processed images. Then, surface and volume models are built to analyze the vessel structure and the relative position between adjacent organs. The results show that the initial implementation satisfactorily meets its first-order objectives and provides an accurate 3D delineation of the liver anatomy. Vessel labeling and resection simulation are planned as future additions. The system is available on the Internet at the link mentioned above, and an open username for testing is offered.

  13. Object segmentation using graph cuts and active contours in a pyramidal framework

    NASA Astrophysics Data System (ADS)

    Subudhi, Priyambada; Mukhopadhyay, Susanta

    2018-03-01

    Graph cuts and active contours are two very popular interactive object segmentation techniques in the field of computer vision and image processing. However, both approaches have well-known limitations. Graph cut methods perform efficiently, giving globally optimal segmentation results for smaller images. For larger images, however, huge graphs must be constructed, which not only takes an unacceptable amount of memory but also greatly increases the time required for segmentation. In the case of active contours, on the other hand, the selection of the initial contour plays an important role in the accuracy of the segmentation, so a proper initial contour may improve both the complexity and the accuracy of the result. In this paper, we combine these two approaches to overcome the drawbacks mentioned above and develop a fast object segmentation technique. We use a pyramidal framework and apply the min-cut/max-flow algorithm to the lowest-resolution image with the fewest seed points possible, which is very fast due to the smaller size of the image. The obtained segmentation contour is then super-sampled and used as the initial contour for the next higher-resolution image. As this initial contour is very close to the actual contour, fewer iterations are required for the contour to converge. The process is repeated for all the higher-resolution images, and experimental results show that our approach is faster as well as more memory-efficient than either graph cut or active contour segmentation alone.
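
The coarse-to-fine flow above can be sketched in a toy form. Here simple thresholding stands in for the min-cut/max-flow and active-contour steps (which are far beyond a short example): the "expensive" segmentation runs only on the smallest pyramid level, and the result is propagated upward.

```python
def downsample(img):
    """Halve each dimension by 2x2 averaging."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)] for y in range(h // 2)]

def upsample(mask):
    """Nearest-neighbour 2x upsampling of a binary mask."""
    return [[mask[y // 2][x // 2] for x in range(2 * len(mask[0]))]
            for y in range(2 * len(mask))]

def segment_pyramid(img, levels, threshold):
    # Build the pyramid down to the coarsest level.
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(downsample(pyramid[-1]))
    # "Expensive" segmentation only on the smallest image
    # (thresholding here; min-cut/max-flow in the paper).
    coarse = pyramid[-1]
    mask = [[1 if v > threshold else 0 for v in row] for row in coarse]
    # Propagate upward; the paper refines the contour at each level.
    for _ in range(levels):
        mask = upsample(mask)
    return mask
```

The point of the design is that per-level refinement only needs to correct a narrow band around the propagated contour, which is what keeps both memory and iteration counts low.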

  14. Automatic rice crop height measurement using a field server and digital image processing.

    PubMed

    Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit

    2014-01-07

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height from images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be measured indirectly from the images by comparing the visible height of the marker bar to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection removes redundant features. Filtering extracts significant features of the marker bar. Thresholding is applied to separate the object and boundaries of the marker bar from other areas. The detected marker bar is then compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
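
The final measurement step can be sketched as follows. This is a minimal illustration of the thresholding and reference comparison only (band selection and filtering are omitted); the column values and thresholds are hypothetical grayscale intensities, and the marker bar is assumed to appear dark against the scene.

```python
def bar_height_pixels(column, threshold=80):
    """Count visible (dark) marker-bar pixels in one image column."""
    return sum(1 for v in column if v < threshold)

def crop_height(column, initial_bar_px, bar_length_cm):
    """Crop height = how much of the reference bar the plants now hide."""
    visible_px = bar_height_pixels(column)
    hidden_px = initial_bar_px - visible_px
    cm_per_px = bar_length_cm / initial_bar_px
    return hidden_px * cm_per_px
```

For example, if the bar was initially 100 px tall for a 200 cm bar and only 60 px remain visible today, the plants cover 40 px, i.e. an estimated 80 cm of growth.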

  15. CT Imaging, Data Reduction, and Visualization of Hardwood Logs

    Treesearch

    Daniel L. Schmoldt

    1996-01-01

    Computed tomography (CT) is a mathematical technique that, combined with noninvasive scanning such as x-ray imaging, has become a powerful tool for nondestructively testing materials prior to use or evaluating materials prior to processing. In the current context, hardwood lumber processing can benefit greatly from knowing what a log looks like prior to initial breakdown....

  16. Status report: Data management program algorithm evaluation activity at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1977-01-01

    An algorithm evaluation activity was initiated to study the problems associated with image processing by assessing the independent and interdependent effects of registration, compression, and classification techniques on LANDSAT data for several discipline applications. The objective of the activity was to make recommendations on selected applicable image processing algorithms in terms of accuracy, cost, and timeliness or to propose alternative ways of processing the data. As a means of accomplishing this objective, an Image Coding Panel was established. The conduct of the algorithm evaluation is described.

  17. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥ 3 mm," "nodule < 3 mm," and "non-nodule ≥ 3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥ 3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.

  18. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
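
The reconstruction step in the pipeline above rests on the standard haze model, which can be sketched per pixel. This is not the authors' iterative refinement: the scattering coefficient `beta` and atmospheric light `A` are simply assumed known here, whereas the paper estimates the transmission from stereo disparity and the atmospheric light from color lines.

```python
import math

def transmission(depth, beta=0.5):
    """t(x) = exp(-beta * d(x)) for each pixel depth."""
    return [math.exp(-beta * d) for d in depth]

def add_fog(scene, t, airlight=255.0):
    """Forward haze model: I = J * t + A * (1 - t)."""
    return [j * ti + airlight * (1 - ti) for j, ti in zip(scene, t)]

def defog(image, t, airlight=255.0, t_min=0.1):
    """Invert the model: J = (I - A) / max(t, t_min) + A."""
    return [(i - airlight) / max(ti, t_min) + airlight
            for i, ti in zip(image, t)]
```

Fogging a toy scene and inverting it with the same transmission map recovers the original intensities; the `t_min` floor prevents amplifying noise in very dense fog, which is one reason the paper refines the transmission map iteratively instead of trusting the initial estimate.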

  19. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  20. Analysis of ROC on chest direct digital radiography (DR) after image processing in diagnosis of SARS

    NASA Astrophysics Data System (ADS)

    Lv, Guozheng; Lan, Rihui; Zeng, Qingsi; Zheng, Zhong

    2004-05-01

    The Severe Acute Respiratory Syndrome (SARS, also called infectious atypical pneumonia), which initially broke out in late 2002, has seriously threatened the public's health. How to confirm patients who have contracted SARS has become an urgent issue in diagnosis. This paper evaluates the importance of image processing in the early-stage diagnosis of SARS. Receiver operating characteristic (ROC) analysis was employed in this study to compare the value of DR images in the diagnosis of SARS patients before and after image processing by the Symphony software supplied by E-Com Technology Ltd.; DR image studies of 72 confirmed or suspected SARS patients were reviewed. All the images taken from the studied patients were processed by Symphony. Both the original and processed images were included in the ROC analysis, from which the ROC parameters for each group of images were obtained: for processed images, a = 1.9745, b = 1.4275, SA = 0.8714; for original images, a = 0.9066, b = 0.8310, SA = 0.7572 (a: intercept; b: slope; SA: area under the curve). The result shows a significant difference between the original and processed images (P < 0.01). In summary, the images processed by Symphony are superior to the original ones in detecting opacity lesions and increase the accuracy of SARS diagnosis.

  1. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, based on the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select proper initial clustering centers and the number of clusters by applying a mean-variance approach and rough set theory, followed by the clustering calculation, so as to automatically segment color components rapidly and extract target objects from the background accurately, which provides a reliable basis for the identification, analysis, follow-up calculation, and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the amount of computation and enhance the precision and accuracy of clustering.
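
The clustering step itself can be shown with a minimal 1-D k-means over pixel intensities. This sketches only the Lloyd iteration; the paper's HSI conversion and its mean-variance / rough-set selection of initial centers are not reproduced, so the centers are passed in directly here.

```python
def kmeans_1d(values, centers, iters=20):
    centers = list(centers)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda k: abs(v - centers[k]))
            clusters[i].append(v)
        # Update step: move each center to its cluster mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def segment(values, centers):
    """Label each pixel with the index of its nearest final center."""
    final = kmeans_1d(values, centers)
    return [min(range(len(final)), key=lambda k: abs(v - final[k]))
            for v in values]
```

With two well-separated intensity populations, the labels split the pixels into background and target clusters; the quality of the initial centers is exactly what the paper's mean-variance / rough-set step is designed to improve.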

  2. Imaging Girls: Visual Methodologies and Messages for Girls' Education

    ERIC Educational Resources Information Center

    Magno, Cathryn; Kirk, Jackie

    2008-01-01

    This article describes the use of visual methodologies to examine images of girls used by development agencies to portray and promote their work in girls' education, and provides a detailed discussion of three report cover images. It details the processes of methodology and tool development for the visual analysis and presents initial 'readings'…

  3. Using eye movements to investigate selective attention in chronic daily headache.

    PubMed

    Liossi, Christina; Schoth, Daniel E; Godwin, Hayward J; Liversedge, Simon P

    2014-03-01

    Previous research has demonstrated that chronic pain is associated with biased processing of pain-related information. Most studies have examined this bias by measuring response latencies. The present study extended previous work by recording eye movement behaviour in individuals with chronic headache and in healthy controls while participants viewed a set of images (i.e., facial expressions) from 4 emotion categories (pain, angry, happy, neutral). Biases in initial orienting were assessed from the location of the initial shift in gaze, and biases in the maintenance of attention were assessed from the duration of gaze on the picture that was initially fixated and from the mean number of visits and mean fixation duration per image category. The eye movement behaviour of the participants in the chronic headache group was characterised by a bias in the initial shift of orienting towards pain. There was no evidence of individuals with chronic headache visiting pain images more often, or spending significantly more time viewing them, compared to other images. Both participant groups showed a significantly greater bias to maintain gaze longer on happy images relative to pain, angry, and neutral images. Results are consistent with a pain-related bias that operates in the orienting of attention towards pain-related stimuli, and suggest that chronic pain participants' attentional biases for pain-related information are evident even when other emotional stimuli are present. Pain-related information-processing biases appear to be a robust feature of chronic pain and may have an important role in the maintenance of the disorder. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  4. Optical noise-free image encryption based on quick response code and high dimension chaotic system in gyrator transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Xu, Minjie; Tian, Ailing

    2017-04-01

    A novel optical image encryption scheme is proposed based on a quick response (QR) code and a high-dimension chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the QR code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted into ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the encryption process, parameters such as the rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial QR code in the decryption process, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with a mobile device. The ciphertext image is a real-valued function, which is more convenient to store and transmit. Meanwhile, the security of the proposed scheme is greatly enhanced due to the high sensitivity of the Chen system to its initial values. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
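
One ingredient of such schemes can be sketched in isolation: deriving a key-dependent sequence from the Euler-integrated Chen system and XOR-ing it with data bytes. This is an illustrative stand-in only; the paper's actual scheme uses gyrator transforms and phase masks rather than an XOR cipher, and the step size, scaling, and byte extraction below are arbitrary choices for the sketch.

```python
def chen_keystream(n, x=1.0, y=1.0, z=1.0, dt=0.001,
                   a=35.0, b=3.0, c=28.0):
    """n bytes from the Chen system dx=a(y-x), dy=(c-a)x-xz+cy, dz=xy-bz."""
    out = []
    for _ in range(n):
        for _ in range(100):          # 100 Euler steps per output byte
            dx = a * (y - x)
            dy = (c - a) * x - x * z + c * y
            dz = x * y - b * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out.append(int(abs(x) * 1e6) % 256)
    return out

def xor_cipher(data, key0):
    """Encrypt/decrypt a byte list by XOR with the chaotic keystream."""
    ks = chen_keystream(len(data), x=key0)
    return [d ^ k for d, k in zip(data, ks)]
```

XOR is its own inverse, so running the cipher twice with the same initial value `key0` recovers the data, while even a tiny change in `key0` diverges quickly under the chaotic dynamics, which is the sensitivity property the paper relies on.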

  5. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the need to process side-scan data in the field, and with the increased power and reduced cost of major workstations, a need was identified for an image processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  6. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and depends upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and for image analysis using either of the two approaches to low-cost LANDSAT data processing are described.

  7. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  8. Detecting Edges in Images by Use of Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2003-01-01

    A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that, to perceive unfamiliar objects or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
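
A toy sketch of the fuzzy-reasoning idea (the membership function below is hypothetical, not the NASA method): instead of a hard yes/no edge decision, each pixel's gradient magnitude is mapped to a degree of "edgeness" in [0, 1], and the decision is deferred to a final defuzzification step.

```python
def gradient(row):
    """Simple horizontal gradient magnitudes for one image row."""
    return [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]

def edge_membership(g, low=10.0, high=50.0):
    """Trapezoidal membership: 0 below `low`, 1 above `high`, linear between."""
    if g <= low:
        return 0.0
    if g >= high:
        return 1.0
    return (g - low) / (high - low)

def fuzzy_edges(row, alpha=0.5):
    """Defuzzify: keep positions whose edge membership exceeds alpha."""
    return [i for i, g in enumerate(gradient(row))
            if edge_membership(g) > alpha]
```

The adaptive character the abstract describes would come from tuning `low` and `high` to the data rather than fixing them in advance, as done here for illustration.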

  9. Ground based interferometric radar initial look at Longview, Blue Springs, Tuttle Creek, and Milford Dams

    NASA Astrophysics Data System (ADS)

    Deng, Huazeng

    Measuring millimeter and smaller deformation using RADAR has been demonstrated in the literature. To address in part the limitations of current commercial satellite-based SAR datasets, a University of Missouri (MU) team worked with GAMMA Remote Sensing to develop a specialized (dual-frequency, polarimetric, and interferometric) ground-based real-aperture RADAR (GBIR) instrument. The GBIR device is portable with its tripod system and control electronics. It can be deployed to obtain data with high spatial resolution (i.e. on the order of 1 meter) and high temporal resolution (i.e. on the order of 1 minute). The high temporal resolution is well suited to measurements of rapid deformation. From the same geodetic position, the GBIR can collect dual-frequency data sets using C-band and Ku-band. The overall goal of this project is to measure deformation in various scenarios by applying the GBIR system. Initial efforts have focused on testing the system's performance on different types of targets. This thesis details a number of my efforts in the experimental and processing activities at the start of the MU GBIR imaging project. For improved close-range capability, a wideband dual-polarized antenna option was produced and tested. For GBIR calibration, several trihedral corner reflectors were designed and fabricated. In addition to experimental activities and site selection, I participated in advanced data processing activities. I processed GBIR data in several ways, including single-look-complex (SLC) image generation, imagery registration, and interferometric processing. A number of initially processed GBIR image products are presented from four dams: Longview, Blue Springs, Tuttle Creek, and Milford. Excellent imaging performance of the MU GBIR has been observed for various target types such as riprap, concrete, soil, rock, metal, and vegetation. Strong coherence of the test scene has been observed in the initial interferograms.

  10. 3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.

    PubMed

    Moses, Yael; Shimshoni, Ilan

    2009-07-01

    We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.

  11. Remote sensing of a dynamic sub-arctic peatland reservoir using optical and synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Larter, Jarod Lee

    Stephens Lake, Manitoba is an example of a peatland reservoir that has undergone physical changes related to mineral erosion and peatland disintegration processes since its initial impoundment. In this thesis I focused on the processes of peatland upheaval, transport, and disintegration as the primary drivers of dynamic change within the reservoir. The changes related to these processes are most frequent after initial reservoir impoundment and decline over time. They continue to occur over 35 years after initial flooding. I developed a remote sensing approach that employs both optical and microwave sensors for discriminating land (i.e., floating peatlands, forested land, and barren land) from open water within the reservoir. High spatial resolution visible and near-infrared (VNIR) optical data obtained from the QuickBird satellite, and synthetic aperture radar (SAR) microwave data obtained from the RADARSAT-1 satellite were implemented. The approach was facilitated with a Geographic Information System (GIS) based validation map for the extraction of optical and SAR pixel data. Each sensor's extracted data set was first analyzed separately using univariate and multivariate statistical methods to determine the discriminant ability of each sensor. The initial analyses were followed by an integrated sensor approach; the development of an image classification model; and a change detection analysis. Results showed excellent (> 95%) classification accuracy using QuickBird satellite image data. Discrimination and classification of studied land cover classes using SAR image texture data resulted in lower overall classification accuracies (~60%). SAR data classification accuracy improved to > 90% when classifying only land and water, demonstrating SAR's utility as a land and water mapping tool. An integrated sensor data approach showed no considerable improvement over the use of optical satellite image data alone.
An image classification model was developed that could be used to map both detailed land cover classes and the land and water interface within the reservoir. Change detection analysis over a seven year period indicated that physical changes related to mineral erosion, peatland upheaval, transport, and disintegration, and operational water level variation continue to take place in the reservoir some 35 years after initial flooding. This thesis demonstrates the ability of optical and SAR satellite image remote sensing data sets to be used in an operational context for the routine discrimination of the land and water boundaries within a dynamic peatland reservoir. Future monitoring programs would benefit most from a complementary image acquisition program in which SAR images, known for their acquisition reliability under cloud cover, are acquired along with optical images given their ability to discriminate land cover classes in greater detail.

  12. High-Resolution Large-Field-of-View Ultrasound Breast Imager

    DTIC Science & Technology

    2013-06-01

    record the display of the AO detector for image processing and storage. The measured resolution is 400 microns. • The noise present in the imaging...

  13. New Topographic Maps of Io Using Voyager and Galileo Stereo Imaging and Photoclinometry

    NASA Astrophysics Data System (ADS)

    White, O. L.; Schenk, P. M.; Hoogenboom, T.

    2012-03-01

    Stereo and photoclinometry processing have been applied to Voyager and Galileo images of Io in order to derive regional- and local-scale topographic maps of 20% of the moon’s surface to date. We present initial mapping results.

  14. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing of a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing of the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
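    One common form of spatial post-processing of a classified image is a moving-window majority (mode) filter, which removes isolated misclassified pixels; the report's exact technique is not specified in this abstract, so the following is a generic sketch:

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def majority_filter(labels, size=3):
        """Replace each pixel's class with the most frequent class in its
        size x size neighborhood -- a simple spatial post-processing step
        that suppresses isolated classification errors."""
        def mode(values):
            vals, counts = np.unique(values, return_counts=True)
            return vals[np.argmax(counts)]
        return generic_filter(labels, mode, size=size)
    ```

    Applied to a class-label map, an isolated single-pixel class is absorbed by its surrounding majority class.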

  15. Comparison of magnetic resonance imaging and computed tomography in suspected lesions in the posterior cranial fossa.

    PubMed Central

    Teasdale, G. M.; Hadley, D. M.; Lawrence, A.; Bone, I.; Burton, H.; Grant, R.; Condon, B.; Macpherson, P.; Rowan, J.

    1989-01-01

    OBJECTIVE--To compare computed tomography and magnetic resonance imaging in investigating patients suspected of having a lesion in the posterior cranial fossa. DESIGN--Randomised allocation of newly referred patients to undergo either computed tomography or magnetic resonance imaging; the alternative investigation was performed subsequently only in response to a request from the referring doctor. SETTING--A regional neuroscience centre serving 2.7 million people. PATIENTS--1020 patients recruited between April 1986 and December 1987, all suspected by neurologists, neurosurgeons, or other specialists of having a lesion in the posterior fossa and referred for neuroradiology. The groups allocated to undergo computed tomography or magnetic resonance imaging were well matched in distributions of age, sex, specialty of referring doctor, investigation as an inpatient or an outpatient, suspected site of lesion, and presumed disease process; the referring doctor's confidence in the initial clinical diagnosis was also similar. INTERVENTIONS--After the patients had been imaged by either computed tomography or magnetic resonance (using a resistive magnet of 0.15 T) doctors were given the radiologist's report and a form asking if they considered that imaging with the alternative technique was necessary and, if so, why; it also asked for their current diagnoses and their confidence in them. MAIN OUTCOME MEASURES--Number of requests for the alternative method of investigation. Assessment of characteristics of patients for whom further imaging was requested and lesions that were suspected initially and how the results of the second imaging affected clinicians' and radiologists' opinions. RESULTS--Ninety-three of the 501 patients who initially underwent computed tomography were referred subsequently for magnetic resonance imaging whereas only 28 of the 493 patients who initially underwent magnetic resonance imaging were referred subsequently for computed tomography. 
Over the study the number of patients referred for magnetic resonance imaging after computed tomography increased but requests for computed tomography after magnetic resonance imaging decreased. The reason that clinicians gave most commonly for requesting further imaging by magnetic resonance was that the results of the initial computed tomography failed to exclude their suspected diagnosis (64 patients). This was less common in patients investigated initially by magnetic resonance imaging (eight patients). Management of 28 patients (6%) imaged initially with computed tomography and 12 patients (2%) imaged initially with magnetic resonance was changed on the basis of the results of the alternative imaging. CONCLUSIONS--Magnetic resonance imaging provided doctors with the information required to manage patients suspected of having a lesion in the posterior fossa more commonly than computed tomography, but computed tomography alone was satisfactory in 80% of cases... PMID:2506965

  16. 3D Texture Features Mining for MRI Brain Tumor Identification

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra

    2014-03-01

    Medical image segmentation is the process of extracting regions of interest and dividing an image into meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. For computer-aided diagnosis and therapy, medical image segmentation is a mandatory initial step. It is a sophisticated and challenging task because of the complex nature of medical images. Indeed, successful medical image analysis is heavily dependent on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM as the segmentation technique in the testing methodology.
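    A basic building block of 3D texture analysis is the gray-level co-occurrence matrix (GLCM) extended to volumes; the paper's exact 3D feature set is not specified in the abstract, so this minimal sketch for a single voxel offset is only illustrative (it assumes intensities in [0, 1) and non-negative offsets):

    ```python
    import numpy as np

    def glcm_3d(vol, offset=(0, 0, 1), levels=8):
        """Gray-level co-occurrence matrix of a 3D volume for one offset.
        vol: float volume with values in [0, 1); offset components >= 0."""
        q = np.clip((vol * levels).astype(int), 0, levels - 1)  # quantize
        dz, dy, dx = offset
        z, y, x = q.shape
        a = q[:z - dz, :y - dy, :x - dx]   # reference voxels
        b = q[dz:, dy:, dx:]               # voxels displaced by the offset
        glcm = np.zeros((levels, levels), dtype=int)
        np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # count co-occurring pairs
        return glcm
    ```

    Texture descriptors such as contrast, energy, or homogeneity are then computed from the normalized GLCM, typically over several offsets.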

  17. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
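    The background-subtraction step can be illustrated with a linear filter; here a Gaussian low-pass stands in for the paper's linear background estimator, and note that in the actual model the background is re-estimated dynamically as the contour evolves:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def difference_image(img, sigma=15.0):
        """Estimate a slowly varying background with a linear (Gaussian)
        filter and subtract it, yielding the difference image on which a
        global region-fitting term can operate."""
        background = gaussian_filter(img, sigma)
        return img - background, background
    ```

    On the difference image, an inhomogeneous but slowly varying background is largely removed, which is what makes a global fitting term usable.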

  18. Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening.

    PubMed

    R, GeethaRamani; Balasubramanian, Lakshmi

    2018-07-01

    Macula segmentation and fovea localization are among the primary tasks in retinal analysis, as these structures are responsible for detailed vision. Existing approaches required segmentation of other retinal structures, viz. the optic disc and blood vessels, for this purpose. This work avoids requiring knowledge of other retinal structures and applies data mining techniques to segment the macula. An unsupervised clustering algorithm is exploited for this purpose. Selection of initial cluster centres has a great impact on the performance of clustering algorithms; a heuristic-based clustering in which initial centres are selected based on measures describing the statistical distribution of the data is incorporated in the proposed methodology. The initial phase of the proposed framework includes image cropping, green channel extraction, contrast enhancement and application of mathematical closing. Then, the pre-processed image is subjected to heuristic-based clustering, yielding a binary map. The binary image is post-processed to eliminate unwanted components. Finally, the component with the minimum intensity is finalized as the macula, and its centre constitutes the fovea. The proposed approach outperforms existing works by reporting that 100% of HRF, 100% of DRIVE, 96.92% of DIARETDB0, 97.75% of DIARETDB1, 98.81% of HEI-MED, 90% of STARE and 99.33% of MESSIDOR images satisfy the 1R criterion, a standard adopted for evaluating the performance of macula and fovea identification. The proposed system thus helps ophthalmologists identify the macula, thereby facilitating detection of any abnormality present within the macula region.
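    The pre-processing stage (green-channel extraction, contrast enhancement, morphological closing) might be sketched as follows; min-max stretching stands in for whichever contrast-enhancement operator the authors used, and the structuring-element size is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing

    def preprocess_fundus(rgb):
        """Sketch of fundus pre-processing: take the green channel (highest
        vessel/macula contrast), stretch contrast to [0, 1], then apply a
        grey-scale morphological closing to suppress dark specks."""
        g = rgb[..., 1].astype(float)                  # green channel
        lo, hi = g.min(), g.max()
        stretched = (g - lo) / (hi - lo) if hi > lo else np.zeros_like(g)
        return grey_closing(stretched, size=(5, 5))    # assumed 5x5 window
    ```

    The closed image would then be fed to the clustering step that produces the binary macula map.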

  19. 15 CFR 721.2 - Recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... production, processing, consumption, export or import of chemicals. Each facility subject to inspection under... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the...

  20. 15 CFR 721.2 - Recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... production, processing, consumption, export or import of chemicals. Each facility subject to inspection under... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the...

  1. 15 CFR 721.2 - Recordkeeping.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... production, processing, consumption, export or import of chemicals. Each facility subject to inspection under... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the...

  2. 15 CFR 721.2 - Recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... production, processing, consumption, export or import of chemicals. Each facility subject to inspection under... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the...

  3. 15 CFR 721.2 - Recordkeeping.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... production, processing, consumption, export or import of chemicals. Each facility subject to inspection under... recognized as complete words or numbers. (iv) The system must preserve the initial image (including both... the system. (3) Requirements applicable to a system based on digital images. For systems based on the...

  4. Empowering potential: a theory of wellness motivation.

    PubMed

    Fleury, J D

    1991-01-01

    Data were collected from 29 individuals who were attempting to initiate and sustain programs of cardiac risk factor modification. Data were analyzed through the technique of constant comparative analysis. Empowering potential, the basic social process identified from the data, explained individual motivation to initiate and sustain cardiovascular health behavior. Empowering potential was a continuous process of individual growth and development which facilitated the emergence of new and positive health patterns. Within the process of empowering potential, individuals use a variety of strategies which guide the initiation and maintenance of health-related change. The process of empowering potential consists of three stages: appraising readiness, changing, and integrating change. Two categories occurred throughout the process of empowering potential: imaging and social support systems. These findings provide a better understanding of how motivated action is initiated and reinitiated over time.

  5. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    PubMed

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach with a multi-modality image-handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; having determined an initial-order approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
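    The MLEM algorithm mentioned for phantom calibration and surface reconstruction has a standard multiplicative update; a generic matrix-form sketch (not the authors' implementation) for a system y ≈ A x with non-negative x is:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Maximum-likelihood expectation-maximization iterations for
        y ≈ A @ x with a non-negativity constraint on x."""
        x = np.ones(A.shape[1])                      # flat initial estimate
        sens = A.T @ np.ones(A.shape[0])             # sensitivity (column sums)
        for _ in range(n_iter):
            proj = A @ x                             # forward projection
            ratio = y / np.maximum(proj, 1e-12)      # measured / estimated
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
        return x
    ```

    Because the update is multiplicative, the estimate stays non-negative, which suits photon-count data.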

  6. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

    Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that allows image filtering while preserving object borders. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined; for this purpose threshold processing, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, x-ray, and microbiological studies are shown. The test images show the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
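    A smoother combining an L2 data-fit term with a first-order squared-difference penalty, as the second stage describes, has a closed-form solution; this 1-D sketch illustrates the idea only (the paper's actual two-criteria filter and its fixed coefficients are not specified in the abstract):

    ```python
    import numpy as np

    def smooth_l2(f, lam=5.0):
        """Minimize ||u - f||^2 + lam * ||D u||^2, where D takes first-order
        differences; the minimizer solves (I + lam * D'D) u = f."""
        n = len(f)
        D = np.diff(np.eye(n), axis=0)     # (n-1, n) forward-difference matrix
        return np.linalg.solve(np.eye(n) + lam * D.T @ D, f)
    ```

    Constant regions pass through unchanged (the penalty is zero there), while sharp spikes are attenuated; edge preservation in the paper comes from switching to the fixed-coefficient filter near detected boundaries.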

  7. Toward Imaging of Small Objects with XUV Radiation

    NASA Astrophysics Data System (ADS)

    Sayrac, Muhammed; Kolomenski, Alexandre A.; Boran, Yakup; Schuessler, Hans

    The coherent diffraction imaging (CDI) technique has the potential to capture high-resolution images of nano- or micron-sized structures when using XUV radiation obtained by the high harmonic generation (HHG) process. When a small object is exposed to XUV radiation, a diffraction pattern of the object is created. Advances in coherent HHG enable photon fluxes sufficient for XUV imaging. Diffractive imaging with coherent tabletop XUV beams has made nanometer-scale resolution imaging possible by replacing the imaging optics with a computer reconstruction algorithm. In this study, we present our initial work on diffractive imaging using a tabletop XUV source. The initial investigation of imaging of a micron-sized mesh with an optimized HHG source is demonstrated. This work was supported in part by the Robert A. Welch Foundation Grant No. A1546 and the Qatar Foundation under the grant NPRP 8-735-1-154. M. Sayrac acknowledges support from the Ministry of National Education of the Republic of Turkey.

  8. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background: Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective: The aim of this study was to fulfill the neuroimaging community's need for a common platform to store, process, explore, and visualize their neuroimaging data and results using the Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods: The Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, is securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results: All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance images. 
The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer's Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the fields of Alzheimer's disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions: To our knowledge, there is no validated Web-based system offering all the services that the Neuroimaging Web Services Interface offers. The intent of the Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, the Neuroimaging Web Services Interface significantly augments the Alzheimer's Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer's disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. PMID:29699962
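The regional standardized uptake value ratio (SUVR) calculation validated in the study reduces, in essence, to a ratio of mean uptakes over labeled regions; a minimal sketch assuming a PET volume and an integer region-label map (the pipeline's actual region definitions come from the FreeSurfer parcellation):

```python
import numpy as np

def suvr(pet, labels, target, reference):
    """Regional standardized uptake value ratio: mean tracer uptake in the
    target region divided by mean uptake in the reference region."""
    return pet[labels == target].mean() / pet[labels == reference].mean()
```

In amyloid PET studies the reference region is commonly the cerebellum; here `target` and `reference` are just integer codes in the label map.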

  9. Processes Modifying Cratered Terrains on Pluto

    NASA Technical Reports Server (NTRS)

    Moore, J. M.

    2015-01-01

    The July encounter with Pluto by the New Horizons spacecraft permitted imaging of its cratered terrains at scales as fine as approximately 100 m/pixel, and in stereo. In the initial download of images, acquired at 2.2 km/pixel, widely distributed impact craters up to 260 km in diameter are seen in the near-encounter hemisphere. Many of the craters appear to be significantly degraded or infilled. Some craters appear partially destroyed, perhaps by erosion such as that associated with the retreat of scarps. Bright ice-rich deposits highlight some crater rims and/or floors. While the cratered terrains identified in the initial downloaded images are generally seen on high-to-intermediate albedo surfaces, the dark equatorial terrain informally known as Cthulhu Regio is also densely cratered. We will explore the range of possible processes that might have operated (or still be operating) to modify the landscape from an ancient pristinely cratered state to the present terrains revealed in New Horizons images. The sequence, intensity, and type of processes that have modified ancient landscapes are, among other things, the record of climate and volatile evolution throughout much of Pluto's existence. The deciphering of this record will be discussed. This work was supported by NASA's New Horizons project.

  10. Application of QC_DR software for acceptance testing and routine quality control of direct digital radiography systems: initial experiences using the Italian Association of Physicist in Medicine quality control protocol.

    PubMed

    Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa

    2009-12-01

    Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, the initial results obtained on equipment of different DDR manufacturers were reported. The protocol was developed to highlight even small discrepancies in standard operating performance.

  11. Quality initiatives: improving patient flow for a bone densitometry practice: results from a Mayo Clinic radiology quality initiative.

    PubMed

    Aakre, Kenneth T; Valley, Timothy B; O'Connor, Michael K

    2010-03-01

    Lean Six Sigma process improvement methodologies have been used in manufacturing for some time. However, Lean Six Sigma process improvement methodologies also are applicable to radiology as a way to identify opportunities for improvement in patient care delivery settings. A multidisciplinary team of physicians and staff conducted a 100-day quality improvement project with the guidance of a quality advisor. By using the framework of DMAIC (define, measure, analyze, improve, and control), time studies were performed for all aspects of patient and technologist involvement. From these studies, value stream maps for the current state and for the future were developed, and tests of change were implemented. Comprehensive value stream maps showed that before implementation of process changes, an average time of 20.95 minutes was required for completion of a bone densitometry study. Two process changes (ie, tests of change) were undertaken. First, the location for completion of a patient assessment form was moved from inside the imaging room to the waiting area, enabling patients to complete the form while waiting for the technologist. Second, the patient was instructed to sit in a waiting area immediately outside the imaging rooms, rather than in the main reception area, which is far removed from the imaging area. Realignment of these process steps, with reduced technologist travel distances, resulted in a 3-minute average decrease in the patient cycle time. This represented a 15% reduction in the initial patient cycle time with no change in staff or costs. Radiology process improvement projects can yield positive results despite small incremental changes.

  12. X-ray agricultural product inspection: segmentation and classification

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Lee, Ha-Woon

    1997-09-01

    Processing of real-time x-ray images of randomly oriented and touching pistachio nuts for product inspection is considered. We describe the image processing used to isolate individual nuts (segmentation). This involves a new watershed transform algorithm. Segmentation results on approximately 3000 x-ray (film) and real time x-ray (linescan) nut images were excellent (greater than 99.9% correct). Initial classification results on film images are presented that indicate that the percentage of infested nuts can be reduced to 1.6% of the crop with only 2% of the good nuts rejected; this performance is much better than present manual methods and other automated classifiers have achieved.
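    The paper separates touching nuts with a new watershed transform algorithm; as a generic stand-in (not the authors' method), touching convex blobs can be separated by thresholding the distance transform into one core per blob and then assigning every foreground pixel to its nearest core:

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def separate_touching(binary, core_frac=0.75):
        """Split touching blobs in a binary image: threshold the Euclidean
        distance transform to obtain one core per blob, label the cores,
        and give each foreground pixel the label of its nearest core."""
        dist = ndi.distance_transform_edt(binary)
        cores = dist > core_frac * dist.max()        # interior core regions
        seeds, n = ndi.label(cores)
        # indices of the nearest seed pixel for every image location
        _, idx = ndi.distance_transform_edt(seeds == 0, return_indices=True)
        labels = seeds[tuple(idx)] * binary
        return labels, n
    ```

    The `core_frac` parameter is an assumption; it must be large enough that the neck between touching blobs falls below the threshold.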

  13. Fractal dimension of trabecular bone projection texture is related to three-dimensional microarchitecture.

    PubMed

    Pothuaud, L; Benhamou, C L; Porion, P; Lespessailles, E; Harba, R; Levitz, P

    2000-04-01

    The purpose of this work was to understand how fractal dimension of two-dimensional (2D) trabecular bone projection images could be related to three-dimensional (3D) trabecular bone properties such as porosity or connectivity. Two alteration processes were applied to trabecular bone images obtained by magnetic resonance imaging: a trabeculae dilation process and a trabeculae removal process. The trabeculae dilation process was applied from the 3D skeleton graph to the 3D initial structure with constant connectivity. The trabeculae removal process was applied from the initial structure to an altered structure having 99% of porosity, in which both porosity and connectivity were modified during this second process. Gray-level projection images of each of the altered structures were simply obtained by summation of voxels, and fractal dimension (Df) was calculated. Porosity (phi) and connectivity per unit volume (Cv) were calculated from the 3D structure. Significant relationships were found between Df, phi, and Cv. Df values increased when porosity increased (dilation and removal processes) and when connectivity decreased (only removal process). These variations were in accordance with all previous clinical studies, suggesting that fractal evaluation of trabecular bone projection has real meaning in terms of porosity and connectivity of the 3D architecture. Furthermore, there was a statistically significant linear dependence between Df and Cv when phi remained constant. Porosity is directly related to bone mineral density and fractal dimension can be easily evaluated in clinical routine. These two parameters could be associated to evaluate the connectivity of the structure.
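    Fractal dimension of a binary pattern is commonly estimated by box counting; the paper analyzes gray-level projection textures with its own estimator, so the following is only the textbook binary sketch (it assumes a square image whose side is a power of two):

    ```python
    import numpy as np

    def box_counting_dimension(binary):
        """Estimate the fractal dimension of a 2-D binary pattern as the
        slope of log N(s) versus log(1/s) over dyadic box sizes s, where
        N(s) counts s x s boxes containing at least one foreground pixel."""
        n = binary.shape[0]                 # square image, side a power of two
        sizes, counts = [], []
        s = n // 2
        while s >= 1:
            blocks = binary.reshape(n // s, s, n // s, s).any(axis=(1, 3))
            sizes.append(s)
            counts.append(blocks.sum())
            s //= 2
        slope, _ = np.polyfit(np.log(1 / np.array(sizes, float)),
                              np.log(counts), 1)
        return slope
    ```

    A filled square yields a slope near 2 and a straight line near 1, matching the intuition that the dimension measures how occupancy scales with box size.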

  14. Aerospace Technology Innovation. Volume 10

    NASA Technical Reports Server (NTRS)

    Turner, Janelle (Editor); Cousins, Liz (Editor); Bennett, Evonne (Editor); Vendette, Joel (Editor); West, Kenyon (Editor)

    2002-01-01

    Whether finding new applications for existing NASA technologies or developing unique marketing strategies to demonstrate them, NASA's offices are committed to identifying unique partnering opportunities. Through their efforts NASA leverages resources through joint research and development, and gains new insight into the core areas relevant to all NASA field centers. One of the most satisfying aspects of my job comes when I learn of a mission-driven technology that can be spun-off to touch the lives of everyday people. NASA's New Partnerships in Medical Diagnostic Imaging is one such initiative. Not only does it promise to provide greater dividends for the country's investment in aerospace research, but also to enhance the American quality of life. This issue of Innovation highlights the new NASA-sponsored initiative in medical imaging. Early in 2001, NASA announced the launch of the New Partnerships in Medical Diagnostic Imaging initiative to promote the partnership and commercialization of NASA technologies in the medical imaging industry. NASA and the medical imaging industry share a number of crosscutting technologies in areas such as high-performance detectors and image-processing tools. Many of the opportunities for joint development and technology transfer to the medical imaging market also hold the promise for future spin back to NASA.

  15. A new level set model for cell image segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment the nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which speeds up the segmentation and yields satisfactory results in cell image processing.
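    The Otsu initialization step can be sketched with the standard between-class-variance criterion; the pixel values below are illustrative, not taken from the paper.

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance (Otsu, 1979).

    Class 0 is all levels <= t, class 1 is all levels > t.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance (scaled)
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Illustrative bimodal data: dark background pixels and bright nucleus pixels.
pixels = [10] * 500 + [12] * 300 + [200] * 150 + [205] * 50
t = otsu_threshold(pixels)
```

    The returned threshold falls between the two populations, so thresholding at `t` cleanly separates the 800 dark pixels from the 200 bright ones.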

  16. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier transform imaging spectrometer's optical design, the design of the spectral imaging payload, and its initial qualification testing. This paper discusses the onboard data processing, designed to reduce the amount of downloaded data by an order of magnitude and to demonstrate a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted on a second 6U VME card and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high-speed processing. A system architecture that offers both onboard real-time image processing and high-speed post-collection analysis of the spectral data has been developed. In addition to the onboard processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data and is integrated within the data compression process to search for uploadable feature descriptions.

  17. Combining wet etching and real-time damage event imaging to reveal the most dangerous laser damage initiator in fused silica.

    PubMed

    Hu, Guohang; Zhao, Yuanan; Liu, Xiaofeng; Li, Dawei; Xiao, Qiling; Yi, Kui; Shao, Jianda

    2013-08-01

    A reliable method, combining a wet etch process with real-time damage event imaging during a raster-scan laser damage test, has been developed to directly determine the most dangerous precursors inducing low-density laser damage at 355 nm in fused silica. It is revealed that ~16% of laser damage sites initiated at scratches, ~49% at digs, and ~35% at invisible defects. The morphologies of dangerous scratches and digs were compared with those of moderate ones. It is found that a local sharp variation at the edge, twist, or interior of a subsurface defect is the most dangerous laser damage precursor.

  18. Fast auto-focus scheme based on optical defocus fitting model

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min

    2018-04-01

    An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Considering the basic optical defocus principle, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.
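    The abstract does not name the contrast measure used to confirm the in-focus position; the squared-gradient measure below is one common choice and is shown purely as an assumption. A sharp pattern scores higher than a low-contrast version of the same pattern.

```python
def contrast_measure(img):
    """Sum of squared horizontal and vertical gray-level differences.

    Larger values indicate a sharper (better-focused) image.
    """
    h, w = len(img), len(img[0])
    score = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                score += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < h:
                score += (img[y + 1][x] - img[y][x]) ** 2
    return score

# Illustrative 8x8 test patterns: hard edges vs. the same pattern washed out.
sharp = [[0, 255] * 4 for _ in range(8)]
blurred = [[96, 160] * 4 for _ in range(8)]
```

    An auto-focus loop would evaluate this measure at candidate motor positions and keep the one with the highest score.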

  19. Image correlation method for DNA sequence alignment.

    PubMed

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques; among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded to their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: query and database become object and scene, respectively. An image correlation process is carried out to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper presents an initial research stage in which results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes, each represented by a 100 x 100 image (in total, a one-million-base-pair database), were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, the digital correlation process was a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, the digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
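    The encoding-and-correlation idea can be sketched in one dimension; the paper works with 2D 100 x 100 images, and the gray levels below are illustrative assumptions. A query sequence is mapped to gray intensities and slid along the scene, scoring each offset by normalized correlation.

```python
import math

def encode(seq, gray={'A': 32, 'C': 96, 'G': 160, 'T': 224}):
    """Map each base to a fixed gray intensity (values are illustrative)."""
    return [gray[b] for b in seq]

def best_alignment(query, scene):
    """Slide the query over the scene; score offsets by normalized correlation."""
    q, s = encode(query), encode(scene)
    qm = sum(q) / len(q)
    qv = [x - qm for x in q]
    qn = math.sqrt(sum(x * x for x in qv))
    best, best_off = -2.0, -1
    for off in range(len(s) - len(q) + 1):
        win = s[off:off + len(q)]
        wm = sum(win) / len(win)
        wv = [x - wm for x in win]
        wn = math.sqrt(sum(x * x for x in wv))
        if qn == 0 or wn == 0:
            continue
        r = sum(a * b for a, b in zip(qv, wv)) / (qn * wn)
        if r > best:
            best, best_off = r, off
    return best_off, best

off, score = best_alignment("GATTACA", "CCCCGATTACACCCC")
```

    An exact match yields a correlation of 1.0 at the true offset; in the optical version the sliding inner products would be produced by the correlator in parallel.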

  20. What difference reveals about similarity.

    PubMed

    Sagi, Eyal; Gentner, Dedre; Lovett, Andrew

    2012-08-01

    Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe how two images differ--that is, to state a difference between two images. Following structure-mapping theory, we propose that this dissociation arises from the multistage nature of the comparison process. Detecting that two images are different can be done in the initial (local-matching) stage, but only for pairs with low overlap; thus, "different" responses are faster for low-similarity than for high-similarity pairs. In contrast, identifying a specific difference generally requires a full structural alignment of the two images, and this alignment process is faster for high-similarity pairs. We described four experiments that demonstrate this dissociation and show that the results can be simulated using the Structure-Mapping Engine. These results pose a significant challenge for nonstructural accounts of similarity comparison and suggest that structural alignment processes play a significant role in visual comparison. Copyright © 2012 Cognitive Science Society, Inc.

  1. ImSyn: photonic image synthesis applied to synthetic aperture radar, microscopy, and ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Turpin, Terry M.; Lafuse, James L.

    1993-02-01

    ImSyn™ is an image synthesis technology developed and patented by Essex Corporation. ImSyn™ can provide compact, low-cost, and low-power solutions to some of the most difficult image synthesis problems existing today. The inherent simplicity of ImSyn™ enables the manufacture of low-cost and reliable photonic systems for imaging applications ranging from airborne reconnaissance to doctor's-office ultrasound. The initial application of ImSyn™ technology has been to SAR processing; however, it has a wide range of applications, such as image correlation, image compression, acoustic imaging, x-ray tomography (CAT, PET, SPECT), magnetic resonance imaging (MRI), microscopy, and range-doppler mapping (extended TDOA/FDOA). This paper describes ImSyn™ in terms of synthetic aperture microscopy and then shows how the technology can be extended to ultrasound and synthetic aperture radar. The synthetic aperture microscope (SAM) enables high-resolution three-dimensional microscopy with greater dynamic range than real-aperture microscopes. SAM produces complex image data, enabling the use of coherent image processing techniques. Most importantly, SAM produces the image data in a form that is easily manipulated by a digital image processing workstation.

  2. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. One primary challenge in nuclei segmentation is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. A two-step refined watershed algorithm is then applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
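    Morphological features such as area and compactness, of the kind the pipeline feeds to its Bayesian network, can be sketched as follows; the compactness convention (4*pi*area/perimeter^2) is one common choice and an assumption here. A compact blob scores higher than an elongated one, which is the sort of cue that separates single nuclei from clusters and artifacts.

```python
import math

def region_features(mask):
    """Area, 4-connected boundary length, and compactness
    (4*pi*area / perimeter**2; 1.0 for an ideal disc) of a binary mask."""
    h, w = len(mask), len(mask[0])
    area = perim = 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            # Count exposed pixel edges (the image border counts as background).
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    perim += 1
    return area, perim, 4 * math.pi * area / perim ** 2

square = [[1] * 8 for _ in range(8)]                              # compact blob
bar = [[1 if y < 2 else 0 for _ in range(8)] for y in range(8)]   # elongated blob
sq = region_features(square)
br = region_features(bar)
```

    In a real pipeline these values, together with major axis length, would be computed per connected component after thresholding.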

  3. Persistent recruitment of somatosensory cortex during active maintenance of hand images in working memory.

    PubMed

    Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B

    2018-07-01

    Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Ikonos Imagery Product Nonuniformity Assessment

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Zanoni, Vicki; Pagnutti, Mary; Holekamp, Kara; Smith, Charles

    2002-01-01

    During the early stages of the NASA Scientific Data Purchase (SDP) program, three approximately equal vertical stripes were observable in IKONOS imagery of highly spatially uniform sites. Although these effects appeared to be less than a few percent of the mean signal, several investigators requested new imagery. Over time, Space Imaging updated its processing to minimize these artifacts. This, however, produced differences in Space Imaging products derived from archive imagery processed at different times. Imagery processed before 2/22/01 uses one set of coefficients, while imagery processed after that date requires another set. Space Imaging produces its products from raw imagery, so changes in the ground processing over time can change the delivered digital number (DN) values, even for identical orders of a previously acquired scene. NASA Stennis initiated studies to investigate the magnitude of these artifacts and their changes over the lifetime of the system, both before and after processing updates.

  5. Reducing Wait Time for Lung Cancer Diagnosis and Treatment: Impact of a Multidisciplinary, Centralized Referral Program.

    PubMed

    Common, Jessica L; Mariathas, Hensley H; Parsons, Kaylah; Greenland, Jonathan D; Harris, Scott; Bhatia, Rick; Byrne, Suzanne C

    2018-06-04

    A multidisciplinary, centralized referral program was established at our institution in 2014 to reduce delays in lung cancer diagnosis and treatment following diagnostic imaging observed with the traditional, primary care provider-led referral process. The main objectives of this retrospective cohort study were to determine if referral to a Thoracic Triage Panel (TTP): 1) expedites lung cancer diagnosis and treatment initiation; and 2) leads to more appropriate specialist consultation. Patients with a diagnosis of lung cancer and initial diagnostic imaging between March 1, 2015, and February 29, 2016, at a Memorial University-affiliated tertiary care centre in St John's, Newfoundland, were identified and grouped according to whether they were referred to the TTP or managed through a traditional referral process. Wait times (in days) from first abnormal imaging to biopsy and treatment initiation were recorded. Statistical analysis was performed using the Wilcoxon rank-sum test. A total of 133 patients who met inclusion criteria were identified. Seventy-nine patients were referred to the TTP and 54 were managed by traditional means. There was a statistically significant reduction in median wait times for patients referred to the TTP. Wait time from first abnormal imaging to biopsy decreased from 61.5 to 36.0 days (P < .0001). Wait time from first abnormal imaging to treatment initiation decreased from 118.0 to 80.0 days (P < .001). The percentage of specialist consultations that led to treatment was also greater for patients referred to the TTP. A collaborative, centralized intake and referral program helps to reduce wait time for diagnosis and treatment of lung cancer. Copyright © 2018 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  6. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system stores several images that show specific symbols of this language, which are used to train a multilayer neural network with a backpropagation algorithm. Initially, the images are processed to adapt them and improve the discrimination performance of the network; this processing includes filtering, size reduction and noise elimination algorithms, as well as edge detection. The system is evaluated using signs that do not include movement in their representation.
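    A minimal backpropagation loop of the kind described (a small sigmoid network trained by gradient descent) can be sketched on a toy problem; the XOR data and the 2-4-1 architecture below are stand-in assumptions, not the paper's sign-image setup.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set (XOR) standing in for pre-processed sign images.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    o = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

def train_step(lr=0.5):
    """One backpropagation pass over the training set."""
    global b2
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)                    # output-layer delta
        for j in range(len(h)):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])      # hidden-layer delta
            W2[j] -= lr * d_o * h[j]
            for i in range(len(x)):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

before_loss = loss()
for _ in range(500):
    train_step()
after_loss = loss()
```

    The training error falls as the weights adapt; a real system would use one input per pre-processed pixel/edge feature and one output per sign class.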

  7. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing has prevailed; the core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research into and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the optimum one, the watershed algorithm, as an initialization. The algorithm is then modified by adjusting an area parameter and further combining the area parameter with a heterogeneity parameter. Several experiments are then carried out to show that the modified FNEA algorithm achieves better segmentation results than the traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
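    FNEA's multi-scale merging is driven by a heterogeneity criterion; the sketch below shows only the colour term (the increase in size-weighted gray-level standard deviation when two regions are fused). The shape term and band weights are omitted, and the exact convention is an assumption.

```python
import math

def stdev(vals):
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

def colour_merge_cost(r1, r2):
    """Increase in size-weighted gray-level heterogeneity if the two
    regions (given as lists of pixel values) are fused into one."""
    merged = r1 + r2
    return (len(merged) * stdev(merged)
            - (len(r1) * stdev(r1) + len(r2) * stdev(r2)))

# Merging spectrally similar regions is cheap; merging dissimilar ones is not.
similar = colour_merge_cost([100, 101, 102], [101, 102, 103])
different = colour_merge_cost([100, 101, 102], [200, 201, 202])
```

    In a multi-scale run, the merge with the smallest cost is accepted while it stays below a scale parameter; the watershed result supplies the starting regions.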

  8. Initial clinical testing of a multi-spectral imaging system built on a smartphone platform

    NASA Astrophysics Data System (ADS)

    Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David

    2016-03-01

    Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix, consisting of images at the various wavelengths, were acquired; image acquisition took 1-2 sec. Areas suspected for dysplasia under white-light imaging were biopsied, according to the standard of care, and the biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites was processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. These results suggest MSMC holds promise for cervical imaging.

  9. North by Northwestern: initial experience with PACS at Northwestern Memorial Hospital

    NASA Astrophysics Data System (ADS)

    Channin, David S.; Hawkins, Rodney C.; Enzmann, Dieter R.

    2000-05-01

    This paper describes the initial phases and configuration of the Picture Archive and Communication System (PACS) deployed at Northwestern Memorial Hospital. The primary goals of the project were to improve service to patients, improve service to referring physicians, and improve the process of radiology. Secondary goals were to enhance the academic mission and modernize institutional information systems. The system consists of a large number of heterogeneous imaging modalities sending imaging studies via DICOM to a GE Medical Systems PathSpeed PACS. The radiology department workflow is briefly described. The system currently stores approximately 140,000 studies and over 5 million images, growing by approximately 600 studies and 25,000 images per day. Data reflecting use of the short-term and long-term storage is provided.

  10. A Content Analysis of Television Ads: Does Current Practice Maximize Cognitive Processing?

    DTIC Science & Technology

    2008-12-11

    ads with arousing content such as sexual imagery and fatty/sweet food imagery have the potential to stress the cognitive processing system. When the...to examine differences in content arousal, this study included variables shown to elicit arousal—loved brands, sexual images, and fatty/sweet food...loved brands as well as ads with sexual and fatty/food images are not all the same—they are not likely to be equally arousing. Initially, brands were

  11. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with different hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  12. TheHiveDB image data management and analysis framework.

    PubMed

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-06

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative.

  13. TheHiveDB image data management and analysis framework

    PubMed Central

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-01

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative. PMID:24432000

  14. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, a Normalized Cross Correlation (NCC) function performs approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed with a K-means clustering algorithm to remove feature point pairs with obvious errors introduced during the approximate matching. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm improves matching accuracy while ensuring the real-time performance of the algorithm.
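    The NCC matching step can be sketched as follows; the patch values are illustrative assumptions, and a real implementation would scan windows of the second image around each Harris corner rather than compare a fixed candidate list.

```python
import math

def ncc(p, q):
    """Normalized cross-correlation of two equal-size patches (flattened)."""
    mp, mq = sum(p) / len(p), sum(q) / len(q)
    dp = [a - mp for a in p]
    dq = [b - mq for b in q]
    norm_p = math.sqrt(sum(a * a for a in dp))
    norm_q = math.sqrt(sum(b * b for b in dq))
    return sum(a * b for a, b in zip(dp, dq)) / (norm_p * norm_q)

def match(feature_patch, candidates):
    """Pick the candidate patch with the highest NCC score."""
    scores = [ncc(feature_patch, c) for c in candidates]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# A 3x3 patch (flattened) around a corner, and three candidate patches:
# a flat region, a near-identical patch, and an inverted-contrast patch.
corner = [10, 200, 10, 200, 10, 200, 10, 200, 10]
cands = [[50, 50, 60, 50, 50, 60, 50, 50, 60],
         [12, 198, 11, 201, 9, 199, 10, 202, 10],
         [200, 10, 200, 10, 200, 10, 200, 10, 200]]
idx, score = match(corner, cands)
```

    Because NCC subtracts the mean and normalizes, it is robust to brightness and contrast offsets between the two images being registered.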

  15. Computer imaging and workflow systems in the business office.

    PubMed

    Adams, W T; Veale, F H; Helmick, P M

    1999-05-01

    Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.

  16. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed on one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers of subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.

  17. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    KNOWLEDGE ... INFERENCE ... IMAGE ... DATABASE ... Automated Photointerpretation Testbed. Fig. 4.1.1-2: An Initial Segmentation of an Image ... Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis ... interpretation process. 5. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. 6. Object detection

  18. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and thereby localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
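    The "optimal ray projection method" mentioned above can be sketched as a small linear solve: the 3D point minimizing the sum of squared distances to a set of camera rays satisfies a 3x3 normal equation. The sketch below is illustrative only, not LMDB Builder code, and the two-ray setup in the usage note is a hypothetical test case.

```python
# Sketch: landmark triangulation by least-squares ray intersection.
# The point p minimizing sum_i ||(I - d_i d_i^T)(p - c_i)||^2 over rays
# (center c_i, unit direction d_i) solves A p = b with
# A = sum_i (I - d_i d_i^T) and b = sum_i (I - d_i d_i^T) c_i.
import math

def det3(M):
    # Determinant of a 3x3 matrix (cofactor expansion along row 0).
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(A, b):
    # Cramer's rule for a 3x3 linear system.
    D = det3(A)
    sol = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = b[i]
        sol.append(det3(Ak) / D)
    return sol

def triangulate(rays):
    """rays: list of (center, direction) 3-vectors; directions are normalized here."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for c, d in rays:
        n = math.sqrt(sum(x * x for x in d))
        d = [x / n for x in d]
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]  # (I - d d^T)_ij
                A[i][j] += m
                b[i] += m * c[j]
    return solve3(A, b)
```

    For two rays, one from (0,0,0) toward (1,1,1) and one from (2,0,0) with direction (-1,1,1), the solve recovers their intersection (1,1,1); with noisy rays it returns the least-squares point, which would then alternate with pose refinement as the abstract describes.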

  19. A modeling analysis program for the JPL table mountain Io sodium cloud

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Goldberg, B. A.

    1985-01-01

    Progress and achievements in the first year are discussed in three main areas: (1) review and assessment of the massive JPL Table Mountain Io sodium cloud data set, (2) formulation and execution of a plan to perform further processing of this data set, and (3) initiation of modeling activities. The complete 1976-79 and 1981 data sets are reviewed. Particular emphasis is placed on the superior 1981 Region B/C images which provide a rich base of information for studying the structure and escape of gases from Io as well as possible east-west and magnetic longitudinal asymmetries in the plasma torus. A data processing plan is developed and is undertaken by the Multimission Image Processing Laboratory of JPL for the purpose of providing a more refined and complete data set for our modeling studies in the second year. Modeling priorities are formulated and initial progress in achieving these goals is reported.

  20. Automation of image data processing. (Polish Title: Automatyzacja procesu przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities of automated processing of image data on the example of the PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs are used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in the local coordinate system or, using initial exterior orientation and measured control points, can provide image georeference in an external reference frame. In the case of non-metric images, it is also possible to carry out a self-calibration process at this stage. The image matching algorithm is also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM, and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software dividing the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps with predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics both for images obtained by a metric Vexcel camera and for a block of images acquired by a non-metric UAV system.

  1. Multispectral and geomorphic studies of processed Voyager 2 images of Europa

    NASA Technical Reports Server (NTRS)

    Meier, T. A.

    1984-01-01

    High resolution images of Europa taken by the Voyager 2 spacecraft were used to study a portion of Europa's dark lineations and the major white line feature Agenor Linea. Initial image processing of images 1195J2-001 (violet filter), 1198J2-001 (blue filter), 1201J2-001 (orange filter), and 1204J2-001 (ultraviolet filter) was performed at the U.S.G.S. Branch of Astrogeology in Flagstaff, Arizona. Processing was completed through the stages of image registration and color ratio image construction. Pixel printouts were used in a new technique of linear feature profiling to compensate for image misregistration through the mapping of features on the printouts. In all, 193 dark lineation segments were mapped and profiled. The more accurate multispectral data derived by this method was plotted using a new application of the ternary diagram, with orange, blue, and violet relative spectral reflectances serving as end members. Statistical techniques were then applied to the ternary diagram plots. The image products generated at LPI were used mainly to cross-check and verify the results of the ternary diagram analysis.
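    The ternary diagram plotting described above maps three end-member values (orange, blue, and violet relative reflectances) onto a point inside a triangle. A minimal sketch of that coordinate transform, with a hypothetical function name, not the authors' code:

```python
import math

def ternary_xy(a, b, c):
    """Map three non-negative end-member fractions (e.g. orange, blue,
    violet relative reflectances) to 2D coordinates in a unit-edge
    triangle with vertices A=(0,0), B=(1,0), C=(0.5, sqrt(3)/2)."""
    s = a + b + c                  # normalize so the fractions sum to 1
    a, b, c = a / s, b / s, c / s
    x = b + 0.5 * c
    y = (math.sqrt(3) / 2) * c
    return x, y
```

    A pure first end member lands on vertex A, a pure third end member on vertex C, and an equal mix on the triangle's centroid; clusters of plotted pixels can then be analyzed statistically, as the abstract describes.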

  2. Processing system of jaws tomograms for pathology identification and surgical guide modeling

    NASA Astrophysics Data System (ADS)

    Putrik, M. B.; Lavrentyeva, Yu. E.; Ivanov, V. Yu.

    2015-11-01

    The aim of the study is to create an image processing system which allows dentists to find pathological resorption and to build a surgical guide surface automatically. X-ray images of jaws from cone beam tomography or spiral computed tomography are the initial data for processing. One patient's examination can include up to 600 images (or tomograms), which is why the development of a processing system for fast automated search for pathologies is necessary. X-ray images can be useful not only for illness diagnostics but for treatment planning too. We have studied the case of dental implantation: for successful surgical manipulations, surgical guides are used. We have created a processing system that automatically builds jaw and teeth boundaries on the x-ray image. After this step, the obtained teeth boundaries are used for surgical guide surface modeling, and the jaw boundaries limit the area for further pathology search. The criterion for the presence of pathological resorption zones inside the limited area is based on statistical investigation. After the described actions, it is possible to manufacture the surgical guide using a 3D printer and apply it in a surgical operation.

  3. Alternative Packaging for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal oxide/semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image-detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The frontside integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitute a solution of the problem.

  4. Automated aerial image based CD metrology initiated by pattern marking with photomask layout data

    NASA Astrophysics Data System (ADS)

    Davis, Grant; Choi, Sun Young; Jung, Eui Hee; Seyfarth, Arne; van Doornmalen, Hans; Poortinga, Eric

    2007-05-01

    The photomask is a critical element in the lithographic image transfer process from the drawn layout to the final structures on the wafer. The non-linearity of the imaging process and the related MEEF impose a tight control requirement on the photomask critical dimensions. Critical dimensions can be measured in aerial images with hardware emulation. This is a more recent complement to the standard scanning electron microscope measurement of wafers and photomasks. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging that cannot be observed directly by SEM measurement of the mask. Aerial image measurement excludes the processing effects of printing and etching on the wafer. This presents a unique contribution to the difficult process control and modeling tasks in mask making. In the past, aerial image measurements have been used mainly to characterize the printability of mask repair sites. Development of photomask CD characterization with the AIMS™ tool was motivated by the benefit of MEEF sensitivity and the shorter feedback loop compared to wafer exposures. This paper describes a new application that includes: an improved interface for the selection of meaningful locations using the photomask and design layout data with the Calibre™ Metrology Interface, an automated recipe generation process, an automated measurement process, and automated analysis and result reporting on a Carl Zeiss AIMS™ system.

  5. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively special area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.

  6. Electronic Photography at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack; Judge, Nancianne

    1995-01-01

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
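    The look-up tables mentioned above remap stored 8-bit levels for a particular output device. A minimal, hypothetical sketch of applying such a LUT; the gamma-style curve here is illustrative, not one of the Langley device characterizations:

```python
def apply_lut(image, lut):
    """Remap 8-bit pixel values through a 256-entry lookup table,
    e.g. a device-specific tone curve. `image` is a list of rows."""
    assert len(lut) == 256
    return [[lut[p] for p in row] for row in image]

# Example: a simple gamma-style LUT built once, then applied per image.
gamma_lut = [min(255, round(255 * (v / 255) ** 0.5)) for v in range(256)]
```

    In practice one LUT would be characterized per output device, so the same standardized file can be rendered consistently on each of them.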

  7. An Integrated Approach to Indoor and Outdoor Localization

    DTIC Science & Technology

    2017-04-17

    A two-step process is proposed that performs an initial localization estimate, followed by particle filter based tracking. Initial localization is performed using WiFi and image observations. For tracking we ... source. ... mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to

  8. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    PubMed

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

    This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis, which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach for such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data, not to replace it. The workflow has three components:
    • Preparation of slides for microscopy.
    • Image recording.
    • Computerised image processing, where the initial part is, as usual, segmentation depending on the actual data product. Then comes identification of blobs, calculation of principal axes of blobs, symmetry operations, and projection onto a three-parameter egg-shape space.
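    The blob measurement step above (principal axes of segmented blobs) is commonly computed from second-order image moments. The function below is a generic illustration of that technique, not the authors' code:

```python
import math

def principal_axes(pixels):
    """Given blob pixel coordinates [(x, y), ...], return the orientation
    angle of the major axis (radians) and the two principal-axis lengths
    (square roots of the covariance eigenvalues), computed from
    second-order central moments."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n            # centroid
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    common = math.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + common             # covariance eigenvalues
    lam2 = (mu20 + mu02) / 2 - common
    return angle, math.sqrt(lam1), math.sqrt(lam2)
```

    An elongated spore-like blob yields a clear major/minor axis ratio, which could feed a downstream shape classifier such as the egg-shape projection mentioned in the abstract.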

  9. A novel image toggle tool for comparison of serial mammograms: automatic density normalization and alignment-development of the tool and initial experience.

    PubMed

    Honda, Satoshi; Tsunoda, Hiroko; Fukuda, Wataru; Saida, Yukihisa

    2014-12-01

    The purpose is to develop a new image toggle tool with automatic density normalization (ADN) and automatic alignment (AA) for comparing serial digital mammograms (DMGs). We developed an ADN and AA process to compare the images of serial DMGs. In image density normalization, a linear interpolation was applied by taking two points in high- and low-brightness areas. The alignment was calculated by determining the point of greatest correlation while shifting the alignment between the current and prior images. These processes were performed on a PC with a 3.20-GHz Xeon processor and 8 GB of main memory. We selected 12 suspected breast cancer patients who had undergone screening DMGs in the past. Automatic processing was retrospectively performed on these images, and two radiologists evaluated them subjectively. The processing of the developed algorithm took approximately 1 s per image. In our preliminary experience, two images could not be aligned properly. When images were aligned, toggling allowed differences between examinations to be detected easily. We developed a new tool to facilitate comparative reading of DMGs on a mammography viewing system. Using this tool for toggling comparisons might improve the interpretation efficiency of serial DMGs.
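    The two automatic steps described, linear density normalization between two reference brightness points and alignment by maximizing correlation over candidate shifts, can be sketched in 1D as follows. This is a simplified illustration, not the authors' implementation, and the function names are hypothetical:

```python
def density_normalize(row, lo_in, hi_in, lo_out=0.0, hi_out=1.0):
    """Linear density normalization between a low- and a high-brightness
    reference point, standing in for the ADN step."""
    scale = (hi_out - lo_out) / (hi_in - lo_in)
    return [lo_out + (p - lo_in) * scale for p in row]

def best_shift(cur, prior, max_shift=5):
    """Brute-force the integer shift of `prior` that maximizes the
    correlation (sum of products over the overlap) with `cur`,
    standing in for the AA step."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        overlap = [(cur[i], prior[i - s]) for i in range(len(cur))
                   if 0 <= i - s < len(prior)]
        score = sum(a * b for a, b in overlap)
        if score > best_score:
            best, best_score = s, score
    return best
```

    A 2D version would search shifts in both axes (and typically coarse-to-fine), but the principle, pick the shift with the greatest correlation, is the same.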

  10. Multi-parametric studies of electrically-driven flyer plates

    NASA Astrophysics Data System (ADS)

    Neal, William; Bowden, Michael; Explosive Trains; Devices Collaboration

    2015-06-01

    Exploding foil initiator (EFI) detonators function by the acceleration of a flyer plate, by the electrical explosion of a metallic bridge, into an explosive pellet. The length, and therefore time, scales of this shock initiation process are dominated by the magnitude and duration of the imparted shock pulse. To predict the dynamics of this initiation, it is critical to further understand the velocity, shape, and thickness of the flyer plate. This study uses multi-parametric diagnostics to investigate the geometry and velocity of the flyer plate upon impact, as well as the imparted electrical energy: photon Doppler velocimetry (PDV), dual-axis imaging, time-resolved impact imaging, and voltage and current measurements. The investigation challenges the validity of traditional assumptions about the state of the flyer plate at impact and discusses the improved understanding of the process.

  11. Localization of the transverse processes in ultrasound for spinal curvature measurement

    NASA Astrophysics Data System (ADS)

    Kamali, Shahrokh; Ungi, Tamas; Lasso, Andras; Yan, Christina; Lougheed, Matthew; Fichtinger, Gabor

    2017-03-01

    PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks such as transverse processes, but as bones have reduced visibility in ultrasound imaging, skeletal landmarks are typically segmented manually, which is an exceedingly laborious and long process. We propose an automatic algorithm to segment and localize the surface of bony areas in the transverse process for scoliosis in ultrasound. METHODS: The algorithm uses a cascade of filters to remove low-intensity pixels, smooth the image, and detect bony edges. By applying first differentiation, candidate bony areas are classified. The average intensity under each area correlates with the presence of a shadow, and areas with a strong shadow are kept for bone segmentation. The segmented images are used to reconstruct a 3-D volume representing the whole spinal structure around the transverse processes. RESULTS: A comparison between the manual ground truth segmentation and the automatic algorithm in 50 images showed a 0.17 mm average difference. The time to process all 1,938 images was about 37 s (0.0191 s/image), including reading the original sequence file. CONCLUSION: Initial experiments showed the algorithm to be sufficiently accurate and fast for segmentation of transverse processes in ultrasound for spinal curvature measurement. An extensive evaluation of the method is currently underway on images from a larger patient cohort and using multiple observers to produce ground truth segmentation.
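    The shadow criterion described in METHODS (keep a candidate bony area only if the region beneath it is dark) can be illustrated with a toy per-column sketch. The structure and thresholds here are hypothetical, not taken from the paper:

```python
def bone_surface(image, min_peak=120, shadow_max=30):
    """For each column, take the brightest pixel as a candidate bone
    surface and keep it only if the mean intensity below it is low
    (acoustic shadow). Returns {column: row} of accepted candidates.
    Thresholds are illustrative only."""
    surface = {}
    rows = len(image)
    for col in range(len(image[0])):
        column = [image[r][col] for r in range(rows)]
        peak_row = max(range(rows), key=lambda r: column[r])
        below = column[peak_row + 1:]
        if column[peak_row] >= min_peak and below:
            if sum(below) / len(below) <= shadow_max:
                surface[col] = peak_row
    return surface
```

    A bright reflector without a shadow beneath it (e.g. soft-tissue interface) is rejected, while a bright reflector over a dark column is accepted as bone, which mirrors the paper's use of shadow strength as the deciding feature.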

  12. The Next Generation of HLA Image Products

    NASA Astrophysics Data System (ADS)

    Gaffney, N. I.; Casertano, S.; Ferguson, B.

    2012-09-01

    We present the re-engineered pipeline based on existing and improved algorithms with the aim of improving processing quality, cross-instrument portability, data flow management, and software maintenance. The Hubble Legacy Archive (HLA) is a project to add value to the Hubble Space Telescope data archive by producing and delivering science-ready drizzled data products and source lists derived from these products. Initially, ACS, NICMOS, and WFPC2 data were combined using instrument-specific pipelines based on scripts developed to process the ACS GOODS data and a separate set of scripts to generate Source Extractor and DAOPhot source lists. The new pipeline, initially designed for WFC3 data, isolates instrument-specific processing and is easily extendable to other instruments and to generating wide-area mosaics. Significant improvements have been made in image combination using improved alignment, source detection, and background equalization routines. It integrates improved alignment procedures, a better noise model, and source list generation within a single code base. Wherever practical, PyRAF-based routines have been replaced with non-IRAF Python libraries (e.g. NumPy and PyFITS). The data formats have been modified to provide better and more consistent propagation of information from individual exposures to the combined products. A new exposure layer stores the effective exposure time for each pixel on the sky, which is key to properly interpreting combined images from diverse data that were not initially planned to be mosaicked. We worked to improve the validity of the metadata within our FITS headers for these products relative to standard IRAF/PyRAF processing. Any keywords that pertain to individual exposures have been removed from the primary and extension headers and placed in a table extension for more direct and efficient perusal.
This mechanism also allows for more detailed information on the processing of individual images to be stored and propagated providing a more hierarchical metadata storage system than key value pair FITS headers provide. In this poster we will discuss the changes to the pipeline processing and source list generation and the lessons learned which may be applicable to other archive projects as well as discuss our new metadata curation and preservation process.

  13. Safe patient handling in diagnostic imaging.

    PubMed

    Murphey, Susan L

    2010-01-01

    Raising awareness of the risk to diagnostic imaging personnel from manually lifting, transferring, and repositioning patients is critical to improving workplace safety and staff utilization. The aging baby boomer generation and growing bariatric population exacerbate the problem. Also, legislative initiatives are increasing nationwide for hospitals to implement safe patient handling programs. A management process designed to improve working conditions through implementing ergonomic programs can reduce losses and improve productivity and patient care outcome measures for imaging departments.

  14. Performance of the dark energy camera liquid nitrogen cooling system

    NASA Astrophysics Data System (ADS)

    Cease, H.; Alvarez, M.; Alvarez, R.; Bonati, M.; Derylo, G.; Estrada, J.; Flaugher, B.; Flores, R.; Lathrop, A.; Munoz, F.; Schmidt, R.; Schmitt, R. L.; Schultz, K.; Kuhlmann, S.; Zhao, A.

    2014-01-01

    The Dark Energy Camera imager and its cooling system were installed onto the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory in Chile in September 2012. The imager cooling system is an LN2 two-phase closed-loop cryogenic cooling system. The cryogenic circulation equipment is located off the telescope. Liquid nitrogen vacuum-jacketed transfer lines run up the outside of the telescope truss tubes to the imager inside the prime focus cage. The design of the cooling system, along with commissioning experiences and initial cooling system performance, is described. The LN2 cooling system with the DES imager was initially operated at Fermilab for testing, then shipped and tested in the Blanco Coudé room. Now the imager is operating inside the prime focus cage. It is shown that the cooling system sufficiently cools the imager in a closed-loop mode, which can operate for extended periods without maintenance or LN2 fills.

  15. Visit, revamp, and revitalize your business plan: Part 2.

    PubMed

    Waldron, David

    2011-01-01

    The diagnostic imaging department strives for the highest quality outcomes in imaging quality, in diagnostic reporting, and in providing a caring patient experience while also satisfying the needs of referring physicians. Understand how tools such as process mapping and concepts such as Six Sigma and Lean Six Sigma can be used to facilitate quality improvements and team building, resulting in staff-led process improvement initiatives. Discover how to integrate a continuous staff management cycle to implement process improvements, capture the promised performance improvements, and achieve a culture change away from the "way it has always been done".

  16. Within-subject template estimation for unbiased longitudinal image analysis.

    PubMed

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. A new approach for reducing beam hardening artifacts in polychromatic X-ray computed tomography using more accurate prior image.

    PubMed

    Wang, Hui; Xu, Yanan; Shi, Hongli

    2018-03-15

    Metal artifacts severely degrade CT image quality in clinical diagnosis and are difficult to remove, especially the beam hardening artifacts. Metal artifact reduction (MAR) methods based on prior images are the most frequently used. However, there is considerable misclassification in most prior images, caused by the absence of prior information such as the spectrum distribution of the X-ray beam source, especially when multiple or large metal objects are included. This work aims to identify a more accurate prior image to improve image quality. The proposed method includes four steps. First, the metal image is segmented by thresholding an initial image, and the metal traces are identified in the initial projection data using the forward projection of the metal image. Second, the accurate absorption model of the metal image is calculated according to the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients of the metal. Third, a new metal image is reconstructed by a general analytical reconstruction algorithm such as filtered back projection (FBP). The prior image is obtained by segmenting the difference image between the initial image and the new metal image into air, tissue, and bone. Fourth, the initial projection data are normalized by dividing them, pixel by pixel, by the projection data of the prior image. The final corrected image is obtained by interpolation, denormalization, and reconstruction. Several clinical images with dental fillings and knee prostheses were used to evaluate the proposed algorithm against the normalized metal artifact reduction (NMAR) and linear interpolation (LI) methods. The results demonstrate that the artifacts were reduced efficiently by the proposed method. The proposed method can obtain an exact prior image using the prior information about the X-ray beam source and the energy-dependent attenuation coefficients of the metal; as a result, better performance in reducing beam hardening artifacts can be achieved. Moreover, the process of the proposed method is rather simple and requires little extra calculation burden. It is superior to other algorithms when multiple and/or large implants are included.
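    The fourth step, normalizing the projection data by the prior image's projections before interpolating across the metal trace, is the core of NMAR-style correction. A 1D sketch under simplifying assumptions (a single detector row, a known set of metal-trace indices, linear interpolation in the normalized domain); the function name is hypothetical:

```python
def nmar_1d(proj, prior_proj, trace):
    """1D sketch of normalized metal artifact reduction: divide the
    measured projection by the prior's projection, linearly interpolate
    across the metal trace in the (flattened) normalized domain, then
    denormalize. `trace` is a set of metal-trace indices."""
    eps = 1e-9
    norm = [p / max(q, eps) for p, q in zip(proj, prior_proj)]
    out = norm[:]
    n = len(out)
    i = 0
    while i < n:
        if i in trace:
            j = i
            while j < n and j in trace:   # find the end of this trace run
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < n else 1.0)
            right = out[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):          # linear interpolation over the run
                t = (k - i + 1) / span
                out[k] = left * (1 - t) + right * t
            i = j
        else:
            i += 1
    return [v * q for v, q in zip(out, prior_proj)]
```

    Because the normalized sinogram is nearly flat, the interpolation introduces far less new error than interpolating the raw projections, which is exactly the advantage a more accurate prior image amplifies.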

  18. Processing electronic photos of Mercury produced by ground based observation

    NASA Astrophysics Data System (ADS)

    Ksanfomality, Leonid

    New images of Mercury have been obtained by processing ground-based observations carried out with the short-exposure technique. The disk of the planet usually extends over 6 to 7 arc seconds, corresponding to a linear image size in the telescope focal plane of about 0.3-0.5 mm on average. Processing the initial millisecond electronic photos of the planet is very labour-intensive. Some aspects of processing initial millisecond electronic photos by correlation-stacking methods were considered in (Ksanfomality et al., 2005; Ksanfomality and Sprague, 2007). The method relies on manual selection of good photos, including a so-called pilot file, which usually must be found by hand. The pilot file is the most successful frame in the opinion of the operator, and it determines the eventual result of the stacking; changing pilot files multiplies the labour of processing. The processing programs analyze the contents of a sample frame, find whatever details it contains, and search for the recurrence of these almost imperceptible details in thousands of other electronic pictures to be stacked. While the shape and position of a pilot file can still be estimated from experience, judging whether its barely distinguishable details are real lies somewhere between imaging and imagination. In 2006-07 several programs for automatic processing were created; unfortunately, none of them is as efficient as manual selection. Together with the selection, some other well-known methods are used. The point spread function (PSF) is described by a known mathematical function whose central part decreases smoothly from the center. Usually the width of this function is taken at a level of 0.7 or 0.5 of the maximum. If many thousands of initial electronic pictures are acquired, it is possible during processing to take advantage of the known statistics of random variables and to choose the width of the function at a level of, say, 0.9 of the maximum.
Then the resolution of the image improves appreciably. An essential element of the processing is the mathematical model of the unsharp mask, but this is a double-edged instrument: the result depends on the choice of the mask size. If the size is too small, all low spatial frequencies are lost and the image becomes uniformly grey; conversely, if the unsharp mask is too large, all fine details disappear. In some cases the compromise in selecting the unsharp-mask parameters becomes critical.
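
    The mask-size tradeoff described above can be illustrated with a minimal unsharp-mask sketch (an illustrative stand-in, not the author's processing pipeline; the Gaussian blur width plays the role of the mask size):

    ```python
    import numpy as np
    from numpy.fft import fft2, ifft2

    def gaussian_blur(img, sigma):
        """Blur via FFT with a Gaussian transfer function (keeps the sketch dependency-free)."""
        ny, nx = img.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
        return np.real(ifft2(fft2(img) * h))

    def unsharp_mask(img, sigma, amount=1.0):
        """Sharpen: original plus 'amount' times the high-frequency residual.
        'sigma' acts as the mask size: too small and the residual dominates,
        flattening low spatial frequencies; too large and fine detail is untouched."""
        return img + amount * (img - gaussian_blur(img, sigma))
    ```

    At a step edge the sharpened result overshoots on both sides, which is exactly the contrast gain (and the risk) the abstract describes.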

  19. Different methods of image segmentation in the process of meat marbling evaluation

    NASA Astrophysics Data System (ADS)

    Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.

    2015-07-01

    Assessment of the level of marbling in meat based on digital images is increasingly popular, as computer vision tools become more and more advanced. However, when muscle cross-sections are used as the data source for marbling evaluation, there are still several problems to cope with, and there is a need for an accurate method that would facilitate this evaluation procedure and increase its accuracy. The presented research was conducted to compare the effect of different image segmentation tools with regard to their usefulness for marbling evaluation on anatomical muscle cross-sections. This study is an initial trial in the presented field of research and an introduction to the processing and analysis of ultrasonic images.

  20. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
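
    The back-projection idea can be sketched in a toy setting: simulate a planar target at known depth, then refine a camera parameter by minimizing 3D residuals on that plane rather than 2D reprojection errors. Here only the focal length is refined, a simplified stand-in for the paper's full intrinsic/extrinsic refinement:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical setup: planar target at depth Z0, camera axis along +Z,
    # square pixels, principal point removed; only the focal length f is refined.
    Z0 = 2.0
    X = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 7),
                             np.linspace(-0.5, 0.5, 7)), -1).reshape(-1, 2)
    f_true = 800.0
    uv = f_true * X / Z0                 # forward projection to pixel offsets

    def bpp_error(f):
        """Back-project pixels to the known plane Z=Z0 and sum 3D residuals."""
        X_hat = uv * Z0 / f
        return np.sum((X_hat - X) ** 2)

    res = minimize_scalar(bpp_error, bounds=(500.0, 1200.0), method='bounded')
    ```

    Minimizing in 3D space in this way directly targets reconstruction error, which is the "more physically useful" criterion the abstract argues for.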

  1. 2.5D transient electromagnetic inversion with OCCAM method

    NASA Astrophysics Data System (ADS)

    Li, R.; Hu, X.

    2016-12-01

    In applications of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied for imaging over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with a finite-difference time-domain (FDTD) forward solver, is mainly implemented with the Nonlinear Conjugate Gradient (NLCG) method. But the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the iteration may fail to converge. We use the OCCAM inversion method to avoid this weakness; OCCAM inversion has proven to be a more stable and reliable way to image 2.5D subsurface electrical conductivity. First, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Second, we image the 2.5D structure with the OCCAM inversion scheme, using an appropriate objective error functional that we establish, and we present a data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme. The sensitivity matrix is calculated by the method of time-integrated back-propagated fields. Imaging results for the example model shown in Fig. 1 demonstrate that the OCCAM scheme is an efficient inversion method for TEM with FDTD forward modeling, converging within a few iterations. Summarizing the imaging process, we draw the following conclusions. First, 2.5D imaging in the FDTD framework with OCCAM inversion yields the desired results for a resistivity structure in a homogeneous half-space. Second, the imaging results usually do not depend strongly on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model, so in applications it is better to set the initial model using other geologic information.
When the background resistivity fits the true model well, imaging an anomalous body needs only a few iteration steps. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.
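
    A minimal Occam-style solve for a linear toy problem (not the authors' 2.5D FDTD system) illustrates the scheme's key move: among all models that fit the data to within the noise, pick the smoothest one by taking the largest workable Lagrange multiplier. The operator, model, and misfit target below are all synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_d, n_m = 20, 30
    G = rng.normal(size=(n_d, n_m))                  # toy linear forward operator
    m_true = np.sin(np.linspace(0, np.pi, n_m))
    d = G @ m_true + 0.01 * rng.normal(size=n_d)

    # First-difference roughening matrix: Occam penalises roughness ||R m||^2.
    R = np.diff(np.eye(n_m), axis=0)

    def occam_model(mu):
        """Smoothest-model solve for a given Lagrange multiplier mu."""
        return np.linalg.solve(G.T @ G + mu * R.T @ R, G.T @ d)

    # Occam line search: take the LARGEST mu whose misfit still meets the
    # target, i.e. the smoothest model consistent with the data.
    target = 0.01 * np.sqrt(n_d)
    mus = np.logspace(-6, 2, 80)
    misfits = [np.linalg.norm(d - G @ occam_model(mu)) for mu in mus]
    mu_star = max(mu for mu, mf in zip(mus, misfits) if mf <= 2 * target)
    ```

    Fixing the misfit target and maximizing smoothness is what makes Occam less sensitive to the multiplier than NLCG, as the abstract notes.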

  2. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    NASA Astrophysics Data System (ADS)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process itself. The proposed framework is a region-based segmentation method consisting of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial border curves are evolved using a statistical level-set approach with topology-preserving criteria, segmenting and separating the nuclei at the same time. The proposed method is evaluated on Hematoxylin and Eosin and fluorescently stained images through qualitative and quantitative analysis, showing that it outperforms thresholding and watershed segmentation approaches.
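
    The colour deconvolution step (module i) can be sketched in optical-density space; the stain vectors below are illustrative H&E values in the spirit of Ruifrok and Johnston's method, not necessarily those used by the authors:

    ```python
    import numpy as np

    # Hypothetical normalised stain vectors (optical-density space) for H&E.
    hematoxylin = np.array([0.65, 0.70, 0.29])
    eosin = np.array([0.07, 0.99, 0.11])
    residual = np.cross(hematoxylin, eosin)          # completes the basis
    stains = np.stack([hematoxylin, eosin, residual / np.linalg.norm(residual)])

    def color_deconvolve(rgb):
        """Map an RGB image (uint8) to per-stain concentration channels:
        OD = -log10((I+1)/256), then unmix with the stain matrix inverse."""
        od = -np.log10((rgb.astype(float) + 1.0) / 256.0)
        conc = od.reshape(-1, 3) @ np.linalg.pinv(stains)
        return conc.reshape(rgb.shape)
    ```

    Working in optical density makes stain absorption additive, so a linear unmix separates the channels that the seed-detection stage then operates on.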

  3. A Mathematical Model for Storage and Recall of Images using Targeted Synchronization of Coupled Maps.

    PubMed

    Palaniyandi, P; Rangarajan, Govindan

    2017-08-21

    We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to storage/recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronization and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena is explicitly demonstrated.
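
    The flavour of recall-from-random-initial-conditions can be shown with a toy coupled-map system (an illustrative stand-in using logistic maps, not the authors' Rulkov-map construction): each map is weakly chaotic on its own but is pulled by its coupling toward a stored grey level, so any random start converges to the stored pattern.

    ```python
    import numpy as np

    def f(x):
        return 4.0 * x * (1.0 - x)           # chaotic logistic map on [0, 1]

    def recall(stored, eps=0.9, steps=200, seed=0):
        """Toy recall: map i is driven toward its stored grey level s_i.
        With eps near 1 the contraction |(1-eps) f'(x)| < 1 guarantees
        convergence to a fixed point near s_i from ANY random start."""
        x = np.random.default_rng(seed).random(stored.shape)
        for _ in range(steps):
            x = (1.0 - eps) * f(x) + eps * stored
        return x

    stored = np.array([0.2, 0.8, 0.8, 0.2, 0.8])   # a binary "image" as grey levels
    recalled = recall(stored)
    ```

    Thresholding the converged state recovers the stored binary pattern regardless of the random seed, which is the essential recall property the paper demonstrates on the Lena image.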

  4. Live imaging of developmental processes in a living meristem of Davidia involucrata (Nyssaceae)

    PubMed Central

    Jerominek, Markus; Bull-Hereñu, Kester; Arndt, Melanie; Claßen-Bockhoff, Regine

    2014-01-01

    Morphogenesis in plants is usually reconstructed by scanning electron microscopy and histology of meristematic structures. These techniques are destructive and require many samples to obtain a consecutive series of states. Unfortunately, using this methodology the absolute timing of growth and complete relative initiation of organs remain obscure. To overcome this limitation, an in vivo observational method based on Epi-Illumination Light Microscopy (ELM) was developed and tested with a male inflorescence meristem (floral unit) of the handkerchief tree Davidia involucrata Baill. (Nyssaceae). We asked whether the most basal flowers of this floral unit arise in a basipetal sequence or, alternatively, are delayed in their development. The growing meristem was observed for 30 days, the longest live observation of a meristem achieved to date. The sequence of primordium initiation indicates a later initiation of the most basal flowers and not earlier or simultaneously as SEM images could suggest. D. involucrata exemplarily shows that live-ELM gives new insights into developmental processes of plants. In addition to morphogenetic questions such as the transition from vegetative to reproductive meristems or the absolute timing of ontogenetic processes, this method may also help to quantify cellular growth processes in the context of molecular physiology and developmental genetics studies. PMID:25431576

  5. Live imaging of developmental processes in a living meristem of Davidia involucrata (Nyssaceae).

    PubMed

    Jerominek, Markus; Bull-Hereñu, Kester; Arndt, Melanie; Claßen-Bockhoff, Regine

    2014-01-01

    Morphogenesis in plants is usually reconstructed by scanning electron microscopy and histology of meristematic structures. These techniques are destructive and require many samples to obtain a consecutive series of states. Unfortunately, using this methodology the absolute timing of growth and complete relative initiation of organs remain obscure. To overcome this limitation, an in vivo observational method based on Epi-Illumination Light Microscopy (ELM) was developed and tested with a male inflorescence meristem (floral unit) of the handkerchief tree Davidia involucrata Baill. (Nyssaceae). We asked whether the most basal flowers of this floral unit arise in a basipetal sequence or, alternatively, are delayed in their development. The growing meristem was observed for 30 days, the longest live observation of a meristem achieved to date. The sequence of primordium initiation indicates a later initiation of the most basal flowers and not earlier or simultaneously as SEM images could suggest. D. involucrata exemplarily shows that live-ELM gives new insights into developmental processes of plants. In addition to morphogenetic questions such as the transition from vegetative to reproductive meristems or the absolute timing of ontogenetic processes, this method may also help to quantify cellular growth processes in the context of molecular physiology and developmental genetics studies.

  6. Initial orienting towards sexually relevant stimuli: preliminary evidence from eye movement measures.

    PubMed

    Fromberger, Peter; Jordan, Kirsten; von Herder, Jakob; Steinkrauss, Henrike; Nemetschek, Rebekka; Stolpmann, Georg; Müller, Jürgen Leo

    2012-08-01

    It has been proposed that sexual stimuli will be processed in a comparable manner to other evolutionarily meaningful stimuli (such as spiders or snakes) and therefore elicit an attentional bias and more attentional engagement (Spiering and Everaerd, In E. Janssen (Ed.), The psychophysiology of sex (pp. 166-183). Bloomington: Indiana University Press, 2007). To investigate early and late attentional processes while looking at sexual stimuli, heterosexual men (n = 12) viewed pairs of sexually preferred (images of women) and sexually non-preferred images (images of girls, boys or men), while eye movements were measured. Early attentional processing (initial orienting) was assessed by the number of first fixations and late attentional processing (maintenance of attention) was assessed by relative fixation time. Results showed that relative fixation time was significantly longer for sexually preferred stimuli than for sexually non-preferred stimuli. Furthermore, the first fixation was more often directed towards the preferred sexual stimulus, when simultaneously presented with a non-sexually preferred stimulus. Thus, the current study showed for the first time an attentional bias to sexually relevant stimuli when presented simultaneously with sexually irrelevant pictures. This finding, along with the discovery that heterosexual men maintained their attention to sexually relevant stimuli, highlights the importance of investigating early and late attentional processes while viewing sexual stimuli. Furthermore, the current study showed that sexually relevant stimuli are favored by the human attentional system.

  7. Integrating two-photon microscopy and cryo-electron microscopy for studying the interaction of Cafeteria roenbergensis and CroV

    NASA Astrophysics Data System (ADS)

    Aghvami, Seyedmohammadali

    Cafeteria roenbergensis (Cro) is a marine zooplankton whose voracious appetite plays a significant role in regulating bacteria populations. The giant virus that infects Cro, known as Cafeteria roenbergensis virus (CroV), has an important effect on the mortality of Cro populations. Although viral infections are extremely abundant in the oceans, the complete course of the infection is still unknown. We study the infection process of Cro by CroV to find out whether the initial contact occurs through phagocytosis or through CroV penetrating the host cell membrane directly. Cro moves at speeds in the range of 10-100 um/s; therefore, traditional imaging techniques face many difficulties and challenges in studying this virus-host interaction. We apply two-photon fluorescence microscopy to image the infection process. Images are taken at video rate (30 frames/s), which enables us to capture the moment of interaction. We are able to image host and virus simultaneously, with CroV stained by SYBR Gold dye and Cro excited through NADH autofluorescence. For further structural biology study, we will obtain atomic-level resolution information on the infection: after capturing the initial moment of infection, we will freeze the sample instantly and image it with a cryo-electron microscope.

  8. Initial test of MITA/DIMM with an operational CBP system

    NASA Astrophysics Data System (ADS)

    Baldwin, Kevin; Hanna, Randall; Brown, Andrea; Brown, David; Moyer, Steven; Hixson, Jonathan G.

    2018-05-01

    The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University/Applied Physics Laboratory) and CERDEC NVESD MSD (Communications and Electronics Research Development Engineering Command Night Vision and Electronic Sensors Directorate Modeling and Simulation Division). The intent was to develop an efficient methodology whereby imaging system performance could be quickly and objectively characterized in a field setting. The initial design, development, and testing spanned a period of approximately 18 months with the initial project coming to a conclusion after testing of the MITA system in June 2017 with a fielded CBP system. The NVESD contribution to MITA was thermally heated target resolution boards deployed to support a range close to the sensor and, when possible, at range with the targets of interest. JHU/APL developed a laser DIMM (Differential Image Motion Monitor) system designed to measure the optical turbulence present along the line of sight of the imaging system during the time of image collection. The imagery collected of the target board was processed to calculate the in situ system resolution. This in situ imaging system resolution and the time-correlated turbulence measured by the DIMM system were used in NV-IPM (Night Vision Integrated Performance Model) to calculate the theoretical imaging system performance. Overall, this proves the MITA concept feasible. However, MITA is still in the initial phases of development and requires further verification and validation to ensure accuracy and reliability of both the instrument and the imaging system performance predictions.

  9. Bio-inspired color sketch for eco-friendly printing

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Tolstaya, Ekaterina V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sang Ho; Choi, Donchul

    2012-01-01

    Saving toner/ink is an important task in modern printing devices, with a positive ecological and social impact. We propose a technique for converting print-job pictures into recognizable and pleasant color sketches. Drawing a "pencil sketch" from a photo belongs to a special area of image processing and computer graphics: non-photorealistic rendering. We describe a new approach to automatic sketch generation that creates well-recognizable sketches while partly preserving the colors of the initial picture. Our sketches contain significantly fewer color dots than the initial images, which helps to save toner/ink. Our bio-inspired approach is based on a sophisticated edge detection technique for mask creation, followed by multiplication of the contrast-increased source image by this mask. To construct the mask we use DoG edge detection, obtained by blending the initial image with its blurred copy through an alpha-channel created from a Saliency Map according to a pre-attentive human vision model. Measurement of the percentage of toner saved and a user study confirm the effectiveness of the proposed technique for toner saving in an eco-friendly printing mode.
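
    The core of the mask construction can be sketched with a plain difference-of-Gaussians edge detector (a simplified stand-in: the paper's saliency-driven alpha blending is omitted, and the threshold below is an assumed parameter):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_edge_mask(gray, sigma=1.0, k=1.6, thresh=0.02):
        """Difference-of-Gaussians edge strength, thresholded to a binary mask.
        Multiplying the contrast-boosted source image by this mask keeps ink
        only on perceptually salient edges, which is what saves toner."""
        dog = gaussian_filter(gray, sigma) - gaussian_filter(gray, k * sigma)
        return np.abs(dog) > thresh
    ```

    On a step edge the mask fires only in a narrow band around the edge, leaving flat regions blank, i.e. unprinted.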

  10. Noise Power Spectrum in PROPELLER MR Imaging.

    PubMed

    Ichinoseki, Yuki; Nagasaka, Tatsuo; Miyamoto, Kota; Tamura, Hajime; Mori, Issei; Machida, Yoshio

    2015-01-01

    The noise power spectrum (NPS), an index for noise evaluation, represents the frequency characteristics of image noise. We measured the NPS in PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) magnetic resonance (MR) imaging, a nonuniform data sampling technique, as an initial study for practical MR image evaluation using the NPS. The 2-dimensional (2D) NPS reflected the k-space sampling density and showed agreement with the shape of the k-space trajectory as expected theoretically. Additionally, the 2D NPS allowed visualization of a part of the image reconstruction process, such as filtering and motion correction.
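
    The ensemble definition of the 2D NPS can be sketched directly from its standard estimator (a generic formulation, not tied to the PROPELLER reconstruction itself):

    ```python
    import numpy as np

    def nps_2d(noise_images, px=1.0, py=1.0):
        """2D noise power spectrum from an ensemble of noise-only ROIs:
        NPS(fx, fy) = <|FFT2(roi - mean)|^2> * (px * py) / (Nx * Ny),
        where px, py are pixel sizes and <.> averages over realisations."""
        rois = np.asarray(noise_images, dtype=float)
        n, ny, nx = rois.shape
        rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove DC per ROI
        spectra = np.abs(np.fft.fft2(rois)) ** 2
        return spectra.mean(axis=0) * (px * py) / (nx * ny)
    ```

    For white noise of unit variance and unit pixel size the spectrum is flat at 1, a useful sanity check; in PROPELLER the nonuniform k-space sampling density shows up as structure in this 2D map.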

  11. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated steps in classical methods of translational motion compensation for Inverse Synthetic Aperture Radar (ISAR) imaging. In the classical method of rotating-object imaging, the reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when correcting Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, using the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a scattering point can be chosen. Envelope alignment and phase compensation are then conducted using the selected scattering point as the common reference point. The keystone transform is subsequently applied to further improve imaging quality. Both simulation experiments and real data processing demonstrate the performance of the proposed method compared with the classical method.

  12. Techniques for identifying dust devils in mars pathfinder images

    USGS Publications Warehouse

    Metzger, S.M.; Carr, J.R.; Johnson, J. R.; Parker, T.J.; Lemmon, M.T.

    2000-01-01

    Image processing methods used to identify and enhance dust devil features imaged by IMP (Imager for Mars Pathfinder) are reviewed. Spectral differences, visible red minus visible blue, were used for initial dust devil searches, driven by the observation that Martian dust has high red and low blue reflectance. The Martian sky proved to be more heavily dust-laden than pre-Pathfinder predictions, based on analysis of images from the Hubble Space Telescope. As a result, these initial spectral difference methods failed to contrast dust devils with background dust haze. Imager artifacts (dust motes on the camera lens, flat-field effects caused by imperfections in the CCD, and projection onto a flat sensor plane by a convex lens) further impeded the ability to resolve subtle dust devil features. Consequently, reference images containing sky with a minimal horizon were first subtracted from each spectral filter image to remove camera artifacts and reduce the background dust haze signal. Once the sky-flat preprocessing step was completed, the red-minus-blue spectral difference scheme was attempted again. Dust devils then were successfully identified as bright plumes. False-color ratios using calibrated IMP images were found useful for visualizing dust plumes, verifying initial discoveries as vortex-like features. Enhancement of monochromatic (especially blue filter) images revealed dust devils as silhouettes against brighter background sky. Experiments with principal components transformation identified dust devils in raw, uncalibrated IMP images and further showed relative movement of dust devils across the Martian surface. A variety of methods therefore served qualitative and quantitative goals for dust plume identification and analysis in an environment where such features are obscure.
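
    The sky-flat preprocessing followed by the red-minus-blue difference can be sketched on synthetic arrays (IMP calibration details omitted):

    ```python
    import numpy as np

    def sky_flat_difference(red, blue, red_sky, blue_sky):
        """Subtract per-filter sky-flat reference images to remove fixed camera
        artifacts and background haze, then take red minus blue; dusty plumes
        (high red, low blue reflectance) stand out as bright residuals."""
        return (red - red_sky) - (blue - blue_sky)
    ```

    A fixed artifact (e.g. a dust mote on the lens) appears in both the science and reference frames and cancels, while a plume present only in the science frames survives the difference.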

  13. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two and one half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
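
    The first, non-iterative stage can be sketched as a single curvature-weighted random draw followed by a 2.5D Delaunay triangulation; here gradient magnitude stands in for the paper's curvature measure, and the edge-flipping refinement of the second stage is omitted:

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def adaptive_mesh(range_img, n_points, seed=0):
        """Sample points with probability proportional to local gradient
        magnitude (a cheap curvature proxy), so samples crowd high-variation
        areas and thin out on flat regions, then Delaunay-triangulate in 2D
        and carry the range values as the half-dimension."""
        gy, gx = np.gradient(range_img.astype(float))
        w = np.hypot(gx, gy).ravel() + 1e-6          # keep flat areas samplable
        rng = np.random.default_rng(seed)
        idx = rng.choice(w.size, size=n_points, replace=False, p=w / w.sum())
        ys, xs = np.unravel_index(idx, range_img.shape)
        tri = Delaunay(np.column_stack([xs, ys]))
        return xs, ys, range_img[ys, xs], tri
    ```

    Because the draw is a single weighted selection rather than an iterative refinement, it is inherently parallel, matching the efficiency argument in the abstract.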

  14. USC orthogonal multiprocessor for image processing with neural networks

    NASA Astrophysics Data System (ADS)

    Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid

    1990-07-01

    This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.

  15. Mountain building processes in the Central Andes

    NASA Technical Reports Server (NTRS)

    Bloom, A. L.; Isacks, B. L.

    1986-01-01

    False color composite images of the Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The use of the roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 was very useful. Several areas in which good comparisons with ground data existed, were examined in detail. Parallel to the visual approach, image processing methods are being developed which allow the complete use of the seven TM bands. The data was organized into easily accessible files and a visual cataloging of the quads (quarter TM scenes) with preliminary registration with the best available charts for the region. The catalog has proved to be a valuable tool for the rapid scanning of quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magnetism in relation to the Nazca-South American plate interaction is at an initial stage.

  16. Mountain building processes in the Central Andes

    NASA Astrophysics Data System (ADS)

    Bloom, A. L.; Isacks, B. L.

    False color composite images of the Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The use of the roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 was very useful. Several areas in which good comparisons with ground data existed, were examined in detail. Parallel to the visual approach, image processing methods are being developed which allow the complete use of the seven TM bands. The data was organized into easily accessible files and a visual cataloging of the quads (quarter TM scenes) with preliminary registration with the best available charts for the region. The catalog has proved to be a valuable tool for the rapid scanning of quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magnetism in relation to the Nazca-South American plate interaction is at an initial stage.

  17. Colour application on mammography image segmentation

    NASA Astrophysics Data System (ADS)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour on the segmentation of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene; colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as a colour space must be treated as a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images in order to test the effect of colour on the segmentation of abnormality regions. We performed the segmentation with the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with every colour map can be performed successfully, even for blurred and noisy images; moreover, the segmented abnormality region is smaller than the area segmented without a colour map. The green colour map produced the smallest percentage of average relative error (10.009%), while the yellow colour map gave the largest (11.367%).
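
    The clustering step can be illustrated with a minimal Fuzzy C-means on a 1D intensity sample (illustrative only; the study's colour-mapped images and error-of-area metric are not reproduced here):

    ```python
    import numpy as np

    def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
        """Minimal Fuzzy C-means on a 1D intensity array: alternate between
        fuzzy memberships u_ik ~ d_ik^(-2/(m-1)) and membership-weighted
        cluster centres until the iteration settles."""
        rng = np.random.default_rng(seed)
        u = rng.random((c, x.size))
        u /= u.sum(axis=0)
        for _ in range(iters):
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12
            u = d ** (-2.0 / (m - 1))
            u /= u.sum(axis=0)
        return centers, u
    ```

    On a bimodal intensity distribution the centres settle on the two modes, and taking the argmax membership per pixel yields the segmentation.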

  18. Report to the Congress on the Strategic Defense Initiative, 1991

    DTIC Science & Technology

    1991-05-01

    ...ultraviolet, and infrared radiation-hardened charge-coupled device imagers, step-stare sensor signal processing algorithms, and processor... Demonstration Experiment (LODE) resolved central issues associated with wavefront sensing and control, and the 4-meter Large Advanced Mirror Program (LAMP)... Figures cited: Firepond CO2 Imaging Radar Demonstration (Figure 4-16); IBSS and the Shuttle (Figure 4-17).

  19. Live-cell imaging to measure BAX recruitment kinetics to mitochondria during apoptosis

    PubMed Central

    Maes, Margaret E.; Schlamp, Cassandra L.

    2017-01-01

    The pro-apoptotic BCL2 gene family member, BAX, plays a pivotal role in the intrinsic apoptotic pathway. Under cellular stress, BAX recruitment to the mitochondria occurs when activated BAX forms dimers, then oligomers, to initiate mitochondrial outer membrane permeabilization (MOMP), a process critical for apoptotic progression. The activation and recruitment of BAX to form oligomers has been studied for two decades using fusion proteins with a fluorescent reporter attached in-frame to the BAX N-terminus. We applied high-speed live cell imaging to monitor the recruitment of BAX fusion proteins in dying cells. Data from time-lapse imaging were validated against the activity of endogenous BAX in cells and analyzed using sigmoid mathematical functions to obtain detail of the kinetic parameters of the recruitment process at individual mitochondrial foci. BAX fusion proteins behave like endogenous BAX during apoptosis. Kinetic studies show that fusion protein recruitment is also minimally affected in cells lacking endogenous BAK or BAX genes, but that the kinetics are moderately, but significantly, different with different fluorescent tags in the fusion constructs. In experiments testing BAX recruitment in 3 different cell lines, our results show that regardless of cell type, once activated, BAX recruitment initiates simultaneously within a cell, but exhibits varying rates of recruitment at individual mitochondrial foci. Very early during BAX recruitment, pro-apoptotic molecules are released in the process of MOMP, but different molecules are released at different times and rates relative to the time of BAX recruitment initiation. These results provide a method for BAX kinetic analysis in living cells and yield greater detail of multiple characteristics of BAX-induced MOMP in living cells that were initially observed in cell-free studies. PMID:28880942
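
    Fitting a sigmoid to a per-focus recruitment time course, in the spirit of the kinetic analysis described, might look like the following sketch (synthetic data; the logistic form and parameter names are assumptions, not the authors' exact model):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(t, a, t_half, tau):
        """Logistic recruitment curve: amplitude a, half-time t_half, rate 1/tau."""
        return a / (1.0 + np.exp(-(t - t_half) / tau))

    # Hypothetical fluorescence time course at one mitochondrial focus
    t = np.linspace(0, 60, 121)                      # minutes
    rng = np.random.default_rng(0)
    y = sigmoid(t, 100.0, 30.0, 4.0) + rng.normal(0.0, 2.0, t.size)

    popt, _ = curve_fit(sigmoid, t, y, p0=[y.max(), t[t.size // 2], 5.0])
    a_fit, t_half_fit, tau_fit = popt
    ```

    Comparing t_half across foci tests whether recruitment initiates simultaneously, while tau captures the per-focus recruitment rate that the study found to vary.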

  20. Live-cell imaging to measure BAX recruitment kinetics to mitochondria during apoptosis.

    PubMed

    Maes, Margaret E; Schlamp, Cassandra L; Nickells, Robert W

    2017-01-01

    The pro-apoptotic BCL2 gene family member BAX plays a pivotal role in the intrinsic apoptotic pathway. Under cellular stress, BAX recruitment to the mitochondria occurs when activated BAX forms dimers, then oligomers, to initiate mitochondrial outer membrane permeabilization (MOMP), a process critical for apoptotic progression. The activation and recruitment of BAX to form oligomers has been studied for two decades using fusion proteins with a fluorescent reporter attached in-frame to the BAX N-terminus. We applied high-speed live-cell imaging to monitor the recruitment of BAX fusion proteins in dying cells. Data from time-lapse imaging were validated against the activity of endogenous BAX in cells, and analyzed using sigmoid mathematical functions to obtain detailed kinetic parameters of the recruitment process at individual mitochondrial foci. BAX fusion proteins behave like endogenous BAX during apoptosis. Kinetic studies show that fusion protein recruitment is minimally affected in cells lacking endogenous BAK or BAX genes, but that the kinetics are moderately, though significantly, different with different fluorescent tags in the fusion constructs. In experiments testing BAX recruitment in three different cell lines, our results show that regardless of cell type, once activated, BAX recruitment initiates simultaneously within a cell but exhibits varying rates of recruitment at individual mitochondrial foci. Very early during BAX recruitment, pro-apoptotic molecules are released in the process of MOMP, but different molecules are released at different times and rates relative to the time of BAX recruitment initiation. These results provide a method for BAX kinetic analysis in living cells and yield greater detail on multiple characteristics of BAX-induced MOMP in living cells that were initially observed in cell-free studies.

  1. Multi-focus image fusion using a guided-filter-based difference image.

    PubMed

    Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Yang, Tingwu

    2016-03-20

    The aim of multi-focus image fusion is to integrate several partially focused images into one all-in-focus image. To this end, a new multi-focus image fusion method based on a guided filter is proposed, together with an efficient salient-feature extraction method; feature extraction is the primary focus of the present work. Based on the extracted salient features, the guided filter is first used to obtain a smoothed image that retains the sharpest regions. To obtain the initial fusion map, we compose a mixed focus measure by combining the variance of image intensities with the energy of the image gradient. The initial fusion map is then processed by a morphological filter to obtain a refined fusion map. Lastly, the final fusion map is derived from the refined fusion map and optimized by a guided filter. Experimental results demonstrate that the proposed method markedly improves fusion performance compared to previous fusion methods and is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual quality and objective quality metrics.
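
    The core decision step can be sketched as follows. This is a minimal illustration, assuming the mixed focus measure described above (local variance plus gradient energy); a median filter stands in for the morphological cleanup and guided-filter optimization, which are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def focus_measure(img, size=7):
    """Mixed focus measure: local intensity variance plus gradient energy."""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, size)
    var = ndimage.uniform_filter(img * img, size) - mean * mean
    gy, gx = np.gradient(img)
    energy = ndimage.uniform_filter(gx * gx + gy * gy, size)
    return var + energy

def fuse(img_a, img_b, size=7):
    """Initial fusion map = pixel-wise focus comparison, cleaned by a
    median filter, then used to select the in-focus pixel from each input."""
    fm_a, fm_b = focus_measure(img_a, size), focus_measure(img_b, size)
    initial_map = fm_a > fm_b
    cleaned = ndimage.median_filter(initial_map.astype(np.uint8), 9).astype(bool)
    return np.where(cleaned, img_a, img_b)

# Two synthetic partially focused images: each sharp in one half.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
blurred = ndimage.gaussian_filter(sharp, 3)
img_a = sharp.copy(); img_a[:, 32:] = blurred[:, 32:]   # left half in focus
img_b = sharp.copy(); img_b[:, :32] = blurred[:, :32]   # right half in focus
fused = fuse(img_a, img_b)
```

    The fused result should track the all-in-focus image more closely than either partially focused input.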

  2. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    Chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, in which the plain-image is first divided into four sub-images and the positions of the pixels in the whole image are then shuffled. A 280-bit external secret key is employed to generate the initial conditions and parameters of the two chaotic systems. Key space analysis, various statistical analyses, information entropy analysis, differential analysis, and key sensitivity analysis are carried out to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison with other image encryption schemes, the new algorithm is more secure and fast enough for practical image encryption. Moreover, an extensive tolerance analysis of common image processing operations, such as noise addition, cropping, JPEG compression, rotation, brightening, and darkening, has been performed on the proposed technique. The results reveal that the proposed image encryption method is robust against such image processing operations and geometric attacks. PMID:25826602
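
    The permutation-diffusion structure itself can be sketched in a few lines. This toy version uses a single logistic map in place of the paper's CML and fractional-order system, and a single float in place of the 280-bit key; it illustrates the architecture only and offers no real security.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Logistic-map chaotic sequence; x0 in (0, 1) plays the role of the key."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.3456):
    flat = img.flatten()
    n = flat.size
    # Permutation stage: shuffle pixel positions by sorting a chaotic sequence.
    perm = np.argsort(logistic_sequence(key, n))
    shuffled = flat[perm]
    # Diffusion stage: XOR each pixel with a chaotic keystream byte.
    stream = (logistic_sequence(key / 2.0, n) * 256).astype(np.uint8)
    return (shuffled ^ stream).reshape(img.shape), perm, stream

def decrypt(cipher, perm, stream):
    flat = cipher.flatten() ^ stream     # undo diffusion
    out = np.empty_like(flat)
    out[perm] = flat                     # undo permutation
    return out.reshape(cipher.shape)

rng = np.random.default_rng(2)
plain = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cipher, perm, stream = encrypt(plain)
recovered = decrypt(cipher, perm, stream)
```

    Decryption simply inverts the two stages in reverse order, which is why the permutation indices and keystream must be reproducible from the key alone.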

  3. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, after geometric calibration and radiometric correction we first introduce an efficient image stitching algorithm, which employs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
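
    The SLIC-style over-segmentation that produces the initial partition can be approximated by k-means over joint (intensity, position) features. The sketch below is a minimal stand-in, not the authors' implementation, and it omits the BPT construction entirely; `compactness` weights spatial against spectral distance, as in SLIC.

```python
import numpy as np

def slic_superpixels(img, n_seg=16, compactness=0.1, iters=5):
    """k-means over (intensity, x, y) features: a minimal SLIC-style
    over-segmentation that could seed a Binary Partition Tree."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel(),
                      compactness * xx.ravel() / w,
                      compactness * yy.ravel() / h], axis=1)
    # Seed cluster centers on a regular grid, as SLIC does.
    side = int(np.sqrt(n_seg))
    cy = np.linspace(0, h - 1, side).astype(int)
    cx = np.linspace(0, w - 1, side).astype(int)
    centers = feats[np.repeat(cy, side) * w + np.tile(cx, side)]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

# Two homogeneous regions with mild noise: superpixels should not straddle them.
rng = np.random.default_rng(8)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += rng.normal(0, 0.02, img.shape)
labels = slic_superpixels(img)
```

    Because superpixels respect strong intensity boundaries, merging them bottom-up (the BPT step) operates on far fewer regions than raw pixels.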

  4. Real-Time Intravascular Ultrasound and Photoacoustic Imaging

    PubMed Central

    VanderLaan, Donald; Karpiouk, Andrei; Yeager, Doug; Emelianov, Stanislav

    2018-01-01

    Combined intravascular ultrasound and photoacoustic imaging (IVUS/IVPA) is an emerging hybrid modality being explored as a means of improving the characterization of anatomical and compositional features of atherosclerotic plaque. While initial demonstrations of the technique have been encouraging, they have been limited by catheter rotation, data acquisition, display, and processing rates on the order of several seconds per frame, as well as by the use of off-line image processing. Herein, we present a complete system and method capable of real-time IVUS/IVPA imaging, with online data acquisition, image processing, and display of both IVUS and IVPA images. The integrated IVUS/IVPA catheter is fully contained within a 1 mm outer diameter torque cable coupled on the proximal end to a custom-designed spindle enabling optical and electrical coupling to the system hardware, including a nanosecond-pulsed laser with a controllable pulse repetition frequency of greater than 10 kHz, a motor and servo drive, an ultrasound pulser/receiver, and a 200 MHz digitizer. The system performance is characterized and demonstrated on a vessel-mimicking phantom with an embedded coronary stent intended to provide IVPA contrast within the context of an IVUS image. PMID:28092507

  5. Chromaticity based smoke removal in endoscopic images

    NASA Astrophysics Data System (ADS)

    Tchaka, Kevin; Pawar, Vijay M.; Stoyanov, Danail

    2017-02-01

    In minimally invasive surgery, image quality is a critical prerequisite to ensure a surgeon's ability to perform a procedure. In endoscopic procedures, image quality can deteriorate for a number of reasons, such as fogging due to the temperature gradient after intra-corporeal insertion, lack of focus, and smoke generated when electro-cautery is used to dissect tissues without bleeding. In this paper we investigate the use of vision processing techniques to remove surgical smoke and improve the clarity of the image. We model the image formation process by introducing a haze medium to account for the degradation of visibility. For simplicity and computational efficiency we use an adapted dark-channel prior method combined with histogram equalization to remove smoke artifacts, recover the radiance image, and enhance the contrast and brightness of the final result. Our initial results on images from robot-assisted procedures are promising and show that the proposed approach may be used to enhance image quality during surgery without additional suction devices. In addition, the processing pipeline may serve as an important part of a robust surgical vision pipeline that continues working in the presence of smoke.
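
    A bare-bones dark-channel prior dehazer, in the spirit of the adapted method above, looks like the sketch below. It is a generic textbook formulation on synthetic data, not the authors' adaptation, and the histogram-equalization step is omitted; `omega` and `t_min` are the usual dark-channel tuning constants.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels and a local patch."""
    return minimum_filter(img.min(axis=2), patch)

def desmoke(img, omega=0.95, t_min=0.1):
    """Dark-channel-prior dehazing: estimate atmospheric light A from the
    brightest dark-channel pixels, then invert I = J*t + A*(1 - t)."""
    dark = dark_channel(img)
    # Atmospheric light: mean color of the top ~0.1% brightest dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * dark_channel(img / A)      # transmission estimate
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Synthetic smoky frame: radiance attenuated by a uniform smoke layer.
rng = np.random.default_rng(3)
scene = rng.random((32, 32, 3)) * 0.5
hazy = scene * 0.6 + np.array([0.9, 0.9, 0.9]) * 0.4
restored = desmoke(hazy)
```

    Dividing out the estimated transmission stretches the washed-out intensities back apart, which is why the restored frame regains contrast relative to the smoky input.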

  6. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an indispensable tool for managing our environment, generally by producing land cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new (at least partly separated) images from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs to a supervised classifier integrating textural information. Classification of these "separated" images shows a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e., non-separated) images. These results demonstrate the value of NMF as an attractive pre-processing step for classification of multispectral remote sensing imagery.
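
    The NMF unmixing idea can be sketched with the classic Lee-Seung multiplicative updates on synthetic mixed pixels. This is a plain NMF without the sparse-coding term used in the paper, and all data below are simulated.

```python
import numpy as np

def nmf(V, r, iters=1000, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F.
    Rows of H act as pure-material spectra; rows of W as per-pixel abundances."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic scene: 200 pixels, 4 spectral bands, 3 "pure" materials.
rng = np.random.default_rng(4)
spectra = rng.random((3, 4)) + 0.5       # nonnegative endmember spectra
abund = rng.dirichlet(np.ones(3), 200)   # per-pixel mixing fractions
V = abund @ spectra
W, H = nmf(V, 3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The columns of W, reshaped back to image geometry, are the "separated" abundance images that would then be fed to the supervised classifier.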

  7. Artificial intelligence and signal processing for infrastructure assessment

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif

    2015-04-01

    Ground Penetrating Radar (GPR) is recognized as an effective nondestructive evaluation technique for improving the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of this technique, mainly because a trained, experienced person is needed to interpret the images obtained by the GPR system. In this paper, an algorithm to classify and assess the condition of infrastructure utilizing image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defective and healthy slabs are used to train a computer-vision-based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.

  8. Processing system of jaws tomograms for pathology identification and surgical guide modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putrik, M. B., E-mail: pmb-88@mail.ru; Ivanov, V. Yu.; Lavrentyeva, Yu. E.

    The aim of the study is to create an image processing system that allows dentists to find pathological resorption and to build a surgical guide surface automatically. X-ray images of the jaws from cone beam tomography or spiral computed tomography are the initial data for processing. One patient's examination can include up to 600 images (tomograms), which is why a processing system for fast automated pathology search is necessary. X-ray images are useful not only for diagnosis but also for treatment planning. We have studied the case of dental implantation: surgical guides are used to ensure successful surgical manipulation. We have created a processing system that automatically traces jaw and tooth boundaries on the X-ray image. After this step, the obtained tooth boundaries are used for surgical guide surface modeling, and the jaw boundaries limit the area for the subsequent pathology search. The criterion for the presence of pathological resorption zones inside the limited area is based on a statistical investigation. After these steps, it is possible to manufacture the surgical guide on a 3D printer and apply it in a surgical operation.

  9. Fast evolution of image manifolds and application to filtering and segmentation in 3D medical images.

    PubMed

    Deschamps, Thomas; Malladi, Ravi; Ravve, Igor

    2004-01-01

    In many instances, numerical integration of scale-space PDEs is the most time-consuming operation of image processing, because the scale step is limited by the conditional stability of explicit schemes. In this work, we introduce an unconditionally stable semi-implicit linearized difference scheme, fashioned after the additive operator splitting (AOS) scheme [1], [2], for the Beltrami flow and the subjective surface computation. The Beltrami flow [3], [4], [5] is one of the most effective denoising algorithms in image processing. For gray-level images, we show that the flow equation can be arranged in an advection-diffusion form, revealing the edge-enhancing properties of this flow and suggesting the application of the AOS method for faster convergence. The subjective surface [6] deals with constructing a perceptually meaningful interpretation from partial image data by mimicking the human visual system. However, initialization of the surface is critical for the final result, and its main drawbacks are very slow convergence and the huge number of iterations required. In this paper, we first show that the governing equation for the subjective surface flow can be rearranged in an AOS implementation, providing a near real-time solution to the shape completion problem in 2D and 3D. Then, we devise a new initialization paradigm in which we first "condition" the viewpoint surface using the Fast Marching algorithm. We compare the original method with our new algorithm on several real 3D medical images, demonstrating the improvement achieved.
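
    The AOS idea, stripped to a linear-diffusion sketch, is shown below: each direction is solved implicitly with a tridiagonal system, and the directional results are averaged, so the scheme stays stable for arbitrarily large scale steps. This is not the Beltrami or subjective-surface flow itself, only the splitting structure they share.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_diffuse_1d(rows, tau):
    """Solve (I - m*tau*A) u = u0 along the last axis, where A is the 1D
    Laplacian with reflecting boundaries and m is the number of split directions."""
    n = rows.shape[-1]
    m = 2.0                             # two spatial directions in the split
    ab = np.zeros((3, n))
    ab[0, 1:] = -m * tau                # super-diagonal
    ab[2, :-1] = -m * tau               # sub-diagonal
    ab[1, :] = 1.0 + 2.0 * m * tau
    ab[1, 0] = ab[1, -1] = 1.0 + m * tau   # reflecting boundary rows
    flat = rows.reshape(-1, n)
    out = np.stack([solve_banded((1, 1), ab, row) for row in flat])
    return out.reshape(rows.shape)

def aos_step(u, tau):
    """One AOS step: average of independent 1D implicit solves along
    rows and columns; unconditionally stable in tau."""
    ux = implicit_diffuse_1d(u, tau)
    uy = implicit_diffuse_1d(u.T, tau).T
    return 0.5 * (ux + uy)

rng = np.random.default_rng(5)
noisy = rng.random((32, 32))
smoothed = aos_step(noisy, tau=5.0)     # a step size an explicit scheme could not take
```

    Because each tridiagonal solve is O(n), one AOS step costs the same as a few explicit steps while covering a much larger scale interval.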

  10. MS lesion segmentation using a multi-channel patch-based approach with spatial consistency

    NASA Astrophysics Data System (ADS)

    Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit

    2015-03-01

    This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008. Results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.
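
    The patch-retrieval and label-fusion core of such a method can be sketched as a k-nearest-neighbor vote over a patch database. The data below are synthetic stand-ins (flattened multi-channel patches with binary lesion labels), and the spatial-consistency refinement stage is not reproduced.

```python
import numpy as np

def knn_label_fusion(patch, db_patches, db_labels, k=5):
    """Retrieve the k database patches closest (L2) to the query patch and
    average their center labels into a soft lesion probability."""
    d = np.linalg.norm(db_patches - patch, axis=1)
    nearest = np.argsort(d)[:k]
    return db_labels[nearest].mean()

# Toy database: bright patches are "lesion" (label 1), dark ones are not.
rng = np.random.default_rng(6)
db_patches = rng.normal(0, 1, (500, 27))            # e.g. flattened 3x3x3 patches
db_labels = (db_patches.mean(axis=1) > 0).astype(float)
query_lesion = np.full(27, 1.5) + rng.normal(0, 0.1, 27)
query_normal = np.full(27, -1.5) + rng.normal(0, 0.1, 27)
p_lesion = knn_label_fusion(query_lesion, db_patches, db_labels)
p_normal = knn_label_fusion(query_normal, db_patches, db_labels)
```

    Running this vote at every voxel yields the initial segmentation map, which the paper then refines iteratively for spatial consistency.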

  11. A Web simulation of medical image reconstruction and processing as an educational tool.

    PubMed

    Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos

    2015-02-01

    Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee in the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. An initial feedback from a group of graduate medical students showed that the developed course was considered as effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.

  12. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique that can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time-delay integration. Range-gated imaging gives the range of targets and enables silhouette detection, which directly extracts targets from a complex background and decreases the complexity of moving-target image processing. Time-delay integration increases the information in a single frame so that the motion trajectory can be obtained directly. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments that successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can yield the motion parameters of moving targets.
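
    The integration step can be illustrated by collapsing a gated frame sequence into one trajectory image. A per-pixel maximum projection is used here as a simple stand-in for the time-delay integration described above, on a synthetic target whose background has already been removed by silhouette detection.

```python
import numpy as np

def trajectory_image(frames):
    """Collapse a range-gated frame stack into a single trajectory image
    by per-pixel maximum projection over time."""
    return frames.max(axis=0)

# Synthetic falling target: one bright pixel per gated frame, dark background.
frames = np.zeros((10, 32, 32))
for k in range(10):
    frames[k, 2 + 3 * k, 4 + 2 * k] = 1.0

traj = trajectory_image(frames)
```

    The single output frame contains every target position at once, so motion parameters (direction, spacing between successive positions, hence speed) can be read directly from it.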

  13. CAD/CAM guided surgery in implant dentistry. A review of software packages and step-by-step protocols for planning surgical guides.

    PubMed

    Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta

    2014-01-01

    Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 30 imaging can be a complex and daunting process initially. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GST) fabricated from them. A step-by-step method of interpreting and ordering a GST to simplify the process of the surgical planning and implant placement is discussed.

  14. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications

    PubMed Central

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441
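
    The cell-correlation core of the CMC idea can be sketched as follows. This toy version uses only one of the four identification parameters (the cell correlation value) on pre-registered synthetic images; the translation and phase-angle search of the real method is omitted, and the `threshold` value is illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size cells."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def congruent_cells(img_a, img_b, cell=8, threshold=0.6):
    """Split both images into cells and count pairs whose correlation at the
    registered position exceeds the threshold."""
    h, w = img_a.shape
    count = 0
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            if ncc(img_a[i:i+cell, j:j+cell], img_b[i:i+cell, j:j+cell]) >= threshold:
                count += 1
    return count

rng = np.random.default_rng(7)
mark = rng.random((32, 32))                       # breech face impression
same = mark + rng.normal(0, 0.05, mark.shape)     # same firearm, slight noise
different = rng.random((32, 32))                  # different firearm
cmc_same = congruent_cells(mark, same)
cmc_diff = congruent_cells(mark, different)
```

    A high congruent-cell count supports an identification; impressions from different firearms produce few or no congruent cells.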

  15. The Impact of a New Speckle Holography Analysis on the Galactic Center Orbits Initiative

    NASA Astrophysics Data System (ADS)

    Mangian, John; Ghez, Andrea; Gautam, Abhimat; Gallego, Laly; Schödel, Rainer; Lu, Jessica; Chen, Zhuo; UCLA Galactic Center Group; W.M. Keck Observatory Staff

    2018-01-01

    The Galactic Center Orbits Initiative has used two decades of high angular resolution imaging data from the W. M. Keck Observatory to make astrometric measurements of stellar motion around our Galaxy's central supermassive black hole. We present results from ten years of speckle imaging data (1995-2005) that have been reprocessed with a new holography analysis. This analysis has (1) improved the image quality near the edge of the combined speckle frame and (2) increased the depth of the images, and therefore the number of sources detected throughout the entire image. By directly comparing the two holography analyses, we find a 41% increase in total detected sources and an 81% increase in sources farther than 3" from the central black hole (Sgr A*). Further, we find a 49% increase in sources fainter than the old holography limiting K-band magnitude, owing to the reduction of light halos surrounding bright sources.

  16. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications.

    PubMed

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation.

  17. IEEE International Symposium on Biomedical Imaging.

    PubMed

    2017-01-01

    The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative of the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials and a scientific program composed of plenary talks, invited special sessions, challenges, and oral and poster presentations of peer-reviewed papers. High-quality papers are solicited containing original contributions to the topics of interest, including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the IEEE symposium proceedings and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.

  18. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort by industry, government, and academic representatives to define an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high-performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices, and tensors) and is well suited to astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine through the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.

  19. Enhancing the far-ultraviolet sensitivity of silicon complementary metal oxide semiconductor imaging arrays

    NASA Astrophysics Data System (ADS)

    Retherford, Kurt D.; Bai, Yibin; Ryu, Kevin K.; Gregory, James A.; Welander, Paul B.; Davis, Michael W.; Greathouse, Thomas K.; Winters, Gregory S.; Suntharalingam, Vyshnavi; Beletic, James W.

    2015-10-01

    We report our progress toward optimizing backside-illuminated silicon P-type intrinsic N-type complementary metal oxide semiconductor devices developed by Teledyne Imaging Sensors (TIS) for far-ultraviolet (UV) planetary science applications. This project was motivated by initial measurements at Southwest Research Institute of the far-UV responsivity of backside-illuminated silicon PIN photodiode test structures, which revealed a promising QE in the 100 to 200 nm range. Our effort to advance the capabilities of thinned silicon wafers capitalizes on recent innovations in molecular beam epitaxy (MBE) doping processes. Key achievements to date include the following: (1) representative silicon test wafers were fabricated by TIS, and set up for MBE processing at MIT Lincoln Laboratory; (2) preliminary far-UV detector QE simulation runs were completed to aid MBE layer design; (3) detector fabrication was completed through the pre-MBE step; and (4) initial testing of the MBE doping process was performed on monitoring wafers, with detailed quality assessments.

  20. INITIATION PROCESSES FOR THE 2013 MAY 13 X1.7 LIMB FLARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Jinhua; Wang, Ya; Zhou, Tuanhui

    2017-01-20

    For the X1.7-class flare on 2013 May 13 (SOL2013-05-13T01:53), the initiation process was well observed by the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory and the Extreme UltraViolet Imager (EUVI) on board STEREO-B. The initiation process incorporates the following phenomena: an X-ray precursor that started ∼9 minutes before flare onset, two hot magnetic loops (as seen with AIA hot channels) forming a sigmoidal core magnetic structure (as seen with the EUVI), a rapidly formed magnetic flux rope (MFR) that expands outward, and a flare loop that contracts inward. The two hot magnetic loops were activated after the occurrence of the X-ray precursor. After activation, magnetic reconnection occurred between the two hot magnetic loops (inside the sigmoid structure), which produced the expanding MFR and the contracting flare loop (CFL). The MFR and CFL can only be seen with AIA hot and cool channels, respectively. For this flare, the real initiation time can be regarded as the starting time of the precursor, and the impulsive phase started when the MFR began its fast expansion. In addition, the CFL and the growing postflare magnetic loops are different loop systems, and the CFL was the product of magnetic reconnection between sheared magnetic fields that also produced the MFR.

  1. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular kind of template matching. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each pixel of the distance image stores the distance from that pixel to the nearest point on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Using the similarity between triangles, which is invariant to rotation, translation, and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation, and scaling) parameters mapping the searching contour to the template contour. To speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple additions and multiplications. In the fine searching process, the initial RST parameters are discretized and refined to obtain the final, accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
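
    The pose-verification step via the distance image can be sketched directly. This is a minimal illustration on a synthetic square contour, assuming the contour points are already transformed by the candidate RST parameters; the triple-generation and coarse search stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_image(contour_mask):
    """Each pixel holds the distance to the nearest template-contour pixel."""
    return distance_transform_edt(~contour_mask)

def mean_contour_distance(points, dist_img):
    """Score a candidate pose: project the transformed contour points into the
    distance image and average the looked-up distances (nearest-pixel lookup)."""
    r = np.clip(np.round(points[:, 0]).astype(int), 0, dist_img.shape[0] - 1)
    c = np.clip(np.round(points[:, 1]).astype(int), 0, dist_img.shape[1] - 1)
    return dist_img[r, c].mean()

# Template contour: a square outline in a 64x64 image.
mask = np.zeros((64, 64), dtype=bool)
mask[16, 16:48] = mask[47, 16:48] = True
mask[16:48, 16] = mask[16:48, 47] = True
dist = distance_image(mask)

square = np.argwhere(mask).astype(float)
good = mean_contour_distance(square, dist)            # perfectly aligned pose
shifted = mean_contour_distance(square + 5.0, dist)   # mis-registered pose
```

    Because the distance image is precomputed offline, each candidate pose is scored with only lookups and an average, which is what makes the verification step fast enough for real-time use.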

  2. Digital image processing for information extraction.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
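    One of the camera anomalies mentioned, vignetting together with nonuniform intensity response, is commonly removed by flat-field division; a minimal sketch (the synthetic frames and the mean-preserving normalization are illustrative assumptions, not Billingsley's procedure):

```python
import numpy as np

def flat_field_correct(raw, flat, eps=1e-9):
    """Divide out the sensor's spatial response, preserving mean brightness."""
    gain = flat / (flat.mean() + eps)
    return raw / (gain + eps)

# synthetic example: a uniform scene of brightness 100, radially darkened
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * w)
vignette = 1.0 - 0.4 * r2            # response falls off toward the corners
raw = 100.0 * vignette               # what the camera records
flat = vignette.copy()               # flat-field frame of a uniform target
corrected = flat_field_correct(raw, flat)
print(corrected.std() < raw.std())   # corrected frame is far more uniform
```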

  3. Bouguer images of the North American craton and its structural evolution

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bowring, S.; Eddy, M.; Guinness, E.; Leff, C.; Bindschadler, D.

    1984-01-01

    Digital image processing techniques have been used to generate Bouguer images of the North American craton that display more of the granularity inherent in the data as compared with existing contour maps. A dominant NW-SE linear trend of highs and lows can be seen extending from South Dakota, through Nebraska, and into Missouri. The structural trend cuts across the major Precambrian boundary in Missouri, separating younger granites and rhyolites from older sheared granites and gneisses. This trend is probably related to features created during an early and perhaps initial episode of crustal assembly by collisional processes. The younger granitic materials are probably a thin cover over an older crust.

  4. Land classification of south-central Iowa from computer enhanced images

    NASA Technical Reports Server (NTRS)

    Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes, because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low cost photographic processing systems for color printing have proved to be effective in the utilization of computer enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black and white photo lab. The technical expertise can be acquired from reading a color printing and processing manual.

  5. Rapid trench initiated recrystallization and stagnation in narrow Cu interconnect lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Brendan B.; Rizzolo, Michael; Prestowitz, Luke C.

    2015-10-26

    Understanding and ultimately controlling the self-annealing of Cu in narrow interconnect lines has remained a top priority in order to continue down-scaling of back-end of the line interconnects. Recently, it was hypothesized that a bottom-up microstructural transformation process in narrow interconnect features competes with the surface-initiated overburden transformation. Here, a set of transmission electron microscopy images which captures the grain coarsening process in 48 nm lines in a time resolved manner is presented, supporting such a process. Grain size measurements taken from these images have demonstrated that the Cu microstructural transformation in 48 nm interconnect lines stagnates after only 1.5 h at room temperature. This stubborn metastable structure remains stagnant, even after aggressive elevated temperature anneals, suggesting that a limited internal energy source such as dislocation content is driving the transformation. As indicated by the extremely low defect density found in 48 nm trenches, a rapid recrystallization process driven by annihilation of defects in the trenches appears to give way to a metastable microstructure in the trenches.

  6. From nociception to pain perception: imaging the spinal and supraspinal pathways

    PubMed Central

    Brooks, Jonathan; Tracey, Irene

    2005-01-01

    Functional imaging techniques have allowed researchers to look within the brain, and revealed the cortical representation of pain. Initial experiments, performed in the early 1990s, revolutionized pain research, as they demonstrated that pain was not processed in a single cortical area, but in several distributed brain regions. Over the last decade, the roles of these pain centres have been investigated and a clearer picture has emerged of the medial and lateral pain system. In this brief article, we review the imaging literature to date that has allowed these advances to be made, and examine the new frontiers for pain imaging research: imaging the brainstem and other structures involved in the descending control of pain; functional and anatomical connectivity studies of pain processing brain regions; imaging models of neuropathic pain-like states; and going beyond the brain to image spinal function. The ultimate goal of such research is to take these new techniques into the clinic, to investigate and provide new remedies for chronic pain sufferers. PMID:16011543

  7. Road extraction from aerial images using a region competition algorithm.

    PubMed

    Amo, Miriam; Martínez, Fernando; Torre, Margarita

    2006-05-01

    In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed to find out whether it is necessary to add more initial points, and this analysis is based on image information. The algorithm obtains not only the road centerline but also the road sides. An initial simple model is deformed by region growing techniques to obtain a rough road approximation. This model is then refined by region competition. The result of this approach is that it delivers the simplest output vector information, fully recovering the road details as they appear in the image, without performing any kind of symbolization. Therefore, we refine a general road model by using a reliable method to detect transitions between regions. This method is proposed in order to obtain information for feeding large-scale Geographic Information Systems.
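    The rough-approximation stage, growing a region outward from a user-supplied seed point, can be sketched as follows (a generic region-growing sketch, not the authors' exact algorithm; the intensity tolerance and 4-connectivity are assumptions):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# toy image: a bright "road" stripe (value 200) on dark ground (value 50)
img = np.full((20, 20), 50.0)
img[:, 8:12] = 200.0
mask = region_grow(img, seed=(10, 10), tol=20.0)
print(mask.sum())   # 80: the whole 20x4 stripe, none of the ground
```

Region competition would then iteratively move the boundary of this rough mask to the statistically most likely transition between road and background.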

  8. a Fast Approach for Stitching of Aerial Images

    NASA Astrophysics Data System (ADS)

    Moussa, A.; El-Sheimy, N.

    2016-06-01

    The last few years have witnessed an increasing volume of aerial image data because of the extensive improvements of Unmanned Aerial Vehicles (UAVs). These newly developed UAVs have led to a wide variety of applications. A fast assessment of the achieved coverage and overlap of the acquired images of a UAV flight mission is of great help to save the time and cost of the subsequent steps. A fast automatic stitching of the acquired images can help to visually assess the achieved coverage and overlap during the flight mission. This paper proposes an automatic image stitching approach that creates a single overview stitched image using the images acquired during a UAV flight mission, along with a coverage image that represents the count of overlaps between the acquired images. The main challenge of such a task is the huge number of images typically involved: a short flight mission acquiring one image per second can capture hundreds to thousands of images. The main focus of the proposed approach is to reduce the processing time of the image stitching procedure by exploiting the initial knowledge about the image positions provided by the navigation sensors. The proposed approach also avoids solving for the transformation parameters of all the photos simultaneously, which would otherwise require a long computation time. After extracting the points of interest of all the involved images using the Scale-Invariant Feature Transform (SIFT) algorithm, the proposed approach uses the initial image coordinates to build an incremental constrained Delaunay triangulation that represents the neighborhood of each image. This triangulation makes it possible to match only neighboring images and therefore reduces the time-consuming feature matching step. The estimated relative orientation between the matched images is used to find a candidate seed image for the stitching process.
The pre-estimated transformation parameters of the images are employed successively in a growing fashion to create the stitched image and the coverage image. The proposed approach is implemented and tested using the images acquired through a UAV flight mission and the achieved results are presented and discussed.
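    The core time-saver, matching features only between images that are neighbours according to their navigation-derived positions, can be illustrated as below (a simplified sketch that substitutes a plain distance threshold for the paper's constrained Delaunay triangulation; the coordinates are made up):

```python
import numpy as np
from itertools import combinations

def neighbour_pairs(positions, max_dist):
    """Return index pairs of images whose a-priori positions are close enough
    to plausibly overlap; only these pairs go through SIFT matching."""
    pos = np.asarray(positions, dtype=float)
    pairs = []
    for i, j in combinations(range(len(pos)), 2):
        if np.linalg.norm(pos[i] - pos[j]) <= max_dist:
            pairs.append((i, j))
    return pairs

# four camera positions along a flight line, 30 m apart
positions = [(0, 0), (30, 0), (60, 0), (90, 0)]
pairs = neighbour_pairs(positions, max_dist=35.0)
print(pairs)   # [(0, 1), (1, 2), (2, 3)] -- adjacent images only
print(len(pairs), "of", 4 * 3 // 2, "possible pairs need feature matching")
```

With thousands of images the all-pairs count grows quadratically, so pruning to physical neighbours is where most of the saving comes from.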

  9. Visual grading analysis of digital neonatal chest phantom X-ray images: Impact of detector type, dose and image processing on image quality.

    PubMed

    Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L

    2018-07-01

    To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms were evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05), whereas the PIP detector had significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.

  10. Vicinal light inspection of translucent materials

    DOEpatents

    Burns, George R [Albuquerque, NM; Yang, Pin [Albuquerque, NM

    2010-01-19

    The present invention includes methods and apparatus for inspecting vicinally illuminated non-patterned areas of translucent materials. An initial image of the material is received. A second image is received following a relative translation between the material being inspected and a device generating the images. Each vicinally illuminated image includes a portion having optimal illumination, that can be extracted and stored in a composite image of the non-patterned area. The composite image includes aligned portions of the extracted image portions, and provides a composite having optimal illumination over a non-patterned area of the material to be inspected. The composite image can be processed by enhancement and object detection algorithms, to determine the presence of, and characterize any inhomogeneities present in the material.
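    The compositing idea, keeping only the optimally illuminated portion of each frame as the part translates under the scanner, can be sketched as follows (an illustrative sketch, not the patent's method; the per-column brightness score and the synthetic frames are assumptions):

```python
import numpy as np

def composite_best_lit(frames, positions, width):
    """Build a composite strip: for each column of the inspected area, keep
    the pixels from the frame in which that column was best illuminated."""
    out = np.zeros((frames[0].shape[0], width))
    best = np.full(width, -np.inf)
    for frame, x0 in zip(frames, positions):
        for c in range(frame.shape[1]):
            col = x0 + c
            if 0 <= col < width:
                score = frame[:, c].mean()     # simple illumination score
                if score > best[col]:
                    best[col] = score
                    out[:, col] = frame[:, c]
    return out

# two overlapping 8-column frames of a 12-column area; vicinal lighting makes
# each frame brightest near its own centre and dimmer toward its edges
cols = np.arange(8)
frame = np.tile(100.0 - 20.0 * np.abs(cols - 4), (4, 1))
composite = composite_best_lit([frame, frame.copy()], positions=[0, 4], width=12)
print(composite[0, 4], composite[0, 7])   # 100.0 80.0: best-lit columns win
```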

  11. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the three components of the original color image are scrambled and diffused with sequences generated by Chen's hyper-chaotic system. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.
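    The scrambling principle, reordering pixels with a chaos-generated sequence that is extremely sensitive to its initial key, can be illustrated classically (a logistic map stands in here for Chen's hyper-chaotic system, and the whole sketch is a classical analogue, not the quantum algorithm):

```python
import numpy as np

def chaotic_permutation(n, x0, r=3.99):
    """Iterate the logistic map from key x0 and rank the sequence
    to obtain a pixel permutation."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return np.argsort(xs)

def scramble(img, key):
    flat = img.ravel()
    perm = chaotic_permutation(flat.size, key)
    return flat[perm].reshape(img.shape), perm

def unscramble(scrambled, perm):
    flat = np.empty_like(scrambled.ravel())
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc, perm = scramble(img, key=0.3141592653)
dec = unscramble(enc, perm)
print(np.array_equal(dec, img))        # True: the correct key recovers the image
# a key differing only in the 10th decimal yields a different permutation
perm2 = chaotic_permutation(64, 0.3141592654)
print(np.array_equal(perm, perm2))
```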

  12. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    PubMed

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  13. The development of a digitising service centre for natural history collections

    PubMed Central

    Tegelberg, Riitta; Haapala, Jaana; Mononen, Tero; Pajari, Mika; Saarenmaa, Hannu

    2012-01-01

    Abstract Digitarium is a joint initiative of the Finnish Museum of Natural History and the University of Eastern Finland. It was established in 2010 as a dedicated shop for the large-scale digitisation of natural history collections. Digitarium offers service packages based on the digitisation process, including tagging, imaging, data entry, georeferencing, filtering, and validation. During the process, all specimens are imaged, and distance workers take care of the data entry from the images. The customer receives the data in Darwin Core Archive format, as well as images of the specimens and their labels. Digitarium also offers the option of publishing images through Morphbank, sharing data through GBIF, and archiving data for long-term storage. Service packages can also be designed on demand to respond to the specific needs of the customer. The paper also discusses logistics, costs, and intellectual property rights (IPR) issues related to the work that Digitarium undertakes. PMID:22859879

  14. High resolution imaging of latent fingerprints by localized corrosion on brass surfaces.

    PubMed

    Goddard, Alex J; Hillman, A Robert; Bond, John W

    2010-01-01

    The Atomic Force Microscope (AFM) is capable of imaging fingerprint ridges on polished brass substrates at an unprecedented level of detail. While exposure to elevated humidity at ambient or slightly raised temperatures does not change the image appreciably, subsequent brief heating in a flame results in complete loss of the sweat deposit and the appearance of pits and trenches. Localized elemental analysis (using EDAX, coupled with SEM imaging) shows the presence of the constituents of salt in the initial deposits. Together with water and atmospheric oxygen--and with thermal enhancement--these are capable of driving a surface corrosion process. This process is sufficiently localized that it has the potential to generate a durable negative topographical image of the fingerprint. AFM examination of surface regions between ridges revealed small deposits (probably microscopic "spatter" of sweat components or transferred particulates) that may ultimately limit the level of ridge detail analysis.

  15. Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database

    PubMed Central

    2017-01-01

    Image encryption technology is one of the main means to ensure the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In the proposed scheme, a chaotic sequence scrambles the image pixel locations, and a hyperchaotic map is established that links quaternary sequences with DNA sequences; the pixel gray values are then transformed through the logic of transformations between DNA sequences. The bases are replaced according to displacement rules by using DNA coding over a number of iterations driven by an enhanced quaternary hyperchaotic sequence generated by Chen chaos. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption performance but also effectively resists chosen-plaintext, statistical, and differential attacks. PMID:28392799
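    The DNA-coding layer can be illustrated concretely: each byte splits into four 2-bit pairs mapped to bases, after which base-level rules transform the data (a sketch of the general DNA-coding idea; the specific encoding rule and the complement transformation are assumptions for illustration, not the paper's exact rule set):

```python
# One of the eight standard DNA coding rules: 00->A, 01->C, 10->G, 11->T
ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {v: k for k, v in ENCODE.items()}
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def byte_to_dna(b):
    """Split a byte into four 2-bit pairs (MSB first) and map them to bases."""
    return "".join(ENCODE[(b >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_to_byte(s):
    b = 0
    for base in s:
        b = (b << 2) | DECODE[base]
    return b

pixel = 198                                  # a gray value, 0b11000110
dna = byte_to_dna(pixel)
print(dna)                                   # TACG
# applying the base-complement rule is equivalent to a bitwise NOT of the byte
flipped = "".join(COMPLEMENT[c] for c in dna)
print(flipped, dna_to_byte(flipped))         # ATGC 57  (57 == 255 - 198)
```

In the full scheme the choice of rule and the base substitutions would be driven per iteration by the hyperchaotic sequence rather than fixed as here.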

  16. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

  17. A Spatiotemporal Clustering Approach to Maritime Domain Awareness

    DTIC Science & Technology

    2013-09-01

    [25] M. E. Celebi, "Effective initialization of k-means for color quantization," 16th IEEE International Conference on Image Processing (ICIP). Spatiotemporal clustering is the process of grouping...

  18. US of Right Upper Quadrant Pain in the Emergency Department: Diagnosing beyond Gallbladder and Biliary Disease.

    PubMed

    Joshi, Gayatri; Crawford, Kevin A; Hanna, Tarek N; Herr, Keith D; Dahiya, Nirvikar; Menias, Christine O

    2018-01-01

    Acute cholecystitis is the most common diagnosable cause for right upper quadrant abdominal (RUQ) pain in patients who present to the emergency department (ED). However, over one-third of patients initially thought to have acute cholecystitis actually have RUQ pain attributable to other causes. Ultrasonography (US) is the primary imaging modality of choice for initial imaging assessment and serves as a fast, cost-effective, and dynamic modality to provide a definitive diagnosis or a considerably narrowed list of differential possibilities. Multiple organ systems are included at standard RUQ US, and a variety of ultrasonographically diagnosable disease processes can be identified, including conditions of hepatic, pancreatic, adrenal, renal, gastrointestinal, vascular, and thoracic origin, all of which may result in RUQ pain. In certain cases, subsequent computed tomography, magnetic resonance (MR) imaging, MR cholangiopancreatography, or cholescintigraphy may be considered, depending on the clinical situation and US findings. Familiarity with the spectrum of disease processes outside of the gallbladder and biliary tree that may manifest with RUQ pain and recognition at US of these alternative conditions is pivotal for early diagnosis and appropriate management. Diagnosis at the time of initial US can reduce unnecessary imaging and its consequences, including excess cost, radiation exposure, nephrotoxic contrast medium use, and time to diagnosis, thereby translating into improved patient care and outcome. This article (a) reviews the causes of RUQ pain identifiable at US using an organ-system approach, (b) illustrates the US appearance of select conditions from each organ system with multimodality imaging correlates, and (c) discusses the relevant pathophysiology and treatment of these entities to aid in efficient direction of management. Online supplemental material is available for this article. © RSNA, 2018.

  19. Deep learning classifier with optical coherence tomography images for early dental caries detection

    NASA Astrophysics Data System (ADS)

    Karimian, Nima; Salehi, Hassan S.; Mahdian, Mina; Alnajjar, Hisham; Tadinada, Aditya

    2018-02-01

    Dental caries is a microbial disease that results in localized dissolution of the mineral content of dental tissue. Despite considerable decline in the incidence of dental caries, it remains a major health problem in many societies. Early detection of incipient lesions at initial stages of demineralization can result in the implementation of non-surgical preventive approaches to reverse the demineralization process. In this paper, we present a novel approach combining deep convolutional neural networks (CNN) and the optical coherence tomography (OCT) imaging modality for classification of human oral tissues to detect early dental caries. OCT images of oral tissues with various densities were input to a CNN classifier to determine variations in tissue densities resembling the demineralization process. The CNN automatically learns a hierarchy of increasingly complex features and a related classifier directly from training data sets. The initial CNN layer parameters were randomly selected. The training set is split into minibatches, with 10 OCT images per batch. Given a batch of training patches, the CNN employs two convolutional and pooling layers to extract features and then classifies each patch based on the probabilities from the SoftMax classification layer (output layer). Afterward, the CNN calculates the error between the classification result and the reference label, and then utilizes the backpropagation process to fine-tune all the layer parameters to minimize this error using the batch gradient descent algorithm. We validated our proposed technique on ex vivo OCT images of human oral tissues (enamel, cortical bone, trabecular bone, muscular tissue, and fatty tissue), which attested to the effectiveness of the proposed method.
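    The final stage of the training loop, SoftMax probabilities, classification error, and batch gradient descent, can be sketched for the classifier layer alone (a NumPy sketch of a linear softmax layer on toy features, standing in for the authors' full CNN; the data and hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# toy "features": two well-separated classes in 5-D, standing in for the
# outputs of the convolutional/pooling layers
n, d, k = 40, 5, 2
X = np.vstack([rng.normal(0, 1, (n // 2, d)) + 2,
               rng.normal(0, 1, (n // 2, d)) - 2])
y = np.array([0] * (n // 2) + [1] * (n // 2))
Y = np.eye(k)[y]                            # one-hot reference labels

W = np.zeros((d, k))
for _ in range(200):                        # batch gradient descent
    P = softmax(X @ W)
    grad = X.T @ (P - Y) / n                # gradient of the cross-entropy error
    W -= 0.5 * grad                         # learning rate 0.5
pred = softmax(X @ W).argmax(axis=1)
print((pred == y).mean())                   # high accuracy on this separable toy set
```

Backpropagation in the full CNN applies the same error gradient, propagated through the pooling and convolutional layers, to every layer's parameters.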

  20. Fish freshness estimation using eye image processing under white and UV lightings

    NASA Astrophysics Data System (ADS)

    Kanamori, Katsuhiro; Shirataki, Yuri; Liao, Qiuhong; Ogawa, Yuichi; Suzuki, Tetsuhito; Kondo, Naoshi

    2017-05-01

    A non-destructive method of estimating the freshness of fish is required for appropriate price setting and food safety. In particular, freshness estimation is critical for determining whether fish can be eaten raw (as sashimi). We studied such an estimation method by capturing images of fish eyes and performing image processing based on the temporal changes in the luminance of the pupil and iris. To detect subtle, non-visible changes in these features, we used UV (375 nm) illumination in addition to visible white light illumination. Polarization and two-channel LED techniques were used to remove strong specular reflection from the cornea of the eye and from the clear plastic wrap used to cover the fish to maintain humidity. Pupil and iris regions were automatically detected separately by image processing after the specular reflection removal process, and two types of eye contrast were defined as the ratios of the mean and median pixel values of the two regions. Experiments using 16 Japanese dace (Tribolodon hakonensis) at 23 °C and 85% humidity for 24 hours were performed. The eye contrast of raw fish increased non-linearly in the initial period and then decreased; that of frozen-thawed fish decreased linearly throughout the 24 hours, regardless of the lighting. Interestingly, the eye contrast under UV light showed a higher correlation with time than that under white light only in the case of raw fish within the early 6-hour period postmortem. These results show the possibility of using white and UV lighting to estimate fish freshness in the initial stage, when fish can still be eaten raw.
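    The contrast features reduce to simple pixel statistics over the segmented regions (a sketch; the region masks, toy gray values, and the exact pupil/iris ratio definition are assumptions based on the abstract's wording):

```python
import numpy as np

def eye_contrast(img, pupil_mask, iris_mask):
    """Two contrast features from segmented pupil and iris regions:
    the ratio of the regions' mean values and the ratio of their medians."""
    mean_ratio = img[pupil_mask].mean() / img[iris_mask].mean()
    median_ratio = np.median(img[pupil_mask]) / np.median(img[iris_mask])
    return mean_ratio, median_ratio

# toy eye: dark pupil (value 30) inside a brighter iris (value 120)
img = np.full((32, 32), 200.0)                 # sclera / background
yy, xx = np.mgrid[0:32, 0:32]
r = np.hypot(yy - 16, xx - 16)
pupil_mask = r < 4
iris_mask = (r >= 4) & (r < 10)
img[iris_mask] = 120.0
img[pupil_mask] = 30.0
print(eye_contrast(img, pupil_mask, iris_mask))   # (0.25, 0.25)
```

Tracking these two ratios over time, under white and UV lighting separately, yields the freshness curves the study describes.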

  1. A cost analysis comparing xeroradiography to film technics for intraoral radiography.

    PubMed

    Gratt, B M; Sickles, E A

    1986-01-01

    In the United States during 1978, $730 million was spent on dental radiographic services. Currently there are three alternatives for the processing of intraoral radiographs: manual wet-tanks, automatic film units, or xeroradiography. It was the intent of this study to determine which processing system is the most economical. Cost estimates were based on a usage rate of 750 patient images per month and included a calculation of the average cost per radiograph over a five-year period. Capital costs included initial processing equipment and site preparation. Operational costs included labor, supplies, utilities, darkroom rental, and breakdown costs. Clinical time trials were employed to measure examination times. Maintenance logs were employed to assess labor costs. Indirect costs of training were estimated. Results indicated that xeroradiography was the most cost effective ($0.81 per image) compared to either automatic film processing ($1.14 per image) or manual processing ($1.35 per image). Variations in projected costs indicated that if a dental practice performs primarily complete-mouth surveys, exposes less than 120 radiographs per month, and pays less than $6.50 per hour in wages, then manual (wet-tank) processing is the most economical method for producing intraoral radiographs.
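    The per-image figure combines amortised capital and running costs over the five-year horizon; the arithmetic can be sketched (the dollar inputs below are hypothetical placeholders; only the 750 images/month rate and the 5-year period come from the abstract):

```python
def cost_per_image(capital, monthly_operational, images_per_month=750, years=5):
    """Average cost per radiograph: amortised capital plus running costs,
    divided by total images produced over the period."""
    months = years * 12
    total_cost = capital + monthly_operational * months
    return total_cost / (images_per_month * months)

# hypothetical inputs for one processing system
print(round(cost_per_image(capital=9000, monthly_operational=450), 2))  # 0.8
```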

  2. Changing job seekers' image perceptions during recruitment visits: the moderating role of belief confidence.

    PubMed

    Slaughter, Jerel E; Cable, Daniel M; Turban, Daniel B

    2014-11-01

    The purpose of this study was to understand how an important construct in social psychology, confidence in one's beliefs, could both (a) influence the effectiveness of organizations' recruiting processes and (b) be changed during recruitment. Using a sample of recruits to a branch of the United States military, the authors studied belief confidence before and after recruits' formal visits to the organization's recruiting stations. Personal sources of information had a stronger influence on recruits' belief confidence than impersonal sources. Moreover, recruits' confidence in their initial beliefs affected how perceptions of the recruiter changed their employer images. Among participants with low initial confidence, the relation between recruitment experiences and employer images was positive and linear across the whole range of recruitment experiences. Among recruits with high initial confidence, however, the recruitment experience-image relationship was curvilinear, such that recruitment experiences were related to images only at more positive recruitment experiences. The relationship between recruitment experiences and changes in belief confidence was also curvilinear, such that only more positive recruitment experiences led to changes in confidence. These results indicate not only that belief confidence influences the effectiveness of recruiting efforts but also that recruiting efforts can influence belief confidence. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. 3D/2D image registration using weighted histogram of gradient directions

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three dimensional (3D) to two dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to +/- 90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
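    The core feature, a histogram of gradient directions in which each pixel votes with a weight equal to its gradient magnitude, can be computed as follows (a sketch of the general descriptor; the bin count and the central-difference gradients are assumptions, not the paper's exact formulation):

```python
import numpy as np

def weighted_gradient_histogram(img, bins=36):
    """Histogram of gradient directions, each pixel weighted by its
    gradient magnitude so strong edges dominate the descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # direction in (-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

# a vertical step edge: nearly all weighted votes fall near direction 0
img = np.zeros((16, 16))
img[:, 8:] = 1.0
h = weighted_gradient_histogram(img)
print(h.argmax())   # the bin containing angle 0 (gradient points along +x)
```

Comparing such histograms between the DRR and the fluoroscopic image gives a similarity measure that is cheap to evaluate across candidate poses.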

  4. Response of the Pre-Oriented Goal-Directed Attention to Usual and Unusual Distractors: A Preliminary Study

    PubMed Central

    Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza; Raza, Mohsen

    2017-01-01

    Introduction: In this study, we investigated the distraction power of unusual and usual images on the attention of 20 healthy primary school children. Methods: Our study differed from previous ones in that the participants were asked to fix the initial position of their attention on a predefined location after being presented with unusual images as distractors. The goals were presented in locations far from the attraction basin of the distractors. We expected that pre-orienting attention to the position of the targets would reduce the attractive effect of unusual images compared to usual ones. The percentage of correct responses and the reaction time were measured as behavioral indicators of attention performance. Results: Results showed that, using goal-directed attention, subjects ignored both kinds of distractors in nearly the same way. Conclusion: With regard to previous reports of greater attraction toward unusual images, it is suggested that the dynamics of the visual attention system are sensitive to the initial condition. That is, changing the initial position of attention can reduce the effect of unusual images. However, several other possibilities, such as a probable delay in processing unusual features, could explain this observation too. PMID:28540000

  5. Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2015-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.

  6. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features

    PubMed Central

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2017-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone. PMID:29123329

  7. MOSAIC - A space-multiplexing technique for optical processing of large images

    NASA Technical Reports Server (NTRS)

    Athale, Ravindra A.; Astor, Michael E.; Yu, Jeffrey

    1993-01-01

    A technique for Fourier processing of images larger than the space-bandwidth products of conventional or smart spatial light modulators and two-dimensional detector arrays is described. The technique involves a spatial combination of subimages displayed on individual spatial light modulators to form a phase-coherent image, which is subsequently processed with Fourier optical techniques. Because of the technique's similarity to the mosaic technique used in art, the processor is termed an optical MOSAIC processor. The phase accuracy requirements of this system were studied by computer simulation. It was found that phase errors of less than lambda/8 did not degrade the performance of the system and that the system was relatively insensitive to amplitude nonuniformities. Several schemes for implementing the subimage combination are described. Initial experimental results demonstrating the validity of the mosaic concept are also presented.

  8. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on the FPGA while maintaining comparable performance. The system is capable of processing 60 live video frames per second.

  9. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation-corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and with measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations at five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
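The simplest member of the Radon consistency-condition family used in this record is the zeroth-order one: every projection of the same object must integrate to the same total mass. A toy surrogate objective based on that single condition can be sketched as follows (the paper's objective uses fuller consistency information):

```python
import numpy as np

def radon_mass_inconsistency(sinogram):
    """Zeroth-order Radon consistency check.

    Every row of the sinogram (one projection angle) of a single object
    must sum to the same total mass; the variance of the per-angle sums is
    a simple surrogate for the consistency objective minimized during
    attenuation-emission alignment. A sketch, not the paper's full
    condition set.
    """
    sums = sinogram.sum(axis=1)   # one line-integral total per angle
    return float(np.var(sums))

# A consistent sinogram has identical mass at every angle.
consistent = np.ones((180, 128))
# A miscorrected (e.g., misaligned attenuation) sinogram loses mass at
# some angles, which the objective detects.
broken = consistent.copy()
broken[40:60] *= 0.8
```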

  10. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition, in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large existing in-house image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
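The coaddition step this pipeline distributes can be illustrated with a toy, single-machine sketch: registered tiles are pasted at known offsets and overlapping pixels are averaged. Registration is assumed already done, and the function name and offset convention are illustrative assumptions:

```python
import numpy as np

def coadd(images, offsets, out_shape):
    """Coadd partially overlapping images into one mosaic.

    Each image is pasted at its (row, col) offset; pixels covered by more
    than one image are averaged. A toy sketch of the coaddition step the
    Hadoop pipeline performs at survey scale.
    """
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for img, (r, c) in zip(images, offsets):
        h, w = img.shape
        acc[r:r + h, c:c + w] += img
        cnt[r:r + h, c:c + w] += 1
    # average where observed, zero where no image contributed
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

tiles = [np.full((4, 4), 1.0), np.full((4, 4), 3.0)]
mosaic = coadd(tiles, [(0, 0), (0, 2)], (4, 6))
# columns 2-3 are covered by both tiles and average to 2.0
```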

  11. Automatic localization of landmark sets in head CT images with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2016-03-01

    Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings, thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists, but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information, and we have shown in a study with 68 participants that 78% of long-term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large-scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm is a reliable technique to replace the manual step.
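Once corresponding landmarks have been localized in the atlas and the target (here assumed given rather than predicted by the regression forest), the rough initialization is a closed-form rigid fit. A standard Kabsch/Procrustes sketch of that step:

```python
import numpy as np

def rigid_from_landmarks(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst.

    Closed-form Kabsch/Procrustes solution; supplies the rough
    initialization for a subsequent intensity-based registration. This is
    the standard landmark-fitting step, not the paper's forest-based
    localizer itself.
    """
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0] * (src.shape[1] - 1) + [d]) @ U.T
    t = dc - R @ sc
    return R, t

# Recover a known rotation about z plus a translation from 5 landmarks.
rng = np.random.default_rng(3)
src = rng.random((5, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -2.0, 4.0])
dst = src @ R_true.T + t_true
R, t = rigid_from_landmarks(src, dst)
```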

  12. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is an effective method for reducing speckle, but it also blurs the object of interest. The second fold then restores object boundaries and texture with adaptive wavelet fusion. The degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.
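The first-fold step (block-wise wavelet-domain thresholding) can be sketched with a single-level Haar transform. The block size and threshold here are illustrative, not the paper's adaptive choices, and this stand-in omits the second (fusion) fold entirely:

```python
import numpy as np

def haar_block_denoise(img, block=8, thr=0.1, mode="soft"):
    """Block-wise wavelet-domain thresholding (single-level Haar).

    Each non-overlapping block is Haar-transformed, its detail coefficients
    are hard- or soft-thresholded, and the block is inverse-transformed.
    A simplified stand-in for the BHT/BST step described above.
    """
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            b = img[r:r + block, c:c + block].astype(float)
            # forward Haar: columns then rows
            lo = (b[:, 0::2] + b[:, 1::2]) / 2; hi = (b[:, 0::2] - b[:, 1::2]) / 2
            ll = (lo[0::2] + lo[1::2]) / 2; lh = (lo[0::2] - lo[1::2]) / 2
            hl = (hi[0::2] + hi[1::2]) / 2; hh = (hi[0::2] - hi[1::2]) / 2
            for d in (lh, hl, hh):          # threshold details, keep approximation
                if mode == "hard":
                    d[np.abs(d) < thr] = 0.0
                else:                       # soft shrinkage
                    np.copyto(d, np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
            # inverse Haar
            lo = np.repeat(ll, 2, axis=0); lo[0::2] += lh; lo[1::2] -= lh
            hi = np.repeat(hl, 2, axis=0); hi[0::2] += hh; hi[1::2] -= hh
            bb = np.repeat(lo, 2, axis=1); bb[:, 0::2] += hi; bb[:, 1::2] -= hi
            out[r:r + block, c:c + block] = bb
    return out

x = np.random.default_rng(1).random((16, 16))
assert np.allclose(haar_block_denoise(x, thr=0.0), x)  # thr=0 reconstructs exactly
```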

  13. Validation of a rapid, semiautomatic image analysis tool for measurement of gastric accommodation and emptying by magnetic resonance imaging

    PubMed Central

    Dixit, Sudeepa; Fox, Mark; Pal, Anupam

    2014-01-01

    Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as a reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (an estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to the variation in volume measurements seen between three human observers. In conclusion, the image processing platform processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than with manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229

  14. Neutron Imaging for Selective Laser Melting Inconel Hardware with Internal Passages

    NASA Technical Reports Server (NTRS)

    Tramel, Terri L.; Norwood, Joseph K.; Bilheux, Hassina

    2014-01-01

    Additive Manufacturing is showing great promise for the development of new innovative designs and large potential life-cycle cost reductions for the aerospace industry. However, more development work is required to move this technology into space flight hardware production. With selective laser melting (SLM), hardware that once consisted of multiple, carefully machined and inspected pieces joined together can be made as one part. However, standard inspection techniques cannot be used to verify that the internal passages are within dimensional tolerances or surface finish requirements. NASA/MSFC traveled to Oak Ridge National Lab's (ORNL) Spallation Neutron Source to perform non-destructive, proof-of-concept imaging measurements to assess the capability to understand internal dimensional tolerances and internal passage surface roughness. This presentation will describe 1) the goals of this proof-of-concept testing, 2) the lessons learned when designing and building these Inconel 718 test specimens to minimize beam time, 3) the neutron imaging test setup and test procedure used to acquire the images, 4) the initial results as images, a volume reconstruction and a video, 5) an assessment of using this imaging technique to gather real data for designing internal flow passages in SLM-manufactured aerospace hardware, and lastly 6) how proper cleaning of the internal passages is critically important. In summary, the initial results are very promising, and continued development of a technique to assist in SLM development for aerospace components is desired by both NASA and ORNL. A plan forward that benefits both ORNL and NASA will also be presented, based on the promising initial results. The initial images and volume reconstruction showed that clean, clear images of the internal passage geometry are obtainable. These clear images of the internal passages of simple geometries will be compared to the build model to determine any differences. One surprising result was that a new cleaning process used on these simple geometric specimens resulted in what appear to be very smooth internal surfaces compared to other aerospace hardware cleaning methods.

  15. Development of a personalized training system using the Lung Image Database Consortium and Image Database resource Initiative Database.

    PubMed

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database, which provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool to enable trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
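The collaborative-filtering half of the CBCF idea can be sketched as a cosine-weighted mean over other trainees' difficulty scores. The content-based "boosting" step (filling the rating matrix with pseudo-ratings derived from case features) is omitted here, and the data and function name are hypothetical:

```python
import numpy as np

def predict_difficulty(ratings, trainee, case):
    """Predict the difficulty a trainee will experience on an unseen case.

    Minimal user-based collaborative filtering: a cosine-similarity-weighted
    mean of other trainees' scores on that case, computed over co-rated
    cases. 0 marks a missing rating. The paper's CBCF additionally boosts
    the matrix with content-based pseudo-ratings; that step is omitted.
    """
    target = ratings[trainee]
    num = den = 0.0
    for u in range(ratings.shape[0]):
        if u == trainee or ratings[u, case] == 0:
            continue
        both = (target > 0) & (ratings[u] > 0)   # cases rated by both
        if not both.any():
            continue
        a, b = target[both], ratings[u][both]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        num += sim * ratings[u, case]
        den += abs(sim)
    return num / den if den else 0.0

# Trainee 0 resembles trainee 1, so the prediction for case 2 is pulled
# toward trainee 1's score (3) while still influenced by trainee 2's (5).
ratings = np.array([[5.0, 4.0, 0.0],
                    [5.0, 4.0, 3.0],
                    [1.0, 1.0, 5.0]])
pred = predict_difficulty(ratings, trainee=0, case=2)
```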

  16. Tactical Satellite 3

    NASA Astrophysics Data System (ADS)

    Davis, T. M.; Straight, S. D.; Lockwook, R. B.

    2008-08-01

    Tactical Satellite 3 is an Air Force Research Laboratory Science and Technology (S&T) initiative that explores the capability and technological maturity of small, low-cost satellites. It features a low-cost "plug and play" modular bus and low-cost, militarily significant payloads - a Raytheon-developed hyperspectral imager and secondary payload data exfiltration provided by the Office of Naval Research. In addition to providing for ongoing innovation and demonstration in this important technology area, these S&T efforts also help mitigate technology risk and establish a potential concept of operations for future acquisitions. The key objectives are rapid launch and on-orbit checkout, theater commanding, and near-real-time theater data integration. It will also feature rapid development of the space vehicle and integrated payload and spacecraft bus by using components and processes developed by the satellite modular bus initiative. Planned for a late summer 2008 launch, the TacSat-3 spacecraft will collect and process images and then downlink processed data using a Common Data Link. An in-theater tactical ground station will have the capability to uplink tasking to the spacecraft and will receive the full image data. An international program, the United Kingdom Defence Science and Technology Laboratory (DSTL) and Australian Defence Science and Technology Organisation (DSTO) plan to participate in TacSat-3 experiments.

  17. Content based image retrieval for matching images of improvised explosive devices in which snake initialization is viewed as an inverse problem

    NASA Astrophysics Data System (ADS)

    Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam

    2008-02-01

    Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.

  18. Visual Object Recognition and Tracking of Tools

    NASA Technical Reports Server (NTRS)

    English, James; Chang, Chu-Yin; Tardella, Neil

    2011-01-01

    A method has been created to automatically build an algorithm off-line, using computer-aided design (CAD) models, and to apply this at runtime. The object type is discriminated, and the position and orientation are identified. This system can work with a single image and can provide improved performance using multiple images provided by video. The spatial processing unit uses three stages: (1) segmentation; (2) initial type, pose, and geometry (ITPG) estimation; and (3) refined type, pose, and geometry (RTPG) calculation. The image segmentation module finds all the tools in an image and isolates them from the background. For this, the system uses edge-detection and thresholding to find the pixels that are part of a tool. After the pixels are identified, nearby pixels are grouped into blobs. These blobs represent the potential tools in the image and are the product of the segmentation algorithm. The second module uses matched filtering (or template matching). This approach is used for condensing synthetic images using an image subspace that captures key information. Three degrees of orientation, three degrees of position, and any number of degrees of freedom in geometry change are included. To do this, a template-matching framework is applied. This framework uses an off-line system for calculating template images, measurement images, and the measurements of the template images. These results are used online to match segmented tools against the templates. The final module is the RTPG processor. Its role is to find the exact states of the tools given initial conditions provided by the ITPG module. The requirement that the initial conditions exist allows this module to make use of a local search (whereas the ITPG module had global scope). To perform the local search, 3D model matching is used, where a synthetic image of the object is created and compared to the sensed data. The availability of low-cost PC graphics hardware allows rapid creation of synthetic images. In this approach, a function of orientation, distance, and articulation is defined as a metric on the difference between the captured image and a synthetic image with an object in the given orientation, distance, and articulation. The synthetic image is created using a model that is looked up in an object-model database. A composable software architecture is used for implementation. Video is first preprocessed to remove sensor anomalies (like dead pixels), and then is processed sequentially by a prioritized list of tracker-identifiers.
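The matched-filtering stage described in this record reduces, in its simplest form, to normalized cross-correlation of a template against every window of the image. The sketch below matches a single template; the actual system matches against a subspace of synthetic template images:

```python
import numpy as np

def match_template(image, template):
    """Locate a template in an image by normalized cross-correlation.

    Brute-force sketch of the matched-filtering (template-matching) stage:
    every window is compared to the zero-mean template, and the position
    with the highest correlation score wins.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            wn = np.linalg.norm(wz)
            if wn == 0:
                continue          # skip flat windows (undefined correlation)
            score = float((wz * t).sum() / (wn * tn))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Cut a template out of a random image; matching should recover its origin.
rng = np.random.default_rng(2)
img = rng.random((32, 32))
tmpl = img[10:18, 5:13].copy()
pos, score = match_template(img, tmpl)
```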

  19. Iterative reconstruction for CT perfusion with a prior-image induced hybrid nonlocal means regularization: Phantom studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua

    2016-04-15

    Purpose: In computed tomography perfusion (CTP) imaging, an initial phase CT acquired with a high-dose protocol can be used to improve the image quality of a later phase CT acquired with a low-dose protocol. For dynamic regions, signals in the later low-dose CT may not be completely recovered if the initial CT heavily regularizes the iterative reconstruction process. The authors propose a hybrid nonlocal means (hNLM) regularization model for iterative reconstruction of low-dose CTP to overcome the limitation of the conventional prior-image induced penalty. Methods: The hybrid penalty was constructed by combining the NLM of the initial phase high-dose CT in the stationary region and the later phase low-dose CT in the dynamic region. The stationary and dynamic regions were determined by the similarity between the initial high-dose scan and the later low-dose scan. The similarity was defined as a Gaussian kernel-based distance between the patch-window of the same pixel in the two scans, and its measurement was then used to weigh the influence of the initial high-dose CT. For regions with high similarity (e.g., the stationary region), the initial high-dose CT played a dominant role in regularizing the solution. For regions with low similarity (e.g., the dynamic region), the regularization relied on the low-dose scan itself. This new hNLM penalty was incorporated into the penalized weighted least-squares (PWLS) framework for CTP reconstruction. Digital and physical phantom studies were performed to evaluate the PWLS-hNLM algorithm. Results: Both phantom studies showed that the PWLS-hNLM algorithm is superior to the conventional prior-image induced penalty term, which does not consider the signal changes within the dynamic region. In the dynamic region of the Catphan phantom, the reconstruction error measured by root mean square error was reduced by 42.9% in the PWLS-hNLM reconstructed image. Conclusions: The PWLS-hNLM algorithm can effectively use the initial high-dose CT to reconstruct low-dose CTP in the stationary region while reducing its influence in the dynamic region.

  20. The infection algorithm: an artificial epidemic approach for dense stereo correspondence.

    PubMed

    Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne

    2006-01-01

    We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.

  1. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    NASA Astrophysics Data System (ADS)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.
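The interaction model in this record (one click in the lesion center, then automatic delineation) can be illustrated with a much simpler seeded region-growing stand-in. This is not the paper's fast-marching/level-set method, and the intensity tolerance is an illustrative assumption:

```python
from collections import deque

import numpy as np

def grow_from_seed(img, seed, tol=0.2):
    """Segment a lesion starting from a single user seed.

    Breadth-first region growing: 4-connected neighbors are added while
    their intensity stays within `tol` of the seed intensity. A simplified
    stand-in for the fast-marching initialization plus level-set refinement
    described above.
    """
    mask = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not mask[rr, cc] and abs(img[rr, cc] - ref) <= tol):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask

# A synthetic "stiff" lesion on a uniform background, seeded at its center.
img = np.zeros((20, 20))
img[5:12, 6:14] = 1.0
lesion = grow_from_seed(img, (8, 9))
```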

  2. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.

  3. A mixture neural net for multispectral imaging spectrometer processing

    NASA Technical Reports Server (NTRS)

    Casasent, David; Slagle, Timothy

    1990-01-01

    Each spatial region viewed by an imaging spectrometer contains various elements in a mixture. The elements present and the amount of each are to be determined. A neural net solution is considered. Initial optical neural net hardware is described. The first simulations of the component requirements of a neural net are considered. The pseudoinverse solution is shown not to suffice, i.e. a neural net solution is required.
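    To see why an unconstrained pseudoinverse can fall short for mixture analysis, consider a toy two-endmember, three-band unmixing sketch (spectra invented for illustration, not from this paper): ordinary least squares can return a negative abundance, which is physically meaningless and motivates a constrained, e.g. neural, solution.

```python
def unmix_two(e1, e2, pixel):
    """Unconstrained least-squares abundances for two endmembers.

    Solves the 2x2 normal equations of min ||a*e1 + b*e2 - pixel||^2.
    """
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    g11, g12, g22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    c1, c2 = dot(e1, pixel), dot(e2, pixel)
    det = g11 * g22 - g12 * g12
    a = (c1 * g22 - c2 * g12) / det
    b = (c2 * g11 - c1 * g12) / det
    return a, b

# hypothetical 3-band endmember spectra and a noisy mixed pixel
e1 = [1.0, 0.2, 0.1]
e2 = [0.1, 1.0, 0.9]
pixel = [0.05, 0.52, 0.50]   # roughly 0.5*e2 plus noise
a, b = unmix_two(e1, e2, pixel)
print(round(a, 3), round(b, 3))  # the e1 abundance comes out slightly negative
```

    A nonnegativity (and often sum-to-one) constraint on the abundances is what the iterative neural formulation enforces and plain matrix inversion cannot.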

  4. Imaging nanobubble nucleation and hydrogen spillover during electrocatalytic water splitting.

    PubMed

    Hao, Rui; Fan, Yunshan; Howard, Marco D; Vaughan, Joshua C; Zhang, Bo

    2018-06-05

    Nucleation and growth of hydrogen nanobubbles are key initial steps in electrochemical water splitting. These processes remain largely unexplored due to a lack of proper tools to probe the nanobubble's interfacial structure with sufficient spatial and temporal resolution. We report the use of superresolution microscopy to image transient formation and growth of single hydrogen nanobubbles at the electrode/solution interface during electrocatalytic water splitting. We found hydrogen nanobubbles can be generated even at very early stages in water electrolysis, i.e., ∼500 mV before reaching its thermodynamic reduction potential. The ability to image single nanobubbles on an electrode enabled us to observe in real time the process of hydrogen spillover from ultrathin gold nanocatalysts supported on indium-tin oxide.

  5. Massive ovarian edema, due to adjacent appendicitis.

    PubMed

    Callen, Andrew L; Illangasekare, Tushani; Poder, Liina

    2017-04-01

    Massive ovarian edema is a benign clinical entity, the imaging findings of which can mimic an adnexal mass or ovarian torsion. In the setting of acute abdominal pain, identifying massive ovarian edema is key to avoiding potential fertility-threatening surgery in young women. In addition, it is important to consider other contributing pathology when ovarian edema is secondary to another process. We present a case of a young woman presenting with subacute abdominal pain, whose initial workup revealed a markedly enlarged right ovary. Further imaging, diagnostic tests, and eventually diagnostic laparoscopy revealed that the ovarian enlargement was secondary to subacute appendicitis, rather than a primary adnexal process. We review the classic ultrasound and MRI findings and pitfalls that relate to this diagnosis.

  6. Redox-initiated hydrogel system for detection and real-time imaging of cellulolytic enzyme activity.

    PubMed

    Malinowska, Klara H; Verdorfer, Tobias; Meinhold, Aylin; Milles, Lukas F; Funk, Victor; Gaub, Hermann E; Nash, Michael A

    2014-10-01

    Understanding the process of biomass degradation by cellulolytic enzymes is of urgent importance for biofuel and chemical production. Optimizing pretreatment conditions and improving enzyme formulations both require assays to quantify saccharification products on solid substrates. Typically, such assays are performed using freely diffusing fluorophores or dyes that measure reducing polysaccharide chain ends. These methods have thus far not allowed spatial localization of hydrolysis activity to specific substrate locations with identifiable morphological features. Here we describe a hydrogel reagent signaling (HyReS) system that amplifies saccharification products and initiates crosslinking of a hydrogel that localizes to locations of cellulose hydrolysis, allowing for imaging of the degradation process in real time. Optical detection of the gel in a rapid parallel format on synthetic and natural pretreated solid substrates was used to quantify activity of T. emersonii and T. reesei enzyme cocktails. When combined with total internal reflection fluorescence microscopy and AFM imaging, the reagent system provided a means to visualize enzyme activity in real-time with high spatial resolution (<2 μm). These results demonstrate the versatility of the HyReS system in detecting cellulolytic enzyme activity and suggest new opportunities in real-time chemical imaging of biomass depolymerization. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. A Multimodal Approach to Counselor Supervision.

    ERIC Educational Resources Information Center

    Ponterotto, Joseph G.; Zander, Toni A.

    1984-01-01

    Represents an initial effort to apply Lazarus's multimodal approach to a model of counselor supervision. Includes continuously monitoring the trainee's behavior, affect, sensations, images, cognitions, interpersonal functioning, and when appropriate, biological functioning (diet and drugs) in the supervisory process. (LLL)

  8. SPIDER image processing for single-particle reconstruction of biological macromolecules from electron micrographs

    PubMed Central

    Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim

    2009-01-01

    This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078

  9. Review of progress in quantitative NDE

    NASA Astrophysics Data System (ADS)

    Abstracts of 386 papers and plenary presentations are included. The plenary sessions are related to the national technology initiative. The other sessions covered the following NDE topics: corrosion, electromagnetic arrays, elastic wave scattering and backscattering/noise, civil structures, material properties, holography, shearography, UT wave propagation, eddy currents, coatings, signal processing, radiography, computed tomography, EM imaging, adhesive bonds, NMR, laser ultrasonics, composites, thermal techniques, magnetic measurements, nonlinear acoustics, interface modeling and characterization, UT transducers, new techniques, joined materials, probes and systems, fatigue cracks and fracture, imaging and sizing, NDE in engineering and process control, acoustics of cracks, and sensors. An author index is included.

  10. Improving Brain Magnetic Resonance Image (MRI) Segmentation via a Novel Algorithm based on Genetic and Regional Growth

    PubMed Central

    A., Javadpour; A., Mohammadi

    2016-01-01

    Background Given the importance of correct diagnosis in medical applications, various methods have been exploited for processing medical images so far. Segmentation is used to analyze anatomical structures in medical imaging. Objective This study describes a new method for brain Magnetic Resonance Image (MRI) segmentation via a novel algorithm based on genetic and regional growth. Methods Among medical imaging methods, brain MRI segmentation is important due to its non-invasiveness, high soft-tissue contrast, and high spatial resolution. Size variations of brain tissues are often accompanied by various diseases such as Alzheimer’s disease. As our knowledge about the relation between various brain diseases and deviations of brain anatomy increases, MRI segmentation is exploited as the first step in early diagnosis. In this paper, the regional growth method and automated selection of initial points by a genetic algorithm are used to introduce a new method for MRI segmentation. The primary (seed) pixels and the similarity criterion are selected automatically by the genetic algorithm to maximize the accuracy and validity of the image segmentation. Results By using the genetic algorithm and defining a fitness function for image segmentation, the initial points for the algorithm were found. The proposed algorithm was applied to the images, and its results were compared with those of regional growth in which the initial points were selected manually. The results showed that the proposed algorithm could reduce segmentation error effectively. Conclusion The study concluded that the proposed algorithm could reduce segmentation error effectively and help us to diagnose brain diseases. PMID:27672629
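    The region-growing step with an intensity similarity criterion can be sketched as follows (seed and tolerance are fixed here for illustration; in the paper they are chosen by the genetic algorithm):

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-neighbours whose
    intensity differs from the seed value by at most `tol`."""
    h, w = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(img[nr][nc] - seed_val) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# toy 2D "MRI slice": a bright structure (9s) in a dark background
img = [[1, 1, 1, 1],
       [1, 9, 9, 1],
       [1, 9, 9, 1],
       [1, 1, 1, 1]]
print(len(region_grow(img, (1, 1), tol=1)))  # the four bright pixels -> 4
```

    A genetic algorithm wrapped around such a routine would evolve the seed position and tolerance to maximize a segmentation fitness measure.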

  11. Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Aarnink, Rene G.; de la Rosette, Jean J.; Chalana, Vikram; Wijkstra, Hessel; Haynor, David R.; Debruyne, Frans M. J.; Kim, Yongmin

    1998-06-01

    With the number of men seeking medical care for prostate diseases rising steadily, clinicians increasingly need a fast and accurate prostate boundary detection and volume estimation tool. Currently, these measurements are made manually, which results in long examination times. A possible solution is to improve efficiency by automating the boundary detection and volume estimation process with minimal involvement from human experts. In this paper, we present an algorithm based on SNAKES to detect the boundaries. Our approach is to selectively enhance the contrast along the edges using an algorithm called sticks and to integrate it with a SNAKES model. This integrated algorithm requires an initial curve for each ultrasound image to initiate the boundary detection process. We have used different schemes to generate the curves with a varying degree of automation and evaluated their effects on the algorithm performance. After the boundaries are identified, the prostate volume is calculated using planimetric volumetry. We have tested our algorithm on 6 different prostate volumes and compared the performance against the volumes manually measured by 3 experts. As the degree of user input increased, the algorithm performance improved as expected. The results demonstrate that, given an initial contour reasonably close to the prostate boundaries, the algorithm successfully delineates the prostate boundaries in an image, and the resulting volume measurements are in close agreement with those made by the human experts.

  12. Real-time CT-video registration for continuous endoscopic guidance

    NASA Astrophysics Data System (ADS)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to a current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per second with minimal user intervention.
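    The warm-start idea, initializing each frame's optimization from the previous frame's result, can be illustrated with a translation-only toy sketch (exhaustive local search on synthetic images; the actual method optimizes a full 3D viewpoint with precomputed gradients):

```python
def ssd(a, b, dx, dy):
    """Sum of squared differences between image `a` and image `b`
    shifted by (dx, dy), over the overlapping region."""
    h, w = len(a), len(a[0])
    total = 0
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                total += (a[y][x] - b[sy][sx]) ** 2
    return total

def register(frame, reference, init, radius=1):
    """Local search for the shift aligning `frame` to `reference`,
    started from `init` (e.g. the previous frame's result)."""
    best = init
    best_cost = ssd(reference, frame, *init)
    for dx in range(init[0] - radius, init[0] + radius + 1):
        for dy in range(init[1] - radius, init[1] + radius + 1):
            cost = ssd(reference, frame, dx, dy)
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best

# reference with one bright pixel; frame is the same scene shifted by (2, 0)
ref = [[0] * 5 for _ in range(5)]; ref[2][1] = 9
frm = [[0] * 5 for _ in range(5)]; frm[2][3] = 9
print(register(frm, ref, init=(1, 0)))  # warm start near the truth -> (2, 0)
```

    Because consecutive frames differ little, a small search radius around the previous estimate suffices, which is what makes per-frame registration fast enough for real-time guidance.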

  13. Modified tandem gratings anastigmatic imaging spectrometer with oblique incidence for spectral broadband

    NASA Astrophysics Data System (ADS)

    Cui, Chengguang; Wang, Shurong; Huang, Yu; Xue, Qingsheng; Li, Bo; Yu, Lei

    2015-09-01

    A modified spectrometer with tandem gratings that exhibits high spectral resolution and imaging quality for solar observation, monitoring, and understanding of coastal ocean processes is presented in this study. Spectral broadband anastigmatic imaging condition, spectral resolution, and initial optical structure are obtained based on geometric aberration theory. Compared with conventional tandem gratings spectrometers, this modified design permits flexibility in selecting gratings. A detailed discussion of the optical design and optical performance of an ultraviolet spectrometer with tandem gratings is also included to explain the advantage of oblique incidence for spectral broadband.

  14. Thermal imaging of afterburning plumes

    NASA Astrophysics Data System (ADS)

    Ajdari, E.; Gutmark, E.; Parr, T. P.; Wilson, K. J.; Schadow, K. C.

    1989-01-01

    Afterburning and nonafterburning exhaust plumes were studied experimentally for underexpanded sonic and supersonic conical circular nozzles. The plume structure was visualized using a thermal imaging camera and regular photography. IR emission by the plume is mainly dependent on the presence of afterburning. Temperature and reducing power of the exhaust gases, in addition to the nozzle configuration, determine the structure of the plume core, the location where the afterburning is initiated, and its size and intensity. Comparison between single-shot and average thermal images of the plume shows that afterburning is a highly turbulent combustion process.

  15. Evaluating the effectiveness of treatment of corneal ulcers via computer-based automatic image analysis

    NASA Astrophysics Data System (ADS)

    Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana

    2012-06-01

    Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced that have proven to be very effective. Unfortunately, the monitoring of the treatment procedure remains manual and hence time consuming and prone to human error. In this research we propose an automatic image-analysis-based approach to measure the size of an ulcer and its subsequent further investigation to determine the effectiveness of any treatment process followed. In ophthalmology an ulcer area is detected for further inspection via luminous excitation of a dye. Usually in the imaging systems utilised for this purpose (i.e. a slit lamp with an appropriate dye) the ulcer area is excited to be luminous green in colour as compared to the rest of the cornea, which appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. Initially a pre-processing stage that carries out a local histogram equalisation is used to bring back detail in any over- or under-exposed areas. Secondly we deal with the removal of potential reflections from the affected areas by making use of image registration of two candidate corneal images based on the detected corneal areas. Thirdly the exact corneal boundary is detected by initially registering an ellipse to the candidate corneal boundary detected via edge detection and subsequently allowing the user to modify the boundary to overlap with the boundary of the ulcer being observed. Although this step makes the approach semi-automatic, it removes the impact of breakages of the corneal boundary due to occlusion, noise, and image quality degradation. The ratio of the ulcer area confined within the cornea to the total corneal area is used as a measure of comparison. We demonstrate the use of the proposed tool in the analysis of the effectiveness of a treatment procedure adopted for corneal ulcers in patients by comparing the variation of corneal size over time.
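    The final measure, the fraction of the corneal area stained green by the dye, can be sketched as follows (the hue band and saturation threshold are illustrative, not taken from the paper; channels are in 0..1):

```python
import colorsys

def ulcer_ratio(pixels, cornea_mask):
    """Fraction of corneal pixels whose hue falls in a (hypothetical)
    green band, as a proxy for the dye-stained ulcer area.

    `pixels` maps (x, y) -> (r, g, b) with channels in 0..1;
    `cornea_mask` is the set of (x, y) positions inside the cornea.
    """
    green = 0
    for pos in cornea_mask:
        h, s, v = colorsys.rgb_to_hsv(*pixels[pos])
        if 0.20 <= h <= 0.45 and s > 0.3:   # illustrative green-hue band
            green += 1
    return green / len(cornea_mask)

# toy cornea: 3 green (stained) pixels, 1 blue (healthy) pixel
pixels = {(0, 0): (0.1, 0.9, 0.1), (0, 1): (0.1, 0.8, 0.2),
          (1, 0): (0.2, 0.9, 0.1), (1, 1): (0.1, 0.2, 0.9)}
print(ulcer_ratio(pixels, set(pixels)))  # 3 of 4 pixels are green -> 0.75
```

    Working in HSV rather than RGB makes the green/blue separation a simple hue threshold, largely independent of brightness.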

  16. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  17. Rocket launcher mechanism of collaborative actin assembly defined by single-molecule imaging.

    PubMed

    Breitsprecher, Dennis; Jaiswal, Richa; Bombardier, Jeffrey P; Gould, Christopher J; Gelles, Jeff; Goode, Bruce L

    2012-06-01

    Interacting sets of actin assembly factors work together in cells, but the underlying mechanisms have remained obscure. We used triple-color single-molecule fluorescence microscopy to image the tumor suppressor adenomatous polyposis coli (APC) and the formin mDia1 during filament assembly. Complexes consisting of APC, mDia1, and actin monomers initiated actin filament formation, overcoming inhibition by capping protein and profilin. Upon filament polymerization, the complexes separated, with mDia1 moving processively on growing barbed ends while APC remained at the site of nucleation. Thus, the two assembly factors directly interact to initiate filament assembly and then separate but retain independent associations with either end of the growing filament.

  18. Attentional capture by emotional scenes across episodes in bipolar disorder: Evidence from a free-viewing task.

    PubMed

    García-Blanco, Ana; Salmerón, Ladislao; Perea, Manuel

    2015-05-01

    We examined whether the initial orienting, subsequent engagement, and overall allocation of attention are determined exogenously (i.e. by the affective valence of the stimulus) or endogenously (i.e. by the participant's mood) in the manic, depressive and euthymic episodes of bipolar disorder (BD). Participants were asked to compare the affective valence of two pictures (happy/threatening/neutral [emotional] vs. neutral [control]) while their eye movements were recorded in a free-viewing task. Results revealed that the initial orienting was exogenously captured by emotional images relative to control images. Importantly, engagement and overall allocation were endogenously captured by threatening images relative to neutral images in BD patients, regardless of their episode--this effect did not occur in a group of healthy controls. The threat-related bias in BD, which occurs even at the early stages of information processing (i.e. attentional engagement), may reflect a vulnerability marker. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Image Display in Local Database Networks

    NASA Astrophysics Data System (ADS)

    List, James S.; Olson, Frederick R.

    1989-05-01

    Dearchival of image data in the form of x-ray film provides a major challenge for radiology departments. In highly active referral environments such as tertiary care hospitals, patients may be referred to multiple clinical subspecialists within a very short time. Each clinical subspecialist frequently requires diagnostic image data to complete the diagnosis. This need for image access often interferes with the normal process of film handling and interpretation, subsequently reducing the efficiency of the department. The concept of creating a local image database on individual nursing stations utilizing the AT&T CommView Results Viewing Station (RVS) is being evaluated. Initial physician acceptance has been favorable. Objective measurements of operational productivity enhancements are in progress.

  20. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
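    The sensor-based initial alignment can be pictured with a toy placement sketch (hypothetical frames and a made-up mm-to-pixel scale; the actual system uses 5-d.o.f. sensing with a hand-eye calibration rather than pure translation):

```python
def place_frames(frames, positions, mm_per_px):
    """Paste frames into a mosaic using sensed camera positions as the
    initial alignment (no image-based refinement shown).

    `frames` are 2D lists; `positions` are sensed (x_mm, y_mm) offsets.
    """
    # convert sensed millimetre offsets to pixel offsets
    offsets = [(round(x / mm_per_px), round(y / mm_per_px)) for x, y in positions]
    h = max(dy + len(f) for (dx, dy), f in zip(offsets, frames))
    w = max(dx + len(f[0]) for (dx, dy), f in zip(offsets, frames))
    mosaic = [[0] * w for _ in range(h)]
    for (dx, dy), frame in zip(offsets, frames):
        for y, row in enumerate(frame):
            for x, val in enumerate(row):
                mosaic[dy + y][dx + x] = val   # later frames overwrite overlap
    return mosaic

# two overlapping 2x2 frames; the second is sensed 1 mm (= 1 px) to the right
f1 = [[1, 1], [1, 1]]
f2 = [[2, 2], [2, 2]]
m = place_frames([f1, f2], [(0.0, 0.0), (1.0, 0.0)], mm_per_px=1.0)
print(m)  # -> [[1, 2, 2], [1, 2, 2]]
```

    Because the sensed pose gives each frame's placement directly, there is no cumulative drift from chaining frame-to-frame image matches, which is the point the record makes.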

  1. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that helps the level set function remain close to a signed distance function, thereby removing the need for the expensive re-initialization procedure. The level set method is then applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy, and diagnosis of various organs, tumors, and abnormalities. It benefits the patient with speedier and more decisive disease control and fewer side effects. The geometrical shape, the tumor's size, and abnormal tissue growth can be calculated by segmentation of the image. Automatic segmentation in medical imaging remains a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, this optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, the segmentation is driven by texture features, making it automatic and more effective. No manual initialization of parameters is required, and the system behaves intelligently: it segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  2. Generation of synthetic CT data using patient specific daily MR image data and image registration

    NASA Astrophysics Data System (ADS)

    Kraus, Kim Melanie; Jäkel, Oliver; Niebuhr, Nina I.; Pfaffenberger, Asja

    2017-02-01

    To fully exploit the advantages of magnetic resonance imaging (MRI) for radiotherapy (RT) treatment planning, a method is required to overcome the problem of lacking electron density information. We aim to establish and evaluate a new method for computed tomography (CT) data generation based on MRI and image registration. The CT data generated in this way is used for dose accumulation. We developed a process flow based on an initial pair of rigidly co-registered CT and T2-weighted MR images representing the same anatomical situation. Deformable image registration using anatomical landmarks is performed between the initial MRI data and daily MR images. The resulting transformation is applied to the initial CT, and thus fractional CT data are generated. Furthermore, the dose for a photon intensity-modulated RT (IMRT) or intensity-modulated proton therapy (IMPT) plan is calculated on the generated fractional CT and accumulated on the initial CT via the inverse transformation. The method is evaluated by the use of phantom CT and MRI data. Quantitative validation is performed by evaluating the mean absolute error (MAE) between the measured and the generated CT. The effect on dose accumulation is examined by means of dose-volume parameters. One patient case is presented to demonstrate the applicability of the method introduced here. Overall, CT data derivation led to MAEs with a median of 37.0 HU, ranging from 29.9 to 66.6 HU, for all investigated tissues. The accuracy of image registration proved to be limited in the case of unexpected air cavities and at tissue boundaries. The comparisons of dose distributions based on measured and generated CT data agree well with the published literature. Differences in dose-volume parameters kept within 1.6% and 3.2% for photon and proton RT, respectively. The method presented here is particularly suited for application in adaptive RT in current clinical routine, since only minor additional technical equipment is required.
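    The accumulation step, mapping a fraction's dose back onto the initial CT grid via the inverse transformation, can be sketched in miniature (nearest-neighbour pull-back with a hypothetical displacement field; the actual work uses landmark-based deformable registration in 3D):

```python
def pull_back(dose, disp):
    """Accumulate a fractional dose on the reference grid by pulling each
    reference voxel's value from its displaced position (nearest neighbour).

    `disp[y][x]` gives the (dy, dx) mapping a reference voxel into the
    fraction's geometry; out-of-grid lookups contribute zero.
    """
    h, w = len(dose), len(dose[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = disp[y][x]
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = dose[sy][sx]
    return out

# 2x2 fractional dose; the anatomy shifted one voxel to the right that day
dose = [[0.0, 2.0], [0.0, 2.0]]
disp = [[(0, 1), (0, 1)], [(0, 1), (0, 1)]]
print(pull_back(dose, disp))  # dose mapped back: [[2.0, 0.0], [2.0, 0.0]]
```

    Summing such pulled-back dose grids over all fractions yields the accumulated dose on the initial CT, which the dose-volume parameters above are evaluated on.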

  3. Image segmentation via foreground and background semantic descriptors

    NASA Astrophysics Data System (ADS)

    Yuan, Ding; Qiang, Jingjing; Yin, Jihao

    2017-09-01

    In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.

  4. Method for Assessment of Changes in the Width of Cracks in Cement Composites with Use of Computer Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław

    2017-06-01

    Crack width measurement is an important element of research on the progress of self-healing cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subject to initial image processing in the ImageJ development environment and further processing and analysis of results. After registering a series of images of the cracks at different times using the SIFT (Scale-Invariant Feature Transform) conversion, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for determination of crack width. The distribution and rotation of the lines of intersection in a regular layout, automation of transformations, management of images and brightness profiles, and data analysis to determine the width of cracks and their changes over time are performed automatically by our own code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the ability to reduce a sample crack width and achieve its full closure within 28 days of the self-healing process.
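    The core measurement, a crack width read off a brightness profile sampled perpendicular to the crack, can be sketched as follows (threshold and profile values invented; the pixel pitch follows from the stated 6400 dpi optical resolution):

```python
def crack_width_mm(profile, threshold, dpi=6400):
    """Crack width from a brightness profile sampled perpendicular to
    the crack: count of below-threshold samples times the pixel pitch.
    """
    px = sum(1 for v in profile if v < threshold)
    return px * 25.4 / dpi   # pixel pitch in mm at the scan resolution

# hypothetical profile: bright matrix (~200) with a dark crack (~40)
profile = [201, 198, 195, 42, 38, 41, 44, 196, 199, 202]
print(round(crack_width_mm(profile, threshold=100), 4))  # 4 px -> ~0.0159 mm
```

    Repeating this along the dense network of profiles and over the image series gives the width distribution and its change during self-healing.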

  5. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.

  6. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography.

    PubMed

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).
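    The first correction step, replacing metal-corrupted rawdata in the detector domain, can be illustrated with a one-row sketch (hypothetical values; the study's interpolation scheme is three-dimensional and runs on GPUs):

```python
def interpolate_metal(row, metal):
    """Replace metal-corrupted detector readings in one projection row
    by linear interpolation between the nearest clean neighbours."""
    out = list(row)
    i = 0
    while i < len(row):
        if metal[i]:
            j = i
            while j < len(row) and metal[j]:
                j += 1
            left = out[i - 1] if i > 0 else out[j]      # clean value before the gap
            right = out[j] if j < len(row) else left    # clean value after the gap
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

# one detector row: attenuation values with two metal-corrupted elements
row   = [10.0, 12.0, 90.0, 95.0, 14.0, 15.0]
metal = [False, False, True, True, False, False]
print(interpolate_metal(row, metal))  # the 90/95 readings are bridged smoothly
```

    In the full method, the volume reconstructed from such interpolated rawdata is then segmented into tissue classes, and a forward projection of that model supplies better substitute values for the final reconstruction.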

  7. A robust embedded vision system feasible white balance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. To meet the need for efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, to guarantee high efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. Next, bilinear interpolation is used to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions show that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
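
    The first stage described above, an initial parameter from R, G and B channel statistics, can be sketched in the spirit of the classical gray-world assumption. The function name, the choice of statistic, and the sample data are illustrative assumptions; the paper's exact statistic and iteration are not reproduced here.

```python
def initial_gains(r, g, b):
    """Gray-world style initialization: scale R and B so their channel
    means match the mean of G (G gain fixed at 1.0)."""
    mean = lambda xs: sum(xs) / len(xs)
    mr, mg, mb = mean(r), mean(g), mean(b)
    return mg / mr, 1.0, mg / mb

r = [120, 130, 125]   # reddish cast in the raw data
g = [100, 110, 105]
b = [80, 90, 85]
gr, gg, gb = initial_gains(r, g, b)
balanced_r = [gr * v for v in r]
print(round(sum(balanced_r) / 3, 1))   # mean of R pulled to mean of G
```

An iterative method with an adaptive step, as in the abstract, would then refine these gains rather than apply them in one shot.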

  8. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  9. Alzheimer’s Disease Neuroimaging Initiative biomarkers as quantitative phenotypes: Genetics core aims, progress, and plans

    PubMed Central

    Saykin, Andrew J.; Shen, Li; Foroud, Tatiana M.; Potkin, Steven G.; Swaminathan, Shanker; Kim, Sungeun; Risacher, Shannon L.; Nho, Kwangsik; Huentelman, Matthew J.; Craig, David W.; Thompson, Paul M.; Stein, Jason L.; Moore, Jason H.; Farrer, Lindsay A.; Green, Robert C.; Bertram, Lars; Jack, Clifford R.; Weiner, Michael W.

    2010-01-01

    The role of the Alzheimer’s Disease Neuroimaging Initiative Genetics Core is to facilitate the investigation of genetic influences on disease onset and trajectory as reflected in structural, functional, and molecular imaging changes; fluid biomarkers; and cognitive status. Major goals include (1) blood sample processing, genotyping, and dissemination, (2) genome-wide association studies (GWAS) of longitudinal phenotypic data, and (3) providing a central resource, point of contact and planning group for genetics within Alzheimer’s Disease Neuroimaging Initiative. Genome-wide array data have been publicly released and updated, and several neuroimaging GWAS have recently been reported examining baseline magnetic resonance imaging measures as quantitative phenotypes. Other preliminary investigations include copy number variation in mild cognitive impairment and Alzheimer’s disease and GWAS of baseline cerebrospinal fluid biomarkers and longitudinal changes on magnetic resonance imaging. Blood collection for RNA studies is a new direction. Genetic studies of longitudinal phenotypes hold promise for elucidating disease mechanisms and risk, development of therapeutic strategies, and refining selection criteria for clinical trials. PMID:20451875

  10. Combined neutron and x-ray imaging at the National Ignition Facility (invited)

    DOE PAGES

    Danly, C. R.; Christensen, K.; Fatherley, Valerie E.; ...

    2016-10-11

    X-rays and neutrons are commonly used to image inertial confinement fusion implosions, providing key diagnostic information on the fuel assembly of burning DT fuel. The x-ray and neutron data are complementary because neutrons and x-rays are produced by different physical processes, but typically these two images are collected from different views with no opportunity for co-registration. Neutrons are produced where the DT fusion fuel is burning; x-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production, but can be confidently observed only if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a Combined Neutron X-ray Imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line of sight. Here, this system is described, and initial results are presented along with prospects for definitive co-registration of the images.

  11. Combined neutron and x-ray imaging at the National Ignition Facility (invited).

    PubMed

    Danly, C R; Christensen, K; Fatherley, V E; Fittinghoff, D N; Grim, G P; Hibbard, R; Izumi, N; Jedlovec, D; Merrill, F E; Schmidt, D W; Simpson, R A; Skulina, K; Volegov, P L; Wilde, C H

    2016-11-01

    X-rays and neutrons are commonly used to image inertial confinement fusion implosions, providing key diagnostic information on the fuel assembly of burning deuterium-tritium (DT) fuel. The x-ray and neutron data are complementary because neutrons and x-rays are produced by different physical processes, but typically these two images are collected from different views with no opportunity for co-registration. Neutrons are produced where the DT fusion fuel is burning; x-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production, but can be confidently observed only if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a combined neutron x-ray imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line of sight. This system is described, and initial results are presented along with prospects for definitive co-registration of the images.

  12. Image-based tracking of the suturing needle during laparoscopic interventions

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kroehnert, A.; Bodenstedt, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2015-03-01

    One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite to assist the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. It is evaluated with in vivo images from da Vinci interventions.

  13. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  14. High speed imaging for assessment of impact damage in natural fibre biocomposites

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, Karthik Ram; Corn, Stephane; Le Moigne, Nicolas; Ienny, Patrick; Leger, Romain; Slangen, Pierre R.

    2017-06-01

    The use of Digital Image Correlation has generally been limited to the estimation of mechanical properties and fracture behaviour at low to moderate strain rates. High-speed cameras dedicated to ballistic testing are often used to measure the initial and residual velocities of the projectile, but rarely for damage assessment. The evaluation of impact damage is frequently carried out post-impact using visual inspection, ultrasonic C-scan or other NDI methods. Ultra-high-speed cameras and developments in image processing have made it possible to measure surface deformations and stresses in real time during dynamic cracking. In this paper, a method is presented to correlate the force-displacement data from the sensors with the slow-motion tracking of the transient failure cracks using real-time high-speed imaging. Natural fibre reinforced composites made of flax fibres and a polypropylene matrix were chosen for the study. The creation of macro-cracks during impact results in a loss of stiffness and a corresponding drop in the force history. However, optical instrumentation shows that the initiation of damage is not always evident, so the assessment of damage requires a local approach. Digital Image Correlation is used to study the strain history of the composite and to identify the initiation and progression of damage. The effect of fly-speckled texture on strain measurement by image correlation is also studied. The developed method can be used for the evaluation of impact damage in different composite materials.

  15. MREG V1.1: a multi-scale image registration algorithm for SAR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth-generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high-fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image-domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application-specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  16. Fine motor skills enhance lexical processing of embodied vocabulary: A test of the nimble-hands, nimble-minds hypothesis.

    PubMed

    Suggate, Sebastian; Stoeger, Heidrun

    2017-10-01

    Research suggests that fine motor skills (FMS) are linked to aspects of cognitive development in children. Additionally, lexical processing advantages exist for words implying high body-object interaction (BOI), with initial findings indicating that such words in turn link to children's FMS, for which we propose and evaluate four competing hypotheses. First, a maturational account argues that any links between FMS and lexical processing should not exist once developmental variables are controlled for. Second, functionalism posits that any link between FMS and lexical processing arises from environmental interactions. Third, the semantic richness hypothesis argues that sensorimotor input improves lexical processing, but predicts no links between FMS and lexical processing. A fourth account, the nimble-hands, nimble-minds (NHNM) hypothesis, proposes that greater FMS improves lexical processing of high-BOI words. In two experiments, the response latencies of preschool children (n = 90, n = 76, ages = 5;1) to 45 lexical items encompassing high-BOI, low-BOI, and less imageable words were measured, alongside measures of FMS, reasoning, and general receptive/expressive vocabulary. High-BOI words appeared to show unique links to FMS, which remained after accounting for low-BOI and less imageable words, general vocabulary, reasoning, and chronological age. Although further work is needed, the findings provide initial support for the NHNM hypothesis.

  17. A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image

    NASA Astrophysics Data System (ADS)

    Barat, Christian; Phlypo, Ronald

    2010-12-01

    We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
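
    The descriptor-selection idea described above, choosing the region descriptor that best separates the attention-seeded object region from the background, can be sketched with a simple Fisher-like ratio per color channel. The criterion, the channel statistics, and the sample values are illustrative assumptions, not the paper's exact selection rule.

```python
def fisher_score(obj, bg):
    """Between-class separation over within-class spread for one channel."""
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / len(xs)
    return (mean(obj) - mean(bg)) ** 2 / (var(obj) + var(bg) + 1e-9)

# Per-channel samples inside the visual-attention seed vs. the background.
channels = {
    "R": ([200, 205, 198], [60, 65, 58]),   # strong object/background contrast
    "G": ([120, 118, 122], [110, 112, 115]),
    "B": ([90, 95, 92], [88, 93, 91]),
}
best = max(channels, key=lambda c: fisher_score(*channels[c]))
print(best)   # channel chosen to drive the active contour
```

The selected channel would then serve as the region descriptor for the active contour, with the attention result providing the initial curve.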

  18. Miss-distance indicator for tank main gun systems

    NASA Astrophysics Data System (ADS)

    Bornstein, Jonathan A.; Hillis, David B.

    1994-07-01

    The initial development of a passive, automated system to track bullet trajectories near a target to determine the 'miss distance,' and the corresponding correction necessary to bring the following round 'on target,' is discussed. The system consists of a visible-wavelength CCD sensor, long-focal-length optics, and a separate IR sensor to detect the muzzle flash of the firing event; this is coupled to a PC-based image processing and automatic tracking system designed to follow the projectile trajectory by intelligently comparing frame-to-frame variation of the projectile tracer image. An error analysis indicates that the device is particularly sensitive to variation of the projectile time of flight to the target, and requires the development of algorithms to estimate this value from the 2D images employed by the sensor to monitor the projectile trajectory. Initial results obtained by using a brassboard prototype to track training ammunition are promising.

  19. Combined endeavor of Neutrosophic Set and Chan-Vese model to extract accurate liver image from CT scan.

    PubMed

    Siri, Sangeeta K; Latte, Mrityunjaya V

    2017-11-01

    Many different diseases can occur in the liver, including infections such as hepatitis, as well as cirrhosis, cancer and the adverse effects of medication or toxins. The foremost stage of computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient and the presence of noise. A novel approach is proposed to meet these challenges. The proposed approach consists of three phases: (1) pre-processing, (2) transformation of the CT scan image to a Neutrosophic Set (NS) and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the True subset (T), the False subset (F) and the Indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied, with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
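
    One common way to map intensities into the three neutrosophic memberships named above is to derive T from the normalized local mean, I (indeterminacy) from local variation, and F as the complement of T. The sketch below follows that generic recipe on a 1D pixel row; the paper's "new structure" transform is not reproduced, and all names and data are illustrative.

```python
def neutrosophic(pixels, i, w=1):
    """Return (T, F, I) memberships for pixel i from a local window."""
    window = pixels[max(0, i - w): i + w + 1]
    lo, hi = min(pixels), max(pixels)
    local_mean = sum(window) / len(window)
    t = (local_mean - lo) / (hi - lo)        # truth membership
    spread = max(window) - min(window)
    ind = spread / (hi - lo)                 # indeterminacy membership
    return t, 1.0 - t, ind

row = [20, 22, 200, 210, 205]                # dark background vs. bright tissue
T, F, I = neutrosophic(row, 3)
print(round(T, 2), round(F, 2), round(I, 2))
```

In the method above, the morphological clean-up and the Chan-Vese contour would then operate on the indeterminacy map I rather than on the raw intensities.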

  20. Plenoptic Ophthalmoscopy: A Novel Imaging Technique.

    PubMed

    Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason

    2016-11-01

    This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.

  1. VizieR Online Data Catalog: Hubble Source Catalog (V1 and V2) (Whitmore+, 2016)

    NASA Astrophysics Data System (ADS)

    Whitmore, B. C.; Allam, S. S.; Budavari, T.; Casertano, S.; Downes, R. A.; Donaldson, T.; Fall, S. M.; Lubow, S. H.; Quick, L.; Strolger, L.-G.; Wallace, G.; White, R. L.

    2016-10-01

    The HSC v1 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR8 (data release 8). The crossmatching process involves adjusting the relative astrometry of overlapping images so as to minimize positional offsets between closely aligned sources in different images. After correction, the astrometric residuals of crossmatched sources are significantly reduced, to typically less than 10 mas. The relative astrometry is supported by using Pan-STARRS, SDSS, and 2MASS as the astrometric backbone for initial corrections. In addition, the catalog includes source nondetections. The crossmatching algorithms and the properties of the initial (Beta 0.1) catalog are described in Budavari & Lubow (2012ApJ...761..188B). The HSC v2 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR9.1 (data release 9.1). The crossmatching process involves adjusting the relative astrometry of overlapping images so as to minimize positional offsets between closely aligned sources in different images. After correction, the astrometric residuals of crossmatched sources are significantly reduced, to typically less than 10 mas. The relative astrometry is supported by using Pan-STARRS, SDSS, and 2MASS as the astrometric backbone for initial corrections. In addition, the catalog includes source nondetections. The crossmatching algorithms and the properties of the initial (Beta 0.1) catalog are described in Budavari & Lubow (2012ApJ...761..188B). Hubble Source Catalog Acknowledgement: Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESAC/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). (2 data files).

  2. Modifications in SIFT-based 3D reconstruction from image sequence

    NASA Astrophysics Data System (ADS)

    Wei, Zhenzhong; Ding, Boshen; Wang, Wei

    2014-11-01

    In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been proposed and improved over the years and is widely used in image alignment and stitching, image recognition and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images. Hence, we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching process, we modify the procedure for finding correct correspondences and obtain a satisfying matching result: rejecting the "questioned" points before initial matching makes the final matching more reliable. Given SIFT's invariance to image scale, rotation, and changes in the environment, we propose a way to delete the multiple reconstructed points that occur in the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation. The limitation found in some methods, that all reprojected points must be visible at all times, also does not apply in our situation. Small errors in precision can make a big difference as the number of images increases. The paper contrasts the modified algorithm with the unmodified one. Moreover, we present an approach to evaluate the reconstruction by comparing the reconstructed angle and length ratio with actual values using a calibration target in the scene. The proposed evaluation method is easy to carry out and of great practical value: even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the Internet and from our own shots.
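
    The two modifications described above can be sketched generically: rejecting ambiguous ("questioned") correspondences with a Lowe-style nearest/second-nearest distance ratio test, and merging duplicated 3D points produced by sequential reconstruction. The thresholds, function names, and data are illustrative assumptions, not the paper's implementation.

```python
def ratio_test(dists, ratio=0.8):
    """Keep a match only if its best descriptor distance is clearly
    below the second best (Lowe-style ratio test)."""
    d = sorted(dists)
    return d[0] < ratio * d[1]

def merge_duplicates(points, tol=0.05):
    """Collapse reconstructed 3D points closer than tol to one point."""
    kept = []
    for p in points:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 > tol
               for q in kept):
            kept.append(p)
    return kept

print(ratio_test([0.2, 0.9, 1.1]))        # unambiguous match -> True
print(ratio_test([0.5, 0.55, 0.6]))       # ambiguous match -> False
pts = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (2.0, 1.0, 3.0)]
print(len(merge_duplicates(pts)))         # duplicated point removed -> 2
```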

  3. Transfer and conversion of images based on EIT in atom vapor.

    PubMed

    Cao, Mingtao; Zhang, Liyun; Yu, Ya; Ye, Fengjuan; Wei, Dong; Guo, Wenge; Zhang, Shougang; Gao, Hong; Li, Fuli

    2014-05-01

    Transfer and conversion of images between different wavelengths or polarization has significant applications in optical communication and quantum information processing. We demonstrated the transfer of images based on electromagnetically induced transparency (EIT) in a rubidium vapor cell. In experiments, a 2D image generated by a spatial light modulator is used as a coupling field, and a plane wave served as a signal field. We found that the image carried by coupling field could be transferred to that carried by signal field, and the spatial patterns of transferred image are much better than that of the initial image. It also could be much smaller than that determined by the diffraction limit of the optical system. We also studied the subdiffraction propagation for the transferred image. Our results may have applications in quantum interference lithography and coherent Raman spectroscopy.

  4. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  5. Theoretical modeling of the subject: Western and Eastern types of human reflexion.

    PubMed

    Lefebvre, Vladimir A

    2017-12-01

    The author puts forth the hypothesis that mental phenomena are connected with the thermodynamic properties of large neural networks. A model of the subject with reflexion and capable of meditation is constructed. The processes of reflexion and meditation are presented as a sequence of heat engines. Each subsequent engine compensates for the imperfectness of the preceding engine by performing work equal to the lost available work of the preceding one. The sequence of heat engines is regarded as a chain of the subject's mental images of the self. Each engine can be interpreted as an image of the self held by the engine next to it, and the work performed by the engines as the emotions that the subject and his images are experiencing. Two types of meditation are analyzed: dissolution in nothingness and union with the Absolute. In the first type, the initial engine is the one that yields heat to the coldest reservoir; in the second type, the initial engine is the one that takes heat from the hottest reservoir. The main concepts of thermodynamics are reviewed in relation to the process of human reflexion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Degradation and Deformation of Scarps and Slopes on Io: New Results

    NASA Technical Reports Server (NTRS)

    Moore, J. M.; Sullivan, R. J.; Pappalardo, R. T.; Turtle, E. P.

    2000-01-01

    Initial analysis of degradational processes on scarps and slopes on Io, using images just acquired by the Galileo SSI team. Among other results is evidence for sublimation, sapping, and perhaps "glacial" flow of interstitial volatiles in relief-forming materials.

  7. Faint Debris Detection by Particle Based Track-Before-Detect Method

    NASA Astrophysics Data System (ADS)

    Uetsuhara, M.; Ikoma, N.

    2014-09-01

    This study proposes a particle method to detect faint debris, hardly visible in a single frame, from an image sequence, based on the concept of track-before-detect (TBD). The most widely used detection approach is detect-before-track (DBT), which first detects the signals of targets in a single frame by distinguishing intensity differences between foreground and background, then associates the signals of each target between frames. DBT is capable of tracking bright targets but is limited: it must account for the presence of false signals and has difficulty recovering from false associations. TBD methods, on the other hand, track targets without explicitly detecting their signals, then evaluate the goodness of each track to obtain detection results. TBD has an advantage over DBT in detecting weak signals near the background level in a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To remove these significant drawbacks of brute-force search and a not-fully-automated process, this study proposes a faint-debris detection algorithm based on a particle method consisting of sequential updates of the target state and a heuristic search for the initial state. The state consists of the position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update is implemented by a particle filter (PF). The PF is an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is used to search for this initial distribution: the EA iteratively applies propagation and likelihood evaluation of particles over the same image sequence, and the resulting set of particles is used as the initial distribution of the PF. This paper describes the algorithm of the proposed faint-debris detection method. The algorithm's performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which should contain a sufficient number of faint debris images. The results indicate that the proposed method is capable of tracking faint debris with moderate computational cost at an operational level.
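
    The core TBD idea above can be shown with a toy 1D example: a target too faint to exceed single-frame clutter becomes detectable once intensity is integrated along a constant-velocity track hypothesis, here evaluated over a small grid of "particles" in position and velocity. The PF and EA machinery of the paper is not reproduced; all data and names are illustrative.

```python
frames = []
for t in range(8):                       # 8 frames, 1D "images" of width 20
    frame = [0.0] * 20
    frame[2 * t + 1] = 0.3               # faint target moving as x = 1 + 2t
    frame[(7 * t + 3) % 20] += 0.25      # roaming clutter spike
    frames.append(frame)

def track_score(x0, v):
    """Integrated intensity along the track hypothesis x(t) = x0 + v*t."""
    return sum(f[x0 + v * t] for t, f in enumerate(frames)
               if 0 <= x0 + v * t < 20)

# particle grid over initial position and velocity hypotheses
particles = [(x0, v) for x0 in range(20) for v in (-2, -1, 0, 1, 2)]
best = max(particles, key=lambda p: track_score(*p))
print(best, round(track_score(*best), 2))
```

No single frame distinguishes the 0.3 target from the 0.25 clutter, but only the true track hypothesis accumulates the target's signal across all eight frames.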

  8. Increasing the dynamic range of CMOS photodiode imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce R. (Inventor)

    2007-01-01

    A multiple-step reset process and circuit for resetting a voltage stored on a photodiode of an imaging device. The first stage of the reset occurs while the source and drain of a pixel source-follower transistor are held at ground potential and the photodiode and the gate of the pixel source-follower transistor are charged to an initial reset voltage having a potential less than that of the supply voltage. The second stage of the reset occurs after the initial reset voltage is stored on the photodiode and the gate of the pixel source-follower transistor: the source and drain voltages of the pixel source-follower transistor are released from ground potential, allowing them to assume ordinary values above ground potential and producing a capacitive feed-through effect that increases the voltage on the photodiode to a value greater than the initial reset voltage.

  9. Capacitive micromachined ultrasonic transducers for medical imaging and therapy.

    PubMed

    Khuri-Yakub, Butrus T; Oralkan, Omer

    2011-05-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have been the subject of extensive research for the last two decades. Although they were initially developed for air-coupled applications, today their main application space is medical imaging and therapy. This paper first presents a brief description of CMUTs, their basic structure, and operating principles. Our progression through several generations of fabrication processes is discussed, with an emphasis on the advantages and disadvantages of each process. Monolithic and hybrid approaches for integrating CMUTs with supporting integrated circuits are surveyed. Several prototype transducer arrays we developed with integrated front-end electronic circuits are described, along with their use for 2-D and 3-D anatomical and functional imaging and for ablative therapies. The presented results establish the CMUT as a MEMS technology suited to many medical diagnostic and therapeutic applications.

  10. An AK-LDMeans algorithm based on image clustering

    NASA Astrophysics Data System (ADS)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for mining value from unlabeled data; its ultimate goal is to label unclassified data quickly and correctly. We use a roadmap image from current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the K value by analysing the K-cost polyline, and then uses a long-distance, high-density method to select the clustering centers, replacing the traditional choice of initial clustering centers and improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared against current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision and data mining.
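    The two ingredients of the abstract can be sketched roughly as follows: picking K at the bend of the cost polyline, and seeding centers at "long-distance, high-density" points. This is a generic reconstruction of the idea (elbow detection via the largest second difference, density-peak seeding in the spirit of Rodriguez and Laio), not the published AK-LDMeans code; all function names and parameters are illustrative.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain Lloyd iterations from a given set of initial centers."""
    lbl = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        lbl = d.argmin(1)
        centers = np.array([X[lbl == k].mean(0) if np.any(lbl == k)
                            else centers[k] for k in range(len(centers))])
    cost = ((X - centers[lbl]) ** 2).sum()
    return lbl, centers, cost

def density_distance_centers(X, k):
    """'Long-distance, high-density' seeding: rank each point by
    (local density) x (distance to the nearest denser point) and
    take the top k as initial centers."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    rho = (d < np.percentile(d, 5)).sum(1)        # neighbours within a small radius
    order = np.argsort(-rho, kind="stable")       # density ties broken by index
    delta = np.empty(len(X))
    for rank, i in enumerate(order):
        denser = d[i, order[:rank]]
        delta[i] = denser.min() if denser.size else d[i].max()
    return X[np.argsort(rho * delta)[-k:]]

def elbow_k(X, k_max=6):
    """Lock K at the sharpest bend (largest second difference) of the
    cost-vs-K polyline."""
    costs = [kmeans(X, density_distance_centers(X, k))[2]
             for k in range(1, k_max + 1)]
    c = np.array(costs)
    return int(np.argmax(c[:-2] - 2 * c[1:-1] + c[2:])) + 2
```

    On three well-separated blobs the cost curve drops steeply up to K = 3 and flattens after, so the second difference peaks there.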

  11. Capacitive micromachined ultrasonic transducers for medical imaging and therapy

    PubMed Central

    Khuri-Yakub, Butrus T.; Oralkan, Ömer

    2011-01-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have been subject to extensive research for the last two decades. Although they were initially developed for air-coupled applications, today their main application space is medical imaging and therapy. This paper first presents a brief description of CMUTs, their basic structure, and operating principles. Our progression of developing several generations of fabrication processes is discussed with an emphasis on the advantages and disadvantages of each process. Monolithic and hybrid approaches for integrating CMUTs with supporting integrated circuits are surveyed. Several prototype transducer arrays with integrated frontend electronic circuits we developed and their use for 2-D and 3-D, anatomical and functional imaging, and ablative therapies are described. The presented results prove the CMUT as a MEMS technology for many medical diagnostic and therapeutic applications. PMID:21860542

  12. Parametric dense stereovision implementation on a system-on chip (SoC).

    PubMed

    Gardel, Alfredo; Montejo, Pablo; García, Jorge; Bravo, Ignacio; Lázaro, José L

    2012-01-01

    This paper proposes a novel hardware implementation of dense recovery of stereovision 3D measurements. Traditionally, 3D stereo systems have imposed a maximum number of stereo correspondences, placing a large restriction on artificial vision algorithms. The proposed system-on-chip (SoC) provides high performance and efficiency, with a scalable architecture suited to many different situations, addressing real-time processing of the stereo image flow. Using double-buffering techniques properly combined with pipelined processing, the reconfigurable hardware yields a parametrisable SoC that gives the designer the opportunity to choose its dimensions and features. The proposed architecture needs no external memory because processing is done as the image flow arrives: the SoC provides 3D data directly, without storing whole stereo images. Our goal is to obtain high processing speed while maintaining the accuracy of the 3D data using minimal resources. Configurable parameters may be controlled by later or parallel stages of the vision algorithm executed on an embedded processor. With an FPGA clock of 100 MHz, image flows of up to 50 frames per second (fps), yielding dense stereo maps of more than 30,000 depth points from 2 Mpix images, can be obtained with minimal initial latency. The implementation of computer vision algorithms on reconfigurable hardware, especially low-level processing, opens up the prospect of its use in autonomous systems, where it can act as a coprocessor to reconstruct 3D images with high-density information in real time.
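    As a rough illustration of what such hardware computes, here is a dense winner-takes-all SAD block-matching sketch in software. It is not the paper's architecture (which streams pixels through double-buffered pipelines without storing whole images); the window size, disparity range, and use of `scipy` are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_d=8, win=5):
    """Dense disparity map by winner-takes-all SAD block matching:
    for each candidate shift d, compute windowed absolute differences
    and keep, per pixel, the d with the lowest cost."""
    best_cost = np.full(left.shape, np.inf)
    disp = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_d + 1):
        shifted = np.roll(right, d, axis=1)   # right image shifted by d
        shifted[:, :d] = 0.0
        cost = uniform_filter(np.abs(left - shifted), size=win)
        better = cost < best_cost
        disp[better] = d
        best_cost[better] = cost[better]
    return disp
```

    A hardware version would evaluate all candidate shifts in parallel per arriving pixel; the software loop makes the cost structure explicit.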

  13. Open framework for management and processing of multi-modality and multidimensional imaging data for analysis and modelling muscular function

    NASA Astrophysics Data System (ADS)

    García Juan, David; Delattre, Bénédicte M. A.; Trombella, Sara; Lynch, Sean; Becker, Matthias; Choi, Hon Fai; Ratib, Osman

    2014-03-01

    Musculoskeletal disorders (MSD) are becoming a significant economic burden on healthcare in developed countries with aging populations. Classical methods used in clinical practice for muscle assessment, such as biopsy or EMG, are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for the assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project, a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of the lower-limb muscles at rest and during exercise. These data can further be extended to 5D by adding metabolic data obtained from PET images. Our data structure and the corresponding image-processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.

  14. Multislice CT perfusion imaging of the lung in detection of pulmonary embolism

    NASA Astrophysics Data System (ADS)

    Hong, Helen; Lee, Jeongjin

    2006-03-01

    We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, an optimal segmentation technique extracts the same volume of the lungs, major airways and vascular structures from pre- and post-contrast images with different lung densities. Second, an initial registration based on the apex, hilar point and center of inertia (COI) of each unilateral lung corrects the gross translational mismatch. Third, the initial alignment is refined by iterative surface registration; for fast and robust convergence of the distance measure to its optimal value, a 3D distance map is generated by narrow-band distance propagation. Fourth, a 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting the registered pre-contrast images from the post-contrast images. To facilitate visualization of parenchymal enhancement, color-coded mapping and image fusion are used. Our method has been successfully applied to pre- and post-contrast images from ten patients in chest MDCT angiography. Experimental results show that the performance of our method is very promising compared with conventional methods in terms of visual inspection, accuracy and processing time.
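    The second stage (gross translational correction) can be sketched as below. This is a schematic reconstruction using only the center-of-inertia term; the apex and hilar landmarks of the paper are omitted, and the function names are illustrative.

```python
import numpy as np

def center_of_inertia(mask):
    """Mean voxel coordinate of a binary lung mask."""
    return np.argwhere(mask).mean(axis=0)

def initial_translation(pre_mask, post_mask):
    """Gross translational offset that maps the pre-contrast lung onto
    the post-contrast lung by aligning their centers of inertia."""
    return center_of_inertia(post_mask) - center_of_inertia(pre_mask)
```

    Applying the returned offset before surface registration removes the bulk of the mismatch, so the iterative refinement starts close to the optimum.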

  15. [Analysis of 163 rib fractures by imaging examination].

    PubMed

    Song, Tian-fu; Wang, Chao-chao

    2014-12-01

    To explore the application of imaging examination of rib fracture sites in forensic identification, features including the fracture sites, the number of imaging examinations performed, and the radiological technique used at first diagnosis were retrospectively analyzed in 56 cases comprising 163 rib fractures. The detection rate of rib fractures within 14 days was 65.6%. The initial detection rate of anterior rib fractures by X-ray was 76.2%, rising to 90.5% on a second X-ray, while the corresponding detection rates by CT were 66.7% and 80.0%, respectively. The initial detection rate of rib fractures in the axillary section by X-ray was 27.6%, rising to 58.6% on a second X-ray, while the corresponding detection rates by CT were 54.3% and 80.4%, respectively. The initial detection rate of posterior rib fractures by X-ray was 63.6%, rising to 81.8% on a second X-ray, while the corresponding detection rates by CT were 50.0% and 70.0%, respectively. It is important to pay attention to the use of combined imaging examinations and to follow-up results. In cases of suspected rib fracture in the axillary section with false-negative X-ray findings, CT examination is suggested.

  16. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  17. AAPM/RSNA physics tutorial for residents. Topics in US: B-mode US: basic concepts and new technology.

    PubMed

    Hangiandreou, Nicholas J

    2003-01-01

    Ultrasonography (US) has been used in medical imaging for over half a century. Current US scanners are based largely on the same basic principles used in the initial devices for human imaging. Modern equipment uses a pulse-echo approach with a brightness-mode (B-mode) display. Fundamental aspects of the B-mode imaging process include basic ultrasound physics, interactions of ultrasound with tissue, ultrasound pulse formation, scanning the ultrasound beam, and echo detection and signal processing. Recent technical innovations that have been developed to improve the performance of modern US equipment include the following: tissue harmonic imaging, spatial compound imaging, extended field of view imaging, coded pulse excitation, electronic section focusing, three-dimensional and four-dimensional imaging, and the general trend toward equipment miniaturization. US is a relatively inexpensive, portable, safe, and real-time modality, all of which make it one of the most widely used imaging modalities in medicine. Although B-mode US is sometimes referred to as a mature technology, this modality continues to experience a significant evolution in capability with even more exciting developments on the horizon. Copyright RSNA, 2003

  18. Roles of ON Cone Bipolar Cell Subtypes in Temporal Coding in the Mouse Retina

    PubMed Central

    Fyk-Kolodziej, Bozena; Cohn, Jesse

    2014-01-01

    In the visual system, diverse image processing starts with bipolar cells, which are the second-order neurons of the retina. Thirteen subtypes of bipolar cells have been identified, which are thought to encode different features of image signaling and to initiate distinct signal-processing streams. Although morphologically identified, the functional roles of each bipolar cell subtype in visual signal encoding are not fully understood. Here, we investigated how ON cone bipolar cells of the mouse retina encode diverse temporal image signaling. We recorded bipolar cell voltage changes in response to two different input functions: sinusoidal light and step light stimuli. Temporal tuning in ON cone bipolar cells was diverse and occurred in a subtype-dependent manner. Subtypes 5s and 8 exhibited low-pass filtering property in response to a sinusoidal light stimulus, and responded with sustained fashion to step-light stimulation. Conversely, subtypes 5f, 6, 7, and XBC exhibited bandpass filtering property in response to sinusoidal light stimuli, and responded transiently to step-light stimuli. In particular, subtypes 7 and XBC were high-temporal tuning cells. We recorded responses in different ways to further examine the underlying mechanisms of temporal tuning. Current injection evoked low-pass filtering, whereas light responses in voltage-clamp mode produced bandpass filtering in all ON bipolar cells. These findings suggest that cone photoreceptor inputs shape bandpass filtering in bipolar cells, whereas intrinsic properties of bipolar cells shape low-pass filtering. Together, our results demonstrate that ON bipolar cells encode diverse temporal image signaling in a subtype-dependent manner to initiate temporal visual information-processing pathways. PMID:24966376

  19. Image enhancement using MCNP5 code and MATLAB in neutron radiography.

    PubMed

    Tharwat, Montaser; Mohamed, Nader; Mongy, T

    2014-07-01

    This work presents a method that can be used to enhance the neutron radiography (NR) image of objects containing highly scattering materials such as hydrogen, carbon and other light materials. The method uses the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image, and determine the scattered-neutron distribution that causes image blur; MATLAB is then used to subtract this scattered-neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in January 2013. The MATLAB enhancement method is well suited to static film-based neutron radiography, while in the neutron imaging (NI) technique, image enhancement and quantitative measurement were performed efficiently using ImageJ software. The enhanced image quality and quantitative measurements are presented in this work. Copyright © 2014 Elsevier Ltd. All rights reserved.
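    A minimal sketch of the correction step: the simulated scattered-neutron distribution (total minus direct flux from the MCNP5 run) is scaled to the measured exposure and subtracted pixel-wise. The function and array names are illustrative assumptions, not the paper's MATLAB code.

```python
import numpy as np

def remove_scatter(measured, simulated_total, simulated_direct):
    """Subtract the simulated scattered-neutron component from the
    measured image. The scatter estimate (total minus direct flux)
    is rescaled so the simulated total matches the measured exposure."""
    scatter = simulated_total - simulated_direct
    scale = measured.sum() / simulated_total.sum()
    return np.clip(measured - scale * scatter, 0.0, None)
```

    The clip guards against small negative residuals where the simulation slightly overestimates the scatter.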

  20. TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, S; Nelson, J; Samei, E

    2016-06-15

    Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low noise, high resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems previously unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow. Going forward, the algorithm is being deployed to monitor data from all gamma cameras within our health system.
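    The MI index at the heart of the initial test can be sketched as a joint-histogram estimate; the bin count and natural-log units below are illustrative choices, and any decision threshold would have to be re-derived for a reimplementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images; low
    values against a registered reference flag possible distortions."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(1, keepdims=True)            # marginals
    py = pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    An image compared with itself scores near its own entropy, while statistically independent images score near zero.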

  1. Use of micro computed-tomography and 3D printing for reverse engineering of mouse embryo nasal capsule

    NASA Astrophysics Data System (ADS)

    Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.

    2016-03-01

    Imaging of the increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. It is especially important for studying the shape-organizing processes during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with a resolution in the micrometer range, seems in combination with contrasting techniques to be the most suitable approach for non-destructive imaging of embryonic developing cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object may offer novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of a mouse embryo nasal capsule, i.e. the staining and μCT scanning combined with advanced data processing and 3D printing.

  2. The Use of Multidimensional Image-Based Analysis to Accurately Monitor Cell Growth in 3D Bioreactor Culture

    PubMed Central

    Baradez, Marc-Olivier; Marshall, Damian

    2011-01-01

    The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information, making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to be >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13-day bioreactor culture period and how changes to manufacturing processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes, facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809

  3. The use of multidimensional image-based analysis to accurately monitor cell growth in 3D bioreactor culture.

    PubMed

    Baradez, Marc-Olivier; Marshall, Damian

    2011-01-01

    The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information, making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to be >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13-day bioreactor culture period and how changes to manufacturing processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes, facilitating the transition towards bioreactor based manufacture for clinical grade cells.

  4. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction, or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  5. Paediatric x-ray radiation dose reduction and image quality analysis.

    PubMed

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.

  6. Segmentation of the glottal space from laryngeal images using the watershed transform.

    PubMed

    Osma-Ruiz, Víctor; Godino-Llorente, Juan I; Sáenz-Lechón, Nicolás; Fraile, Rubén

    2008-04-01

    The present work describes a new method for the automatic detection of the glottal space from laryngeal images obtained either with high-speed or with conventional video cameras attached to a laryngoscope. The detection is based on the combination of several relevant techniques from the field of digital image processing. The image is segmented with a watershed transform followed by region merging, and the final decision is taken using a simple linear predictor. This scheme successfully segmented the glottal space in all the test images used. The method can be considered a generalist approach to segmentation of the glottal space because, in contrast with other methods in the literature, it requires neither initialization nor strict environmental conditions extracted from the images to be processed. The main advantage, therefore, is that the user does not have to outline the region of interest with a mouse click. Some a priori knowledge about the glottal space is still needed, but it can be considered weak compared with the environmental conditions fixed in earlier works.
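    For readers unfamiliar with the transform, a marker-based watershed can be written as a priority flood in a few lines. This is the standard textbook algorithm, not the authors' code (which also performs region merging and a linear-predictor decision): labeled seeds grow outward in order of increasing intensity, so bright ridges become the region boundaries.

```python
import heapq
import numpy as np

def watershed(img, markers):
    """Priority-flood, marker-based watershed: pixels are claimed by
    the seed whose flood reaches them first when flooding proceeds
    from low to high intensity."""
    lab = markers.copy()
    heap = []
    h, w = img.shape
    for y, x in np.argwhere(markers > 0):
        heapq.heappush(heap, (img[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and lab[ny, nx] == 0:
                lab[ny, nx] = lab[y, x]           # claim the neighbour
                heapq.heappush(heap, (img[ny, nx], ny, nx))
    return lab
```

    In glottal segmentation the flooding is typically run on a gradient image, so that edges, rather than bright pixels, form the ridges.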

  7. Serious Gaming Technologies Support Human Factors Investigations of Advanced Interfaces for Semi-Autonomous Vehicles

    DTIC Science & Technology

    2006-06-01

    conventional camera vs. thermal imager vs. night vision; camera field of view (narrow, wide, panoramic); keyboard + mouse vs. joystick control vs...motorised platform which could scan the immediate area, producing a 360° panorama of “stitched-together” digital pictures. The picture file, together with...VBS was used to automate the process of creating a QuickTime panorama (.mov or .qt), which includes the initial retrieval of the images, the

  8. Targeting SRC Family Kinases and HSP90 in Lung Cancer

    DTIC Science & Technology

    2016-12-01

    inhalation of Adeno-Cre, followed by MRI imaging at regular intervals to detect tumor initiation and growth, followed by euthanasia and processing of...experimental endpoint. 10 mice were used per time point Representative MRI data describing tumor volume (TV) are shown in Figure 1. Quantification of data is...dasatinib, we were able to make several conclusions. Figure 1. Representative MRI images from Nedd9wt or Nedd9 null Kras mutant mice, treated with

  9. Fluctuations of the transcription factor ATML1 generate the pattern of giant cells in the Arabidopsis sepal

    PubMed Central

    Meyer, Heather M; Teles, José; Formosa-Jordan, Pau; Refahi, Yassin; San-Bento, Rita; Ingram, Gwyneth; Jönsson, Henrik; Locke, James C W; Roeder, Adrienne H K

    2017-01-01

    Multicellular development produces patterns of specialized cell types. Yet, it is often unclear how individual cells within a field of identical cells initiate the patterning process. Using live imaging, quantitative image analyses and modeling, we show that during Arabidopsis thaliana sepal development, fluctuations in the concentration of the transcription factor ATML1 pattern a field of identical epidermal cells to differentiate into giant cells interspersed between smaller cells. We find that ATML1 is expressed in all epidermal cells. However, its level fluctuates in each of these cells. If ATML1 levels surpass a threshold during the G2 phase of the cell cycle, the cell will likely enter a state of endoreduplication and become giant. Otherwise, the cell divides. Our results demonstrate a fluctuation-driven patterning mechanism for how cell fate decisions can be initiated through a random yet tightly regulated process. DOI: http://dx.doi.org/10.7554/eLife.19131.001 PMID:28145865
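    The fluctuation-threshold mechanism can be caricatured in a few lines of simulation. This toy model is our illustration, not the authors' published model: the mean-reverting noise process, the 25% G2 occupancy, and the threshold values are all arbitrary assumptions chosen only to show how random fluctuations gated by a cell-cycle window yield an interspersed pattern of fates.

```python
import numpy as np

def simulate_patterning(n_cells=500, steps=200, thresh=2.0, seed=0):
    """Fluctuation-threshold sketch: each cell's ATML1-like level
    follows a mean-reverting random walk; exceeding `thresh` during a
    G2 window commits the cell to endoreduplication ('giant').
    Returns the fraction of cells that become giant."""
    rng = np.random.default_rng(seed)
    level = np.ones(n_cells)                       # start at the mean level
    giant = np.zeros(n_cells, dtype=bool)
    in_g2 = rng.random((steps, n_cells)) < 0.25    # assumed 25% G2 occupancy
    for t in range(steps):
        level += 0.1 * (1.0 - level) + 0.3 * rng.normal(size=n_cells)
        giant |= in_g2[t] & (level > thresh)       # threshold crossing in G2
    return giant.mean()
```

    Lowering the threshold increases the giant-cell fraction, while the G2 gating keeps the outcome probabilistic for each individual cell.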

  10. NASA's small spacecraft technology initiative "Clark" spacecraft

    NASA Astrophysics Data System (ADS)

    Hayduk, Robert J.; Scott, Walter S.; Walberg, Gerald D.; Butts, James J.; Starr, Richard D.

    1996-11-01

    The Small Satellite Technology Initiative (SSTI) is a National Aeronautics and Space Administration (NASA) program to demonstrate smaller, high technology satellites constructed rapidly and less expensively. Under SSTI, NASA funded the development of "Clark," a high technology demonstration satellite to provide 3-m resolution panchromatic and 15-m resolution multispectral images, as well as collect atmospheric constituent and cosmic x-ray data. The 690-lb satellite, to be launched in early 1997, will be in a 476 km, circular, sun-synchronous polar orbit. This paper describes the program objectives, the technical characteristics of the sensors and satellite, image processing, archiving and distribution. Data archiving and distribution will be performed by NASA Stennis Space Center and by the EROS Data Center, Sioux Falls, South Dakota, USA.

  11. Development of an inexpensive optical method for studies of dental erosion process in vitro

    NASA Astrophysics Data System (ADS)

    Nasution, A. M. T.; Noerjanto, B.; Triwanto, L.

    2008-09-01

    Teeth play important roles in the digestion of food, in supporting the facial structure, and in the articulation of speech. Abnormalities in tooth structure can be initiated by an erosion process, due to diet or beverage consumption, that leads to destruction affecting tooth functionality. Research into the erosion processes that lead to such abnormalities is important for care and prevention. Accurate measurement methods are necessary as research tools capable of quantifying the degree of dental destruction. In this work an inexpensive optical method is developed as a tool to study the dental erosion process. It is based on extracting parameters from 3D dental visual information. The 3D visual image is obtained by reconstruction from multiple lateral 2D projections captured from many angles. Using a simple stepper motor and a pocket digital camera, a sequence of multi-projection 2D images of a premolar tooth is obtained. These images are then reconstructed to produce a 3D image, which is useful for quantifying the related dental erosion parameters. The quantification is based on the shrinkage of dental volume as well as on surface properties affected by the erosion process. The results are correlated with measurements of dissolved calcium released from the tooth, obtained by atomic absorption spectrometry. The proposed method would be useful as a visualization tool in engineering, dentistry, and medical research, as well as for educational purposes.

  12. Radiometric and Geometric Accuracy Analysis of Rasat Pan Imagery

    NASA Astrophysics Data System (ADS)

    Kocaman, S.; Yalcin, I.; Guler, M.

    2016-06-01

    RASAT is the second Turkish Earth observation satellite, launched in 2011. It operates on the pushbroom principle and acquires panchromatic and MS images with 7.5 m and 15 m resolution, respectively; the swath width of the sensor is 30 km. The main aim of this study is to analyse the radiometric and geometric quality of RASAT images, and a systematic validation approach for the RASAT imagery and its products is being applied. A RASAT image pair acquired over Kesan city in the Edirne province of Turkey is used for the investigations. The raw RASAT data (L0) are processed by the Turkish Space Agency (TUBITAK-UZAY) to produce higher-level image products, including radiometrically processed (L1), georeferenced (L2) and orthorectified (L3) data, as well as pansharpened images. The image quality assessments include visual inspections and noise, MTF and histogram analyses. The geometric accuracy assessment results are only preliminary, and the assessment is performed on the raw images. The geometric accuracy potential is investigated using 3D ground control points extracted from road intersections, measured manually in stereo from aerial images with 20 cm resolution and accuracy. The initial results of the study, obtained using one RASAT panchromatic image pair, are presented in this paper.

  13. Active learning methods for interactive image retrieval.

    PubMed

    Gosselin, Philippe Henri; Cord, Matthieu

    2008-07-01

    Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, many extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process, RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error used to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
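The boundary-sensitivity issue discussed above can be illustrated with the generic uncertainty-sampling step that boundary-correcting methods such as RETIN refine. This sketch is not the RETIN algorithm itself; the function name and random scores are invented for illustration:

```python
import numpy as np

def boundary_batch(scores: np.ndarray, batch_size: int) -> np.ndarray:
    """Select the batch of unlabeled images whose classifier scores lie
    closest to the decision boundary (score = 0) -- the basic active
    selection step whose boundary estimate RETIN then corrects."""
    order = np.argsort(np.abs(scores))
    return order[:batch_size]

rng = np.random.default_rng(0)
scores = rng.normal(size=100)     # signed distances to the class boundary
picked = boundary_batch(scores, 5)
```

In a real CBIR loop, the labels obtained for `picked` would retrain the classifier and re-rank the database before the next round.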

  14. MIRIADS: miniature infrared imaging applications development system description and operation

    NASA Astrophysics Data System (ADS)

    Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.

    2001-10-01

    A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Modular Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.

  15. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  16. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosing camera performance, arriving at a preflight imaging strategy, and revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  17. Time-lapse Raman imaging of osteoblast differentiation

    PubMed Central

    Hashimoto, Aya; Yamaguchi, Yoshinori; Chiu, Liang-da; Morimoto, Chiaki; Fujita, Katsumasa; Takedachi, Masahide; Kawata, Satoshi; Murakami, Shinya; Tamiya, Eiichi

    2015-01-01

    Osteoblastic mineralization occurs during the early stages of bone formation. During this mineralization, hydroxyapatite (HA), a major component of bone, is synthesized, generating hard tissue. Many of the mechanisms driving biomineralization remain unclear because the traditional biochemical assays used to investigate them are destructive techniques incompatible with viable cells. To determine the temporal changes in mineralization-related biomolecules at mineralization spots, we performed time-lapse Raman imaging of mouse osteoblasts at a subcellular resolution throughout the mineralization process. Raman imaging enabled us to analyze the dynamics of the related biomolecules at mineralization spots throughout the entire process of mineralization. Here, we stimulated KUSA-A1 cells to differentiate into osteoblasts and conducted time-lapse Raman imaging on them every 4 hours for 24 hours, beginning 5 days after the stimulation. The HA and cytochrome c Raman bands were used as markers for osteoblastic mineralization and apoptosis. From the Raman images successfully acquired throughout the mineralization process, we found that β-carotene acts as a biomarker that indicates the initiation of osteoblastic mineralization. A fluctuation of cytochrome c concentration, which indicates cell apoptosis, was also observed during mineralization. We expect time-lapse Raman imaging to help us to further elucidate osteoblastic mineralization mechanisms that have previously been unobservable. PMID:26211729

  18. Time-lapse Raman imaging of osteoblast differentiation

    NASA Astrophysics Data System (ADS)

    Hashimoto, Aya; Yamaguchi, Yoshinori; Chiu, Liang-Da; Morimoto, Chiaki; Fujita, Katsumasa; Takedachi, Masahide; Kawata, Satoshi; Murakami, Shinya; Tamiya, Eiichi

    2015-07-01

    Osteoblastic mineralization occurs during the early stages of bone formation. During this mineralization, hydroxyapatite (HA), a major component of bone, is synthesized, generating hard tissue. Many of the mechanisms driving biomineralization remain unclear because the traditional biochemical assays used to investigate them are destructive techniques incompatible with viable cells. To determine the temporal changes in mineralization-related biomolecules at mineralization spots, we performed time-lapse Raman imaging of mouse osteoblasts at a subcellular resolution throughout the mineralization process. Raman imaging enabled us to analyze the dynamics of the related biomolecules at mineralization spots throughout the entire process of mineralization. Here, we stimulated KUSA-A1 cells to differentiate into osteoblasts and conducted time-lapse Raman imaging on them every 4 hours for 24 hours, beginning 5 days after the stimulation. The HA and cytochrome c Raman bands were used as markers for osteoblastic mineralization and apoptosis. From the Raman images successfully acquired throughout the mineralization process, we found that β-carotene acts as a biomarker that indicates the initiation of osteoblastic mineralization. A fluctuation of cytochrome c concentration, which indicates cell apoptosis, was also observed during mineralization. We expect time-lapse Raman imaging to help us to further elucidate osteoblastic mineralization mechanisms that have previously been unobservable.

  19. A novel mesh processing based technique for 3D plant analysis

    PubMed Central

    2012-01-01

    Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error in the estimated morphological parameters.
Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width, and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
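Accuracy figures of the kind reported in the conclusion (mean absolute percentage error against manual ground truth, plus a correlation coefficient) can be computed from paired measurements in a few lines. The sample numbers below are invented for illustration; they are not the paper's data:

```python
import numpy as np

def mean_abs_pct_error(auto: np.ndarray, manual: np.ndarray) -> float:
    """Mean absolute error of automated estimates, expressed as a
    percentage of the manual ground-truth measurements."""
    return float(np.mean(np.abs(auto - manual) / manual) * 100)

def pearson_r(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two measurement series."""
    return float(np.corrcoef(a, b)[0, 1])

manual = np.array([10.0, 12.5, 9.8, 14.2])   # e.g. stem heights (cm), manual
auto = np.array([10.9, 11.8, 10.3, 15.1])    # automated mesh-based estimates
mape = mean_abs_pct_error(auto, manual)
r = pearson_r(auto, manual)
```

One such pair of numbers per trait (stem height, leaf width, leaf length) yields the triplets quoted above.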

  20. Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager

    DTIC Science & Technology

    2014-03-01

    quality. This example uses five basic images: a backlit bar chart with random intensity, 100 nm separation. A total of 54 initial target... compared for a variety of scenes. Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target... COMPARATIVE ANALYSIS OF RECONSTRUCTED IMAGE QUALITY IN A SIMULATED CHROMOTOMOGRAPHIC IMAGER THESIS

  1. Improving Pattern Recognition and Neural Network Algorithms with Applications to Solar Panel Energy Optimization

    NASA Astrophysics Data System (ADS)

    Zamora Ramos, Ernesto

    Artificial intelligence is a big part of automation, and with today's technological advances it has taken great strides towards positioning itself as the technology of the future to control, enhance, and perfect automation. Computer vision, which includes pattern recognition, classification, and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images, based on modifications to standard histogram equalization. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many other areas. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for the capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons, and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameter (weight) initialization, and so on, are chosen via a trial and error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate which among a group of candidate strategies would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure to compare initialization strategies and select the best among them beforehand, without having to complete multiple training sessions for each candidate strategy to compare final results.
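As an illustration of why weight initialization matters, the following sketch compares two common strategies by a simple proxy (activation variance through a ReLU network). The dissertation's own comparison metric is not given in the abstract, so this is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer_variances(init_std, sizes, n_samples=256):
    """Propagate random inputs through a ReLU net and record the activation
    variance at each layer -- a crude proxy for how well an initialization
    preserves signal scale during the early phase of training."""
    x = rng.normal(size=(n_samples, sizes[0]))
    variances = []
    for fan_in, fan_out in zip(sizes, sizes[1:]):
        W = rng.normal(0.0, init_std(fan_in), size=(fan_in, fan_out))
        x = np.maximum(x @ W, 0.0)          # ReLU activation
        variances.append(float(x.var()))
    return variances

sizes = [64, 64, 64, 64, 64]
xavier = layer_variances(lambda n: np.sqrt(1.0 / n), sizes)   # Xavier/Glorot
he = layer_variances(lambda n: np.sqrt(2.0 / n), sizes)       # He (ReLU-aware)
```

Under ReLU, the He scheme roughly preserves activation scale across layers while the Xavier scheme lets it decay, which is the kind of difference a quantitative pre-training comparison is meant to expose.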

  2. Emerging Computer Media: On Image Interaction

    NASA Astrophysics Data System (ADS)

    Lippman, Andrew B.

    1982-01-01

    Emerging technologies such as inexpensive, powerful local computing, optical digital videodiscs, and the technologies of human-machine interaction are initiating a revolution in both image storage systems and image interaction systems. This paper presents a review of new approaches to computer media predicated upon three-dimensional position sensing, speech recognition, and high-density image storage. Examples are shown, such as the Spatial Data Management System, wherein the free use of place results in intuitively clear retrieval systems and potentials for image association; the Movie-Map, wherein inherently static media generate dynamic views of data; and conferencing work-in-progress, wherein joint processing is stressed. Application to medical imaging will be suggested, but the primary emphasis is on the general direction of imaging and reference systems. We are passing out of the age of mere possibility in computer graphics and image processing and entering the age of ready usability.

  3. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

    Initial research in this study, using theoretical radiation transport models, established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. This data was processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full frame mean value versus time verify the effectiveness of the system.
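The filter cascade described above (highpass, moving average, absolute value, threshold) can be sketched for a single pixel's intensity sequence. The first-difference highpass, window length, and threshold below are illustrative choices, since the record does not give the actual filter coefficients:

```python
import numpy as np

def leak_decision(intensity: np.ndarray, window: int = 5,
                  thresh: float = 0.5) -> np.ndarray:
    """Frame-by-frame leak/no-leak decision for one pixel's intensity
    sequence: a first-difference highpass cascaded with a moving average,
    followed by absolute value and threshold."""
    highpass = np.diff(intensity, prepend=intensity[0])   # sudden changes pass
    smoothed = np.convolve(highpass, np.ones(window) / window, mode="same")
    return np.abs(smoothed) > thresh

# A sustained step at frame 50 models a leak appearing in the field of view.
frames = np.concatenate([np.zeros(50), np.ones(50) * 5.0])
decision = leak_decision(frames)
```

Averaging the thresholded output over a full frame, as the abstract describes, would give the single time-varying mean value used to gauge leak intensity and extent.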

  4. Molecular Memory of Morphologies by Septins during Neuron Generation Allows Early Polarity Inheritance.

    PubMed

    Boubakar, Leila; Falk, Julien; Ducuing, Hugo; Thoinet, Karine; Reynaud, Florie; Derrington, Edmund; Castellani, Valérie

    2017-08-16

    Transmission of polarity established early during cell lineage history is emerging as a key process guiding cell differentiation. Highly polarized neurons provide a fascinating model to study inheritance of polarity over cell generations and across morphological transitions. Neural crest cells (NCCs) migrate to the dorsal root ganglia to generate neurons directly or after cell divisions in situ. Using live imaging of vertebrate embryo slices, we found that bipolar NCC progenitors lose their polarity, retracting their processes to round for division, but generate neurons with bipolar morphology by emitting processes from the same locations as the progenitor. Monitoring the dynamics of Septins, which play key roles in yeast polarity, indicates that Septin 7 tags process sites for re-initiation of process growth following mitosis. Interfering with Septins blocks this mechanism. Thus, Septins store polarity features during mitotic rounding so that daughters can reconstitute the initial progenitor polarity. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application adapted for large numbers of images. It provides high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. 
Implementing this software is beneficial for making 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
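Although SpheroidSizer's own measurement is based on active contours, its core output (major/minor axial lengths of a segmented spheroid) can be illustrated with a simple moment-based ellipse fit on a binary mask; the function and synthetic mask below are our own illustrative stand-ins, not the tool's implementation:

```python
import numpy as np

def axes_from_mask(mask: np.ndarray) -> tuple:
    """Major/minor axis lengths (pixels) of a segmented spheroid mask,
    from the eigenvalues of its second central moments (an ellipse fit)."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys - ys.mean(), xs - xs.mean()])
    cov = coords @ coords.T / coords.shape[1]
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # full axis length of the equivalent ellipse = 4 * sqrt(eigenvalue)
    return 4 * np.sqrt(evals[0]), 4 * np.sqrt(evals[1])

# Synthetic elliptical 'spheroid': semi-axes 30 and 20 pixels.
y, x = np.ogrid[-40:41, -40:41]
mask = (x / 30.0) ** 2 + (y / 20.0) ** 2 <= 1
major, minor = axes_from_mask(mask)
```

From the two axis lengths, a spheroid volume estimate can then be derived with a standard ellipsoid approximation, which is the kind of per-spheroid figure the software exports to spreadsheets.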

  6. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and first-order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super-resolution in all three dimensions. Experimental results from an LED array microscope are presented.

  7. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. 
Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and template matching and final registration involving C-arm calibration were 36%, 73%, and 93%, respectively, while registration accuracy of 0.59 mm was the best after final registration. By compensating in-plane translation errors by initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially if C-arm calibration was performed simultaneously with the 3D–2D image registration. Conclusions: Because the tested methods perform simultaneous C-arm calibration and 3D–2D registration based solely on anatomical information, they have a high potential for automation and thus for an immediate integration into current interventional workflow. One of the authors’ main contributions is also comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.

  8. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user interface mode, are described.

  9. VizieR Online Data Catalog: Palomar Transient Factory photometric observations (Arcavi+, 2014)

    NASA Astrophysics Data System (ADS)

    Arcavi, I.; Gal-Yam, A.; Sullivan, M.; Pan, Y.-C.; Cenko, S. B.; Horesh, A.; Ofek, E. O.; De Cia, A.; Yan, L.; Yang, C.-W.; Howell, D. A.; Tal, D.; Kulkarni, S. R.; Tendulkar, S. P.; Tang, S.; Xu, D.; Sternberg, A.; Cohen, J. G.; Bloom, J. S.; Nugent, P. E.; Kasliwal, M. M.; Perley, D. A.; Quimby, R. M.; Miller, A. A.; Theissen, C. A.; Laher, R. R.

    2017-04-01

    All the events from our archival search were discovered by the Palomar 48 inch Oschin Schmidt Telescope (P48) as part of the PTF survey using the Mould R-band filter. We obtained photometric observations in the R and g bands using P48, and in g, r, and i bands with the Palomar 60 inch telescope (P60; Cenko et al. 2006PASP..118.1396C). Initial processing of the P48 images was conducted by the Infrared Processing and Analysis Center (IPAC; Laher et al. 2014PASP..126..674L). Photometry was extracted using a custom PSF fitting routine (e.g., Sullivan et al. 2006AJ....131..960S), which measures the transient flux after image subtraction (using template images taken before the outburst or long after it faded). (1 data file).

  10. The neural basis of functional neuroimaging signal with positron and single-photon emission tomography.

    PubMed

    Sestini, S

    2007-07-01

    Functional imaging techniques such as positron and single-photon emission tomography exploit the relationship between neural activity, energy demand and cerebral blood flow to functionally map the brain. Despite the fact that neurobiological processes are not completely understood, several results have revealed the signals that trigger the metabolic and vascular changes accompanying variations in neural activity. Advances in this field have demonstrated that release of the major excitatory neurotransmitter glutamate initiates diverse signaling processes between neurons, astrocytes and blood perfusion, and that this signaling is crucial for the occurrence of brain imaging signals. Better understanding of the neural sites of energy consumption and the temporal correlation between energy demand, energy consumption and associated cerebrovascular hemodynamics gives novel insight into the potential of these imaging tools in the study of metabolic neurodegenerative disorders.

  11. Investigation of nucleation and growth processes of diamond films by atomic force microscopy

    NASA Technical Reports Server (NTRS)

    George, M. A.; Burger, A.; Collins, W. E.; Davidson, J. L.; Barnes, A. V.; Tolk, N. H.

    1994-01-01

    The nucleation and growth of plasma-enhanced chemical-vapor deposited polycrystalline diamond films were studied using atomic force microscopy (AFM). AFM images were obtained for (1) nucleated diamond films produced from depositions that were terminated during the initial stages of growth, (2) the silicon substrate-diamond film interface side of diamond films (1-4 micrometers thick) removed from the original surface of the substrate, and (3) the cross-sectional fracture surface of the film, including the Si/diamond interface. Pronounced tip effects were observed for early-stage diamond nucleation attributed to tip convolution in the AFM images. AFM images of the film's cross section and interface, however, were not highly affected by tip convolution, and the images indicate that the surface of the silicon substrate is initially covered by a small grained polycrystalline-like film and the formation of this precursor film is followed by nucleation of the diamond film on top of this layer. X-ray photoelectron spectroscopy spectra indicate that some silicon carbide is present in the precursor layer.

  12. An interactive toolbox for atlas-based segmentation and coding of volumetric images

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Luti, S.; Duay, V.; Thiran, J.-Ph.

    2007-03-01

    Medical imaging poses the great challenge of having compression algorithms that are lossless for diagnostic and legal reasons and yet provide high compression rates for reduced storage and transmission time. The images usually consist of a region of interest representing the part of the body under investigation, surrounded by a "background" that is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D coding system integrating both segmentation and compression tools. The ROI is extracted by an atlas-based 3D segmentation method combining active contours with information-theoretic principles, and the resulting segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing the medical doctors to supervise the segmentation process and, if needed, reshape the detected contours at any point. The process is initiated by the user through the selection of either one pre-defined reference image or one image of the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the next, where it is used as the initial border estimate. In this way, the entire volume is segmented based on a unique 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image regions. Two coding systems were considered: the JPEG3D standard and the 3D-SPIHT. The evaluation of the performance with respect to both segmentation and coding proved the high potential of the proposed system in providing an integrated, low-cost and computationally effective solution for CAD and PAC systems.

  13. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2017-02-01

Medical image registration establishes a correspondence between images of biological structures and is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. This initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images; the selection of landmarks is therefore important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these landmarks are localized in unseen image volumes and used to compute a smoothing thin-plate spline transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when the presented registration initialization is used instead of a standard intensity-based affine registration.
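The landmark-driven initialization above rests on the thin-plate spline. A minimal 2D sketch with plain NumPy follows; the function names, the 2D (rather than 3D) setting, and the placement of the smoothing term on the kernel diagonal are illustrative assumptions, not the authors' implementation. A nonzero `smooth` relaxes exact interpolation into a smoothing spline, which tolerates landmark localization error:

```python
import numpy as np

def tps_fit(src, dst, smooth=0.0):
    """Fit a 2D thin-plate spline mapping src landmarks onto dst.
    src, dst: (n, 2) arrays of corresponding points; smooth adds a
    regularization term to the kernel diagonal (0 = exact interpolation,
    larger values = smoother, more robust map)."""
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d * d * np.log(d), 0.0)   # U(r) = r^2 log r
    K += smooth * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])             # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)
    return params[:n], params[n:]                     # RBF weights, affine coeffs

def tps_apply(src, weights, affine, pts):
    """Map query points pts through the fitted spline."""
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(d > 0, d * d * np.log(d), 0.0)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ weights + P @ affine
```

With `smooth=0` the fitted spline reproduces the landmark correspondences exactly, which is the property that makes it a natural point-based preregistration.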

  14. Magnetic resonance imaging and image analysis for assessment of HPMC matrix tablets structural evolution in USP Apparatus 4.

    PubMed

    Kulinowski, Piotr; Dorożyński, Przemysław; Młynarczyk, Anna; Węglarz, Władysław P

    2011-05-01

The purpose of the study was to present a methodology for processing Magnetic Resonance Imaging (MRI) data to quantify the evolution of the dosage-form matrix during drug dissolution. The results were verified by comparison with other approaches presented in the literature. A commercially available, HPMC-based quetiapine fumarate tablet was studied with a 4.7 T MR system. Imaging was performed inside an MRI probe-head coupled with a flow-through cell for 12 h in circulating water. The images were segmented into three regions using threshold-based segmentation algorithms, exploiting the trimodal structure of the image intensity histograms. The temporal evolution of the dry glassy, swollen glassy and gel regions was monitored. Characteristic features were observed: an initially high expansion rate of the swollen glassy and gel layers due to initial water uptake, disappearance of the dry glassy core and a maximum area of the swollen glassy region at 4 h, and a subsequent increase in gel-layer thickness at the expense of the swollen glassy layer. The temporal evolution of an HPMC-based tablet observed by noninvasive MRI integrated with USP Apparatus 4 was found to be consistent with both the theoretical model based on polymer disentanglement concentration and experimental VIS/FTIR studies.
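Threshold-based segmentation of a trimodal histogram reduces to two cut points. A minimal sketch, assuming threshold values read off the histogram valleys (the thresholds, function names, and region names here are illustrative, not values from the study):

```python
import numpy as np

def segment_trimodal(img, t1, t2):
    """Label each pixel by intensity: 0 = dry glassy core (low signal),
    1 = swollen glassy layer, 2 = gel layer (high signal).
    t1 < t2 are thresholds picked from the valleys of the trimodal
    intensity histogram."""
    return np.digitize(img, [t1, t2])

def region_areas(labels):
    """Pixel counts per region, for tracking temporal evolution."""
    return {name: int(np.sum(labels == k))
            for k, name in enumerate(["dry_glassy", "swollen_glassy", "gel"])}
```

Applying `region_areas` frame by frame over the 12 h acquisition yields the swelling curves described above.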

  15. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis

    PubMed Central

    Tameem, Hussain Z.; Sinha, Usha S.

    2011-01-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520

  16. Automated image processing and analysis of cartilage MRI: enabling technology for data mining applied to osteoarthritis

    NASA Astrophysics Data System (ADS)

    Tameem, Hussain Z.; Sinha, Usha S.

    2007-11-01

    Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features.

  17. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for facial caricature synthesis from a single image is proposed. First, hair regions in training images are labeled manually, and the hair-position prior distributions and hair-color likelihood distribution function are estimated efficiently from these labels. Second, the energy function of the test image is constructed according to the estimated priors on hair location and the hair-color likelihood. This energy function is optimized using the graph-cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair-region detection accuracy is above 90%. The proposed algorithm was applied to a facial caricature synthesis system, and experiments showed that with the proposed hair segmentation the resulting facial caricatures are vivid and satisfying. PMID:24592182
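The K-means refinement step can be sketched as follows: a plain K-means on RGB pixel values, splitting an over-inclusive initial mask into hair-colored and non-hair-colored clusters. This is a generic sketch with deterministic initialization, not the authors' exact implementation:

```python
import numpy as np

def kmeans(pixels, k=2, iters=20):
    """Plain K-means on an (n, 3) array of RGB pixel values; returns
    per-pixel labels and cluster centers. Deterministic init: centers
    start from evenly spaced pixels in the input."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers; keep old center if a cluster empties
        new = np.array([pixels[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Pixels assigned to the cluster whose center matches the hair-color likelihood would be retained; the rest are dropped before post-processing.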

18. Compiled visualization with IPI method for analysing of liquid-liquid mixing process

    NASA Astrophysics Data System (ADS)

    Jasikova, Darina; Kotek, Michal; Kysela, Bohus; Sulc, Radek; Kopecky, Vaclav

    2018-06-01

The article deals with research into the mixing process using visualization techniques and the IPI method. The size distribution and the evolution of the disintegration of two liquid-liquid phases were studied. A methodology has been proposed for visualization and image analysis of data acquired during the initial phase of the mixing process. The IPI method was used for a subsequent detailed study of the disintegrated droplets. The article describes the advantages of using the appropriate method, presents the limits of each method, and compares them.

  19. Bouguer Images of the North American Craton

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bindschadler, D.; Bowring, S.; Eddy, M.; Guinness, E.; Leff, C.

    1985-01-01

Processing of existing gravity and aeromagnetic data with modern methods is providing new insights into crustal and mantle structures for large parts of the United States and Canada. More than three-quarters of a million ground station readings of gravity are now available for this region. These data offer a wealth of information on crustal and mantle structures when reduced and displayed as Bouguer anomalies, where lateral variations are controlled by the size, shape and densities of underlying materials. Digital image processing techniques were used to generate Bouguer images that display more of the granularity inherent in the data as compared with existing contour maps. A dominant NW-SE linear trend of highs and lows can be seen extending from South Dakota, through Nebraska, and into Missouri. This trend is probably related to features created during an early and perhaps initial episode of crustal assembly by collisional processes. The younger granitic materials are probably a thin cover over an older crust.

  20. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.

  1. The utility of repeat sestamibi scans in patients with primary hyperparathyroidism after an initial negative scan.

    PubMed

    Krishnamurthy, Vikram D; Sound, Sara; Okoh, Alexis K; Yazici, Pinar; Yigitbas, Hakan; Neumann, Donald; Doshi, Krupa; Berber, Eren

    2017-06-01

We analyzed the utility of repeated sestamibi scans in patients with primary hyperparathyroidism and their effect on operative referral. We carried out a retrospective review of patients with primary hyperparathyroidism who underwent repeated sestamibi scans exclusively within our health system between 1996 and 2015. Patient demographic, presentation, laboratory, imaging, operative, and pathologic data were reviewed. Univariate analysis with JMP Pro v12 was used to identify factors associated with conversion from an initial negative to a subsequent positive scan. After exclusion criteria (including reoperations), we identified 49 patients, in whom 59% (n = 29) of subsequent scans remained negative and 41% (n = 20) converted to positive. Factors associated with conversion from an initial negative to a subsequent positive scan included classic presentation and second scans with iodine subtraction (P = .04). Nonsurgeons were less likely to order an iodine-subtraction scan (P < .05). Fewer patients with negative imaging were referred to surgery (33% vs 100%, P = .005), and median time to operation after the first negative scan was 25 months (range 1.4-119). Surgeon-performed ultrasonography had greater sensitivity and positive predictive value than repeated sestamibi scans. Negative sestamibi scans decreased and delayed operative referral. Consequently, we identified several process improvement initiatives, including education regarding superior institutional imaging. Combining all findings, we created an algorithm for evaluating patients with primary hyperparathyroidism after initially negative sestamibi scans, which incorporates surgeon-performed ultrasonography. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from background by their different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly concerns the processing of the high-frequency part of the signal; for the low-frequency part, the usual weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength in a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality is closely related to the threshold setting of this module. Instead of the commonly used experimental search, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these interpolation minima. A series of image-quality evaluations shows that this method improves the fusion result; moreover, it is effective not only for individual images but also for large numbers of images.
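One step of the quadratic interpolation described above can be sketched as follows: fit a parabola through the endpoints and midpoint of the search interval and take the abscissa of its vertex as the threshold estimate. This is a generic sketch of the technique (the cost function `f` scoring fusion quality is an assumption, not the paper's metric):

```python
def quad_interp_min(f, a, b):
    """Fit a parabola through f at a, the midpoint, and b, and return
    the abscissa of its vertex, clipped to [a, b]. In the fusion
    algorithm this locates the decision threshold minimizing an
    image-quality cost f."""
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    denom = fa - 2.0 * fm + fb
    if denom == 0:                       # flat data: parabola degenerates
        return m
    h = m - a
    x = m + h * (fa - fb) / (2.0 * denom)
    return min(max(x, a), b)             # keep within the search interval
```

On a truly quadratic cost the vertex is recovered exactly in one step; in practice the step would be iterated, shrinking the interval around the current estimate.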

  3. The imaging 3.0 informatics scorecard.

    PubMed

    Kohli, Marc; Dreyer, Keith J; Geis, J Raymond

    2015-04-01

    Imaging 3.0 is a radiology community initiative to empower radiologists to create and demonstrate value for their patients, referring physicians, and health systems. In image-guided health care, radiologists contribute to the entire health care process, well before and after the actual examination, and out to the point at which they guide clinical decisions and affect patient outcome. Because imaging is so pervasive, radiologists who adopt Imaging 3.0 concepts in their practice can help their health care systems provide consistently high-quality care at reduced cost. By doing this, radiologists become more valuable in the new health care setting. The authors describe how informatics is critical to embracing Imaging 3.0 and present a scorecard that can be used to gauge a radiology group's informatics resources and capabilities. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  4. From Still Photo to Animated Images

    ERIC Educational Resources Information Center

    Moos, Lejf

    2005-01-01

    In the "Leadership for Learning" project we have collaborated with participating schools in writing an initial portrait of each school. Based on official descriptions and interviews with stakeholders in the developmental process we have constructed a narrative or a portrait of the school's and stakeholders' conceptions of learning and leadership…

  5. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by the Pancam or Navcam on the NASA Mars Exploration Rover (MER) mission and, in the future, by the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, followed by tiepoint refinement, stereo-matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics-hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks of the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, an optional validity check and/or quality-enhancing process is often required. To meet this requirement, the workflow includes a tiepoint refinement process based on the Adaptive Least Squares Correlation (ALSC) matching algorithm, so that initial tiepoints can be enhanced to sub-pixel precision, or rejected if they fail to pass the ALSC matching threshold. Apart from accuracy, the other criterion for assessing the quality of a reconstruction is its density (or completeness), which the refinement process does not address.
Thus, we re-implemented a stereo region-growing process, a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm performs reasonably even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Among the many options for the non-linear optimisation, the LMA has been adopted for its stability, so the BA searches for the best calibration parameters while iteratively minimising the re-projection errors of the initial reconstruction points. For evaluation, the result of the proposed method is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented, as well as updated camera parameters. As future work, we will investigate a method to speed up the stereo region-growing process, and look into extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitize point and line features, and to assess the accuracy of stereo-processed results produced by other stereo-matching algorithms available within the consortium and elsewhere. When suitably refined, it can also provide "ground truth" for stereo-matching algorithms, as well as visual cues as to why these algorithms sometimes fail, so that such failures can be mitigated in the future. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".

  6. Analysis of base fuze functioning of HESH ammunitions through high-speed photographic technique

    NASA Astrophysics Data System (ADS)

    Biswal, T. K.

    2007-01-01

High-speed photography plays a major role in a test range, where direct access to a dynamic process is possible through imaging, and both qualitative and quantitative data are obtained thereafter through image processing and analysis. In one of the trials it was difficult to understand the performance of HESH ammunition on rolled homogeneous armour: there was no consistency in scab formation even though all other parameters, such as propellant charge mass, charge temperature and impact velocity, were kept constant. To understand the event thoroughly, high-speed photography was deployed to obtain a frontal view of the total process. Clear information on shell impact, embedding of the HE propellant on the armour, and base-fuze initiation was obtained. In scab-forming rounds these three processes are clearly observed in sequence. In non-scab rounds, however, the base fuze is initiated before completion of the embedding process, so the threshold thrust onto the armour needed to cause a scab is not available. This was revealed in two rounds in which scab formation failed. As a quantitative measure, the fuze delay was calculated for each round, and premature functioning of the base fuze was thereby ascertained for the non-scab rounds. This potency of high-speed photography is depicted in detail in this paper.

  7. Automatic detection of the inner ears in head CT images using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.

    2018-03-01

    Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings to replace the natural electro-mechanical transduction mechanism and restore hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed what we call IGCIP for image-guided cochlear implant programming. In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adults and pediatric populations. We are now aiming at deploying these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly and visual inspection and labelling is currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect if a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients and we achieve an overall 95.97% classification accuracy.

  8. Automatic graph-cut based segmentation of bones from knee magnetic resonance images for osteoarthritis research.

    PubMed

    Ababneh, Sufyan Y; Prescott, Jeff W; Gurcan, Metin N

    2011-08-01

In this paper, a new, fully automated, content-based system is proposed for knee bone segmentation from magnetic resonance images (MRI). The purpose of the bone segmentation is to support the discovery and characterization of imaging biomarkers for the incidence and progression of osteoarthritis, a debilitating joint disease, which affects a large portion of the aging population. The segmentation algorithm includes a novel content-based, two-pass disjoint block discovery mechanism, which is designed to support automation, segmentation initialization, and post-processing. The block discovery is achieved by classifying the image content into bone and background blocks according to their similarity to the categories in training data collected from typical bone structures. The classified blocks are then used to design an efficient graph-cut based segmentation algorithm. This algorithm requires constructing a graph from image pixel data and then applying a maximum-flow algorithm, which generates a minimum graph cut corresponding to an initial image segmentation. Content-based refinements and morphological operations are then applied to obtain the final segmentation. The proposed segmentation technique does not require any user interaction and can distinguish, with high accuracy, between bone and highly similar adjacent structures such as fat tissue. The performance of the proposed system is evaluated by testing it on 376 MR images from the Osteoarthritis Initiative (OAI) database. This database included a selection of single images containing the femur and tibia from 200 subjects with varying levels of osteoarthritis severity. Additionally, a full three-dimensional segmentation of the bones from ten subjects with 14 slices each, and synthetic images with background having intensity and spatial characteristics similar to those of bone, are used to assess the robustness and consistency of the developed algorithm.
The results show an automatic bone detection rate of 0.99 and an average segmentation accuracy of 0.95 using the Dice similarity index. Copyright © 2011 Elsevier B.V. All rights reserved.
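The maximum-flow step at the heart of such graph-cut segmentation can be illustrated with a toy computation. The sketch below uses Edmonds-Karp on an adjacency-matrix graph; the graph, capacities, and function names are illustrative, not the paper's pixel graph. By the max-flow/min-cut theorem, the cut saturated by this flow is what separates the "bone" (source-side) pixels from the "background" (sink-side) pixels:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max flow on an n x n capacity matrix.
    Returns the max-flow value; the corresponding min cut is the
    segmentation boundary in graph-cut methods."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximal
            return flow
        # find the bottleneck capacity along the path
        bott = float("inf")
        v = t
        while v != s:
            bott = min(bott, residual[parent[v]][v])
            v = parent[v]
        # push the bottleneck flow along the path
        v = t
        while v != s:
            residual[parent[v]][v] -= bott
            residual[v][parent[v]] += bott
            v = parent[v]
        flow += bott
```

In a real pixel graph, terminal edge capacities encode the block-classification likelihoods and neighbor edges encode boundary smoothness; production systems use specialized max-flow solvers rather than this dense-matrix version.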

  9. 4D multiple-cathode ultrafast electron microscopy

    PubMed Central

    Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H.

    2014-01-01

    Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging. PMID:25006261

  10. 4D multiple-cathode ultrafast electron microscopy.

    PubMed

    Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H

    2014-07-22

    Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging.

  11. Impact of local electrostatic field rearrangement on field ionization

    NASA Astrophysics Data System (ADS)

    Katnagallu, Shyam; Dagan, Michal; Parviainen, Stefan; Nematollahi, Ali; Grabowski, Blazej; Bagot, Paul A. J.; Rolland, Nicolas; Neugebauer, Jörg; Raabe, Dierk; Vurpillot, François; Moody, Michael P.; Gault, Baptiste

    2018-03-01

Field ion microscopy allows for direct imaging of surfaces with true atomic resolution. The high charge density distribution on the surface generates an intense electric field that can induce ionization of gas atoms. We investigate the dynamic nature of the charge, and the consequent redistribution of the electrostatic field, following the departure as ions of atoms initially constituting the surface, a process known as field evaporation. We report a new algorithm for image processing and tracking of individual atoms on the specimen surface, enabling quantitative assessment of shifts in the imaged atomic positions. By combining experimental investigations with molecular dynamics simulations that include the full electric charge, we confirm that this change is directly associated with the rearrangement of the electrostatic field, which modifies the imaging-gas ionization zone. We derive important considerations for future developments of data reconstruction in 3D field ion microscopy, in particular for precise quantification of lattice strains and characterization of crystalline defects at the atomic scale.

  12. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm: mask generation. Its main goal is to handle specific types of nodules connected to the pleura or vessels, and it consists of basic image processing operations as well as dedicated routines for the specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Using a Smartphone Camera for Nanosatellite Attitude Determination

    NASA Astrophysics Data System (ADS)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
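The thresholding-and-centroiding step for the Moon vector can be sketched in a few lines: mask the frame at a brightness threshold, then take the intensity-weighted centroid of the surviving pixels. This is a generic sketch (the function name, threshold handling, and return convention are assumptions, not the PhoneSat code):

```python
import numpy as np

def moon_vector(img, thresh):
    """Threshold a camera frame and centroid the bright pixels.
    Returns (row, col) of the Moon's center of brightness, or None
    if nothing exceeds the threshold. The pixel coordinates would
    then be mapped through the camera model to a body-frame unit
    vector for attitude determination."""
    mask = img > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    w = img[rows, cols].astype(float)       # intensity weights
    return (float((rows * w).sum() / w.sum()),
            float((cols * w).sum() / w.sum()))
```

Intensity weighting gives sub-pixel precision when the Moon's disk spans several pixels; the same routine, with a different threshold, underlies bright-blob detection for the horizon and star-tracking experiments mentioned below.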

  14. Attack to AN Image Encryption Based on Chaotic Logistic Map

    NASA Astrophysics Data System (ADS)

    Wang, Xing-Yuan; Chen, Feng; Wang, Tian; Xu, Dahai; Ma, Yutian

    2013-10-01

    This paper presents two different attacks on a recently proposed image encryption scheme based on the chaotic logistic map. The cryptosystem under study uses an 80-bit secret key and two chaotic logistic maps. The initial conditions of the logistic maps are derived from the secret key by assigning different weights to its bits. In the encryption process, eight different procedures are used to encrypt the pixels of an image; which procedure is applied to a given pixel is determined by the output of the logistic map. The secret key is modified after encrypting each block of 16 pixels of the image. The encryption process has weaknesses, the worst of which is that every byte of plaintext is substituted independently, so the ciphertext of a byte does not change even when the other bytes change. As a result of this weakness, a chosen-plaintext attack and a chosen-ciphertext attack can be carried out without any knowledge of the key to recover the ciphered image.
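    The key-to-initial-condition mapping and the chaotic iteration can be illustrated with a minimal sketch; the binary bit weights below are illustrative, not the attacked cipher's exact weighting scheme:

```python
def logistic_seed(key_bits):
    """Map an 80-bit key to an initial condition in (0, 1) by giving
    each bit a binary weight (illustrative; the attacked cipher's
    exact weighting differs)."""
    x0 = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(key_bits))
    # Clamp strictly inside (0, 1) so the map does not collapse to 0.
    return min(max(x0, 1e-9), 1 - 1e-9)

def logistic_iterate(x, n, r=3.99):
    # x_{k+1} = r * x_k * (1 - x_k), chaotic for r near 4.
    for _ in range(n):
        x = r * x * (1 - x)
    return x

key = [1, 0, 1, 1] * 20             # a toy 80-bit key
x = logistic_iterate(logistic_seed(key), 100)
print(0.0 < x < 1.0)                # → True
```

    The attacks exploit the fact that the per-byte substitution never mixes neighboring bytes, so a chosen plaintext directly reveals each byte's substitution independently of the rest of the image.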

  15. G0-WISHART Distribution Based Classification from Polarimetric SAR Images

    NASA Astrophysics Data System (ADS)

    Hu, G. C.; Zhao, Q. H.

    2017-09-01

    Enormous scientific and technical developments have further improved remote sensing over recent decades, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related agencies around the world. The multilook polarimetric G0-Wishart model is a flexible model that can describe homogeneous, heterogeneous, and extremely heterogeneous regions in an image. Moreover, the polarimetric G0-Wishart distribution does not involve the modified Bessel function of the second kind; it is a simple statistical distribution model with few parameters. To demonstrate its feasibility, the method was tested on a fully polarimetric Synthetic Aperture Radar (SAR) image. First, multilook polarimetric SAR data processing and speckle filtering are applied to reduce the influence of speckle on the classification result. The image is then initially classified into sixteen classes by H/A/α decomposition, and the ICM algorithm refines the classification using the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.
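    The minimum-distance rule underlying such classifiers can be sketched with the plain complex Wishart distance, d(Z, Σ) = ln|Σ| + tr(Σ⁻¹Z); the paper's G0-Wishart variant adds texture modeling but keeps the same assignment logic:

```python
import numpy as np

def wishart_distance(Z, sigma):
    """Standard Wishart distance d(Z, Sigma) = ln|Sigma| + tr(Sigma^-1 Z)
    between a pixel covariance Z and a class center Sigma (shown for the
    plain Wishart model; the G0-Wishart distance adds texture terms)."""
    sign, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(np.linalg.solve(sigma, Z)).real

def classify(Z, centers):
    # Minimum-distance rule, as used inside the ICM iterations.
    return min(range(len(centers)),
               key=lambda m: wishart_distance(Z, centers[m]))

# Two synthetic class centers (3x3 covariance matrices) and a pixel
# covariance drawn near the first one.
c0 = np.eye(3)
c1 = 4.0 * np.eye(3)
Z = 1.1 * np.eye(3)
print(classify(Z, [c0, c1]))  # → 0
```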

  16. Cat-scratch disease presenting as multiple hepatic lesions: case report and literature review.

    PubMed

    Baptista, Mariana Andrade; Lo, Denise Swei; Hein, Noely; Hirose, Maki; Yoshioka, Cristina Ryoka Miyao; Ragazzi, Selma Lopes Betta; Gilio, Alfredo Elias; Ferronato, Angela Esposito

    2014-01-01

    Although infectious diseases are the most prevalent cause of fever of unknown origin (FUO), this diagnosis remains challenging in some pediatric patients. Imaging exams such as computed tomography (CT) are frequently required during the diagnostic process. The presence of multiple hypoattenuating images scattered throughout the liver, associated with a history of cohabitation with cats, should raise suspicion of cat-scratch disease (CSD), although the main etiologic agent of liver abscesses in childhood is Staphylococcus aureus. Differential diagnosis with Bartonella henselae, based on clinical and epidemiological data, is therefore often advisable. The authors report the case of a boy aged 2 years and 9 months with a 16-day history of daily fever accompanied by intermittent abdominal pain. Physical examination was unremarkable. Abdominal ultrasound performed in the initial work-up was unrevealing, but an abdominal CT performed afterwards disclosed multiple hypoattenuating hepatic images compatible with the diagnosis of microabscesses. The initial antibiotic regimen included cefotaxime, metronidazole, and oxacillin. Given the epidemiology of close contact with kittens, the diagnosis of CSD was considered and confirmed by serologic tests. The initial antibiotics were therefore replaced by oral clarithromycin for 14 days, followed by defervescence and clinical improvement. The authors call attention to this uncommon diagnosis in a child presenting with FUO and multiple hepatic images suggestive of microabscesses.

  17. A Dark Energy Camera Search for an Optical Counterpart to the First Advanced LIGO Gravitational Wave Event GW150914

    DOE PAGES

    Soares-Santos, M.; Kessler, R.; Berger, E.; ...

    2016-05-27

    We report initial results of a deep search for an optical counterpart to the gravitational wave event GW150914, the first trigger from the Advanced LIGO gravitational wave detectors. We used the Dark Energy Camera (DECam) to image a 102 deg² area, corresponding to 38% of the initial trigger high-probability sky region and to 11% of the revised high-probability region. We observed in i and z bands at 4-5, 7, and 24 days after the trigger. The median 5σ point-source limiting magnitudes of our search images are i = 22.5 and z = 21.8 mag. We processed the images through a difference-imaging pipeline using templates from pre-existing Dark Energy Survey data and publicly available DECam data. Due to missing template observations and other losses, our effective search area subtends 40 deg², corresponding to 12% total probability in the initial map and 3% of the final map. In this area, we searched for objects that decline significantly between days 4-5 and day 7 and are undetectable by day 24, finding none to typical magnitude limits of i = 21.5, 21.1, 20.1 for object colors (i-z) = 1, 0, -1, respectively. Our search demonstrates the feasibility of a dedicated search program with DECam and bodes well for future research in this emerging field.

  18. Ship Detection in SAR Image Based on the Alpha-stable Distribution

    PubMed Central

    Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng

    2008-01-01

    This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses a Gaussian distribution model to describe the statistical characteristics of the SAR image background clutter. However, the Gaussian distribution is only valid for multilook SAR images in which several radar looks are averaged. Because sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe the background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step detects possible ship targets; then, similar to the typical two-parameter CFAR algorithm, a local detection process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm, and known ship locations at the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. The validation shows improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
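    The sliding-window logic of a two-parameter CFAR detector can be sketched as below; the paper replaces the Gaussian mean-plus-k-sigma threshold with an Alpha-stable quantile, which this toy version does not implement:

```python
import numpy as np

def cfar_detect(img, guard=1, train=3, k=5.0):
    """Two-parameter CFAR: a pixel is declared a target when it exceeds
    the local background mean by k local standard deviations, with the
    background estimated from a ring of training pixels outside a guard
    window around the cell under test."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    r = guard + train
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1].copy()
            # Exclude the guard window (and the cell under test itself).
            win[train:-train, train:-train] = np.nan
            mu = np.nanmean(win)
            sd = np.nanstd(win)
            out[i, j] = img[i, j] > mu + k * sd
    return out

# Flat Gaussian "sea clutter" with one bright ship pixel.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, (32, 32))
img[16, 16] = 60.0
hits = cfar_detect(img)
print(bool(hits[16, 16]))  # → True
```

    Swapping in the Alpha-stable model would replace `mu + k * sd` with a quantile of a fitted heavy-tailed distribution, which keeps the false-alarm rate constant under spiky clutter.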

  19. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As a high-resolution imaging sensor, synthetic aperture ladar produces data containing phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up convergence through a careful choice of initialization. Integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root-mean-square error, and CPU time are computed; these show that the convergence rate of the hybrid method improves because of a more efficient initialization of the blind deconvolution. Moreover, further analysis of the hybrid method shows that the weight distribution between ROPE and IBD is an important factor affecting the final result of the whole compensation process.

  20. Endoscopic fluorescence imaging for early assessment of anastomotic recurrence of Crohn's disease

    NASA Astrophysics Data System (ADS)

    Mordon, Serge R.; Maunoury, Vincent; Geboes, K.; Klein, Olivier; Desreumaux, P.; Debaert, A.; Colombel, Jean-Frederic

    1999-02-01

    Crohn's disease is an inflammatory bowel disease of unknown etiology. The mechanism of the initial mucosal alterations is still unclear: ulcerations overlying lymphoid follicles and/or vasculitis have been proposed as the early lesions. We have developed a new and original method combining endoscopy with fluorescence angiography for identifying the early pathological lesions occurring in the neo-terminal ileum after right ileocolonic resection. The patient population consisted of 10 subjects enrolled in a prospective protocol of endoscopic follow-up at 3 and 12 months after surgery. Fluorescence imaging showed small, brightly fluorescent spots distributed singly in mucosa that appeared normal on routine endoscopy. Histopathological examination demonstrated that the fluorescence of the small spots originated from small, usually superficial, erosive lesions. In several cases, these erosive lesions occurred over lymphoid follicles. Endoscopic fluorescence imaging thus provides a suitable means of investigating the initial stage of the Crohn's disease process by displaying correlations between fluorescent aspects and early pathological mucosal alterations.

  1. Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI

    NASA Astrophysics Data System (ADS)

    Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.

    2014-10-01

    Functional changes in sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, which are overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes with anatomical regions. Nearly 5000 concentration maps per hour were generated on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm³ and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows automated analysis of the 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented as series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following is provided: (a) the MATLAB source code for image processing, (b) the detailed processing procedures, (c) a description of the code and all sub-routines, and (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/ truongm/COMA3D/.
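    The toolbox's exact signal-to-concentration conversion is in its MATLAB source; a common approach, shown here as a hedged sketch, is a linear calibration against reference phantoms of known concentration:

```python
import numpy as np

def signal_to_concentration(signal, cal_signals, cal_conc):
    """Convert sodium MRI signal intensities to millimolar values via a
    linear fit to reference phantoms of known concentration (a common
    calibration scheme; the COMA 3D toolbox's actual pipeline may differ)."""
    slope, intercept = np.polyfit(cal_signals, cal_conc, 1)
    return slope * np.asarray(signal, dtype=float) + intercept

# Hypothetical phantoms: signals 100 and 300 correspond to 50 and 150 mM.
conc = signal_to_concentration([100, 200, 300], [100, 300], [50, 150])
print(conc)
```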

  2. Image navigation and registration performance assessment tool set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Astrophysics Data System (ADS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-05-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  3. Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  4. Image Navigation and Registration Performance Assessment Tool Set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24-hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
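    The 3-sigma metric defined above reduces to a single percentile computation; for Gaussian errors, the 99.73rd percentile of the absolute error recovers 3× the standard deviation:

```python
import numpy as np

# The INR "3-sigma" metric is the 99.73rd percentile of the absolute
# registration errors accumulated over the evaluation period; for a
# zero-mean Gaussian error this matches 3x the standard deviation.
rng = np.random.default_rng(42)
errors = rng.normal(0.0, 1.0, 1_000_000)   # synthetic NAV errors, sigma = 1
metric = np.percentile(np.abs(errors), 99.73)
print(round(metric, 2))                     # ≈ 3.0
```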

  5. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.

    PubMed

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-06-08

    In this paper, a new method is proposed for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction, and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a dedicated pretreatment of the reflected-light image obtains the contour edges of the array cells. Second, using the geometric relationship between the target object's initial external rectangle and its minimum bounding rectangle (MBR), a new MBR-based tilt correction algorithm adjusts the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment step effectively avoids the influence of a complex background and completes the binarization of the image; the tilt correction algorithm has a short computation time, making it highly suitable for tilt correction of reflected-light images; and the segmentation algorithm arranges the dots regularly while excluding the edges and bright spots. This method can be used for fast, accurate, and automatic dot extraction from PSi microarray reflected-light images.

  6. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection

    PubMed Central

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-01-01

    In this paper, a new method is proposed for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction, and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a dedicated pretreatment of the reflected-light image obtains the contour edges of the array cells. Second, using the geometric relationship between the target object's initial external rectangle and its minimum bounding rectangle (MBR), a new MBR-based tilt correction algorithm adjusts the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment step effectively avoids the influence of a complex background and completes the binarization of the image; the tilt correction algorithm has a short computation time, making it highly suitable for tilt correction of reflected-light images; and the segmentation algorithm arranges the dots regularly while excluding the edges and bright spots. This method can be used for fast, accurate, and automatic dot extraction from PSi microarray reflected-light images. PMID:28594383
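    The tilt-correction idea can be sketched by estimating the rotation angle of an elongated shape; the stand-in below uses a principal-axis (PCA) estimate rather than the authors' MBR geometry:

```python
import numpy as np

def tilt_angle(points):
    """Estimate the tilt of an elongated shape from the principal axis
    of its pixel coordinates (a PCA stand-in for the authors'
    minimum-bounding-rectangle construction)."""
    pts = points - points.mean(axis=0)
    cov = np.cov(pts.T)
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
    if major[0] < 0:
        major = -major                 # fix the eigenvector sign ambiguity
    return np.degrees(np.arctan2(major[1], major[0]))

# A 100x20 grid of points rotated by 15 degrees.
xs, ys = np.meshgrid(np.arange(100.0), np.arange(20.0))
pts = np.column_stack([xs.ravel(), ys.ravel()])
theta = np.radians(15.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
angle = tilt_angle(pts @ rot.T)
print(round(angle, 1))  # → 15.0
```

    Rotating the image by the negative of the estimated angle then restores the array cells to an axis-aligned layout for segmentation.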

  7. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    PubMed

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, at both the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences, and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures, whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in the associated spectra. These results motivate further use of label-free microscopy techniques in real-time imaging of live immune cells.
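    One plausible reading of the Z-LSR contrast (z-score normalization of each pixel spectrum followed by a per-pixel least-squares fit against a reference spectrum) can be sketched as follows; the published method's exact normalization and regression details may differ:

```python
import numpy as np

def z_lsr_contrast(spectra, reference):
    """Hybrid z-score + least-squares contrast: z-normalize each pixel
    spectrum, then take its least-squares coefficient against a
    z-normalized reference spectrum (a simplified reading of Z-LSR)."""
    def zscore(v):
        return (v - v.mean(axis=-1, keepdims=True)) / v.std(axis=-1, keepdims=True)
    z = zscore(spectra)                 # shape (n_pixels, n_bands)
    ref = zscore(reference)             # shape (n_bands,)
    # Per-pixel least-squares slope against the reference spectrum.
    return z @ ref / (ref @ ref)

# Two synthetic pixels: one containing the reference band, one flat noise.
rng = np.random.default_rng(1)
bands = np.arange(100.0)
ref = np.exp(-(bands - 50) ** 2 / 20)          # a single Raman band
pix_signal = 5 * ref + rng.normal(0, 0.1, 100)
pix_noise = rng.normal(0, 0.1, 100)
c = z_lsr_contrast(np.vstack([pix_signal, pix_noise]), ref)
print(c[0] > abs(c[1]))  # signal pixel scores higher → True
```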

  8. Reengineering the picture archiving and communication system (PACS) process for digital imaging networks PACS.

    PubMed

    Horton, M C; Lewis, T E; Kinsey, T V

    1999-05-01

    Prior to June 1997, military picture archiving and communications systems (PACS) were planned, procured, and installed with key decisions on the system, equipment, and even funding sources made through a research and development office called Medical Diagnostic Imaging Systems (MDIS). Beginning in June 1997, the Joint Imaging Technology Project Office (JITPO) initiated a collaborative and consultative process for planning and implementing PACS in military treatment facilities through a new Department of Defense (DoD) contract vehicle called digital imaging networks (DIN)-PACS. The JITPO reengineered this process to incorporate multiple organizations and their politics. The reengineered PACS process administered through the JITPO transformed the decision process and accountability from a single office to a consultative method that increased end-user knowledge, responsibility, and ownership of PACS. The JITPO continues to provide information and services that assist multiple groups and users in making PACS planning and implementation decisions. Local site project managers are involved from the outset, and this end-user collaboration has made the sometimes difficult transition to PACS an easier and more acceptable process for all involved. Corporately, this process has saved DoD sites millions of dollars by having PACS plans developed first within the government and then proposed to vendors, with vendors responding specifically to those plans. The integrity and efficiency of the process have reduced the opportunity for implementing nonstandard systems while sharing resources and reducing wasted government dollars. This presentation describes the chronology of changes, the obstacles encountered, and the lessons learned in reengineering the PACS process for DIN-PACS.

  9. Carotid artery B-mode ultrasound image segmentation based on morphology, geometry and gradient direction

    NASA Astrophysics Data System (ADS)

    Sunarya, I. Made Gede; Yuniarno, Eko Mulyanto; Purnomo, Mauridhi Hery; Sardjono, Tri Arief; Sunu, Ismoyo; Purnama, I. Ketut Eddy

    2017-06-01

    The carotid artery (CA) is one of the vital blood vessels in the human body. Useful CA features include position, size, and volume; the position feature can be used to initialize tracking. The CA can be examined with ultrasound, but ultrasound imaging is operator-dependent, so the images obtained by two or more different operators may differ, which affects the localization of the CA. To reduce this inter-operator subjectivity, the position of the CA can be determined automatically. In this study, the proposed method segments the CA in B-mode ultrasound images based on morphology, geometry, and gradient direction. The study consists of three steps: data collection, preprocessing, and artery segmentation. The data used in this study were acquired directly by the researchers or taken from the Brno University signal processing lab database; each data set contains 100 carotid artery B-mode ultrasound images. The artery is modeled as an ellipse with center c, major axis a, and minor axis b. The proposed method achieves high accuracy on each data set: 97% (data set 1), 73% (data set 2), and 87% (data set 3). These segmentation results will then be used in the process of tracking the CA.
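    The ellipse model with center c and axes a, b amounts to a simple membership test per pixel; the sketch below uses an axis-aligned ellipse for brevity, whereas the fitted artery ellipse may in general be rotated:

```python
def inside_ellipse(p, c, a, b):
    """Test whether point p lies inside an axis-aligned ellipse with
    center c, semi-major axis a, and semi-minor axis b (the artery
    cross-section model used for the segmentation result)."""
    dx, dy = p[0] - c[0], p[1] - c[1]
    return (dx / a) ** 2 + (dy / b) ** 2 <= 1.0

# Artery modeled at center (40, 40) with semi-axes 10 and 6 pixels.
print(inside_ellipse((45, 42), (40, 40), 10, 6),
      inside_ellipse((60, 40), (40, 40), 10, 6))  # → True False
```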

  10. Analysis of urban regions using AVHRR thermal infrared data

    USGS Publications Warehouse

    Wright, Bruce

    1993-01-01

    Using 1-km AVHRR satellite data, relative temperature differences caused by thermal conductivity and inertia were used to distinguish urban from non-urban land covers. AVHRR data composited on a biweekly basis and distributed by the EROS Data Center in Sioux Falls, South Dakota, were used for the classification process. These composited images are based on the maximum normalized difference vegetation index (NDVI) of each pixel during the 2-week period, computed from channels 1 and 2. The resulting images are nearly cloud-free and reduce the need for extensive reclassification processing. Because of the physiographic differences between the Eastern and Western United States, the initial study was limited to the eastern half of the United States. In the East, the time of maximum difference between urban surfaces and vegetated non-urban areas is the peak greenness period in late summer. A composite image of the Eastern United States for the 2-week period from August 30 to September 16, 1991, was used for the extraction of the urban areas. Two channels of thermal data (channels 3 and 4), normalized for regional temperature differences, and a composited NDVI image were classified using conventional image processing techniques. The results compare favorably with other large-scale urban area delineations.
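    The biweekly maximum-value compositing described above can be sketched directly: for each pixel, keep the acquisition date with the highest NDVI = (NIR − Red) / (NIR + Red), which suppresses cloudy observations.

```python
import numpy as np

def max_ndvi_composite(red_stack, nir_stack):
    """Maximum-value NDVI composite over a stack of dates: for each
    pixel, keep the date with the highest NDVI = (NIR - Red) / (NIR + Red)."""
    ndvi = (nir_stack - red_stack) / (nir_stack + red_stack)
    best = np.argmax(ndvi, axis=0)                       # date index per pixel
    return np.take_along_axis(ndvi, best[None], axis=0)[0], best

# Two dates over a 2x2 scene; date 1 is "cloudy" (low NDVI) for one pixel.
red = np.array([[[0.1, 0.1], [0.1, 0.1]],
                [[0.4, 0.1], [0.1, 0.1]]])
nir = np.array([[[0.5, 0.5], [0.5, 0.5]],
                [[0.5, 0.6], [0.6, 0.6]]])
composite, dates = max_ndvi_composite(red, nir)
print(dates)  # per-pixel index of the retained date
```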

  11. Real-Time X-ray Imaging Reveals Interfacial Growth, Suppression, and Dissolution of Zinc Dendrites Dependent on Anions of Ionic Liquid Additives for Rechargeable Battery Applications.

    PubMed

    Song, Yuexian; Hu, Jiugang; Tang, Jia; Gu, Wanmiao; He, Lili; Ji, Xiaobo

    2016-11-23

    The dynamic interfacial growth, suppression, and dissolution of zinc dendrites have been studied with imidazolium ionic liquids (ILs) as additives on the basis of in situ synchrotron radiation X-ray imaging. The phase-contrast differences in the real-time images indicate that zinc dendrites preferentially develop on the substrate surface in ammoniacal electrolytes. After adding imidazolium ILs, both the nucleation overpotential and the polarization extent increase in the order additive-free < EMI-Cl < EMI-PF6 < EMI-TFSA < EMI-DCA. The real-time X-ray images show that EMI-Cl can suppress zinc dendrites but results in the formation of loose deposits. The EMI-PF6 and EMI-TFSA additives can smooth the deposit morphology by suppressing the initiation and growth of dendritic zinc. The addition of EMI-DCA increases the number of dendrite initiation sites but decreases the growth rate of the dendrites. Furthermore, the dissolution behaviors of the zinc deposits were compared: zinc dendrites dissolve slowly in the additive-free electrolyte, whereas zinc deposits are easily detached from the substrate in the presence of EMI-Cl, EMI-PF6, or EMI-TFSA owing to the formation of the loose structure. Hence, the dependence of zinc dendrites on the anions of imidazolium IL additives during both the electrodeposition and dissolution processes has been elucidated. These results could provide valuable information for improving the performance of zinc-based rechargeable batteries.

  12. Transverse Phase Space Reconstruction and Emittance Measurement of Intense Electron Beams using a Tomography Technique

    NASA Astrophysics Data System (ADS)

    Stratakis, D.; Kishek, R. A.; Li, H.; Bernal, S.; Walter, M.; Tobin, J.; Quinn, B.; Reiser, M.; O'Shea, P. G.

    2006-11-01

    Tomography is the technique of reconstructing an image from its projections. It is widely used in the medical community to observe the interior of the human body by processing multiple x-ray images taken at different angles. A few pioneering researchers have adapted tomography to reconstruct detailed phase space maps of charged particle beams. Some questions arise regarding the limitations of the tomography technique for space-charge-dominated beams. For instance, is the linear space-charge force a valid approximation? Does tomography reproduce phase space equally well for complex, experimentally observed initial particle distributions? Does tomography make any assumptions about the initial distribution? This study explores the use of accurate modeling with the particle-in-cell code WARP to address these questions, using a wide range of different initial distributions in the code. The study also includes a number of experimental results on tomographic phase space mapping performed on the University of Maryland Electron Ring (UMER).
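    The core reconstruction step can be sketched with unfiltered back projection on a synthetic beam distribution. This is a bare-bones stand-in: practical beam tomography uses filtered back projection or algebraic methods and accounts for the space-charge-induced phase-space rotations discussed above.

```python
import numpy as np
from scipy.ndimage import rotate

def project(dist, angle_deg):
    """1-D projection of a 2-D density at a given angle
    (rotate the plane, then integrate along one axis)."""
    return rotate(dist, angle_deg, reshape=False, order=1).sum(axis=0)

def back_project(projections, angles_deg):
    """Unfiltered back projection: smear each projection across the
    plane and rotate it back into the original frame."""
    n = len(projections[0])
    recon = np.zeros((n, n))
    for p, a in zip(projections, angles_deg):
        smear = np.tile(p, (n, 1))             # constant along the sum axis
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

# Gaussian "beam" centered off-axis in a 64x64 phase-space grid.
yy, xx = np.mgrid[0:64, 0:64]
dist = np.exp(-((xx - 40) ** 2 + (yy - 24) ** 2) / 30.0)
angles = np.arange(0, 180, 15)
recon = back_project([project(dist, a) for a in angles], angles)
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # reconstruction peaks near the true center, index (24, 40)
```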

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doug Blankenship

    PDFs of seismic reflection profiles 101, 110, and 111 local to the West Flank FORGE site. The 45 line-kilometers of seismic reflection data were collected in 2001 using vibroseis trucks. The initial analysis and interpretation of these data were performed by Unruh et al. (2001). Optim processed the data by inverting the P-wave first arrivals to create a 2-D velocity structure. Kirchhoff images were then created for each line using the velocity tomograms (Unruh et al., 2001).

  14. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros

    Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve the visibility and targeting of liver metastases deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged sufficiently conspicuous on US was performed under CT/US fusion imaging guidance. Conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image-fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US and is an effective tool to increase the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.

  15. Compound image segmentation of published biomedical figures.

    PubMed

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is growing interest within the bio-curation and bio-database communities in storing the images within publications as evidence for biomedical processes and experimental results. However, many images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into their constituent panels is an essential first step toward utilizing them. In this article, we develop a new compound-image segmentation system, FigSplit, based on connected component analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality-assessment step for evaluating and modifying segmentations; two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at https://www.eecis.udel.edu/~compbio/FigSplit; the code is available upon request. Contact: shatkay@udel.edu. Supplementary data are available at Bioinformatics online.
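
    The core idea of connected-component-based panel splitting can be sketched in a few lines: threshold away the (assumed) near-white gutters between panels, flood-fill the 4-connected foreground regions, and report each region's bounding box as a candidate panel. This is a toy illustration of the general technique, not the published FigSplit algorithm, which adds the quality-assessment and re-segmentation steps described above; the threshold value is an assumption.

```python
import numpy as np
from collections import deque

def split_panels(img, bg_thresh=240):
    """Label 4-connected dark (non-background) regions of a grayscale
    image and return one bounding box (r0, c0, r1, c1) per region."""
    fg = np.asarray(img) < bg_thresh          # foreground = non-white pixels
    seen = np.zeros(fg.shape, dtype=bool)
    rows, cols = fg.shape
    boxes = []
    for i in range(rows):
        for j in range(cols):
            if fg[i, j] and not seen[i, j]:
                # breadth-first flood fill of one connected component
                q = deque([(i, j)])
                seen[i, j] = True
                r0 = r1 = i
                c0 = c1 = j
                while q:
                    r, c = q.popleft()
                    r0, r1 = min(r0, r), max(r1, r)
                    c0, c1 = min(c0, c), max(c1, c)
                    for rr, cc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and fg[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            q.append((rr, cc))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

    On a synthetic two-panel figure (two dark rectangles separated by a white gutter) this returns two bounding boxes, one per panel.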

  16. Osteoarthritis Severity Determination using Self Organizing Map Based Gabor Kernel

    NASA Astrophysics Data System (ADS)

    Anifah, L.; Purnomo, M. H.; Mengko, T. L. R.; Purnama, I. K. E.

    2018-02-01

    The number of osteoarthritis patients in Indonesia is enormous, so early action is needed for this disease to be managed. The aim of this paper is to determine osteoarthritis severity from X-ray images using a Gabor-kernel-based template. This research is divided into three stages: the first is image processing using a Gabor kernel, the second is the learning stage, and the third is the testing phase. The image-processing stage normalizes the image dimensions to a 50 × 200 template. The learning stage is run with an initial learning rate of 0.5 and a total of 1000 iterations. The testing stage uses the weights generated during learning. The testing phase has been completed and results obtained: KL-Grade 0 has an accuracy of 36.21%, the accuracy for KL-Grade 1 is 40.52%, and the accuracies for KL-Grade 2 and KL-Grade 3 are 15.52% and 25.86%, respectively. The implication is that this research may serve as a decision-support system for medical practitioners in determining the KL-Grade on X-ray images of knee osteoarthritis.
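
    As a rough illustration of the filtering stage, a real-valued 2-D Gabor kernel — a Gaussian envelope modulated by an oriented sinusoid — can be generated as follows. The parameter names follow common conventions (e.g. those of OpenCV's getGaborKernel); the paper's actual kernel settings are not given in the abstract, so the defaults here are assumptions.

```python
import numpy as np

def gabor_kernel(ksize=31, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real (cosine-phase) 2-D Gabor kernel: a Gaussian envelope of
    width sigma (aspect ratio gamma) modulated by a sinusoid of
    wavelength lam along orientation theta, with phase offset psi."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam + psi)
```

    Convolving the normalized 50 × 200 template with a bank of such kernels at several orientations and wavelengths yields the oriented-texture responses on which the subsequent learning stage can operate.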

  17. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation in an image using the median of the wavelet transform coefficients and then refine it to a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian-noise-corrupted images.
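
    The initial estimate described above — the median of wavelet detail coefficients — is a classical robust estimator (median absolute deviation of the finest diagonal subband divided by 0.6745, the Gaussian MAD constant). A minimal sketch using a one-level Haar transform follows; it omits the paper's curve-fitting refinement step, so it only reproduces the first stage of the method.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Initial noise estimate: median absolute value of the diagonal
    (HH) detail coefficients of a one-level orthonormal Haar wavelet
    transform, divided by 0.6745."""
    img = np.asarray(img, dtype=np.float64)
    h = img.shape[0] // 2 * 2          # trim to even dimensions
    w = img.shape[1] // 2 * 2
    a = img[0:h:2, 0:w:2]
    b = img[0:h:2, 1:w:2]
    c = img[1:h:2, 0:w:2]
    d = img[1:h:2, 1:w:2]
    hh = (a - b - c + d) / 2.0         # orthonormal Haar HH subband
    return float(np.median(np.abs(hh)) / 0.6745)
```

    The HH subband is dominated by noise rather than image structure, and the median is insensitive to the few large coefficients that edges produce, which is why this estimator tracks the true sigma closely even on structured images.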

  18. Real time analysis of self-assembled InAs/GaAs quantum dot growth by probing reflection high-energy electron diffraction chevron image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudo, Takuya; Inoue, Tomoya; Kita, Takashi

    2008-10-01

    The self-assembly process of InAs/GaAs quantum dots has been investigated by analyzing reflection high-energy electron diffraction (RHEED) chevron images reflecting the crystal facet structure surrounding the islands. The chevron image shows dramatic changes during island formation. From the temporal evolution of the chevron tail structure, the self-assembly process has been found to consist of four steps. The initial islands do not show distinct facet structures. Then the island surface becomes covered by high-index facets, and this is followed by the formation of stable low-index facets. Finally, a flow of In atoms from the islands occurs, which contributes to flattening the wetting layer. Furthermore, we have investigated the island shape evolution during GaAs capping-layer growth using the same real-time analysis technique.

  19. Shrink-wrapped isosurface from cross sectional images

    PubMed Central

    Choi, Y. K.; Hahn, J. K.

    2010-01-01

    Summary: This paper presents a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh that approximates the ideal isosurface via a cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce cracks on the surface. Furthermore, since many additional isopoints can be utilized during surface reconstruction by extending the adjacency definition, the resulting surface can in theory be of higher quality than that produced by the MC algorithm. Experiments show the method to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361

  20. Plant features measurements for robotics

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1989-01-01

    Initial studies of the technical feasibility of using machine vision and color image processing to measure plant health were performed. Wheat plants were grown in nutrient solutions deficient in nitrogen, potassium, and iron. An additional treatment imposed water stress on wheat plants that received a full complement of nutrients. The results for juvenile (less than two weeks old) wheat plants show that imaging technology can be used to detect nutrient deficiencies. The relative amount of green color in a leaf declined with increased water stress. The absolute amount of green was higher for nitrogen-deficient leaves than for the control plants. Relative greenness was lower for iron-deficient leaves, but the absolute green values were higher. The data showed patterns across the leaf consistent with visual symptoms. The development of additional color image processing routines to recognize these patterns would improve the performance of this sensor of plant health.
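
    One simple way to quantify a "relative amount of green" of the kind the study measures is the green channel's share of total pixel intensity; the exact color metric used in the study is not specified in the abstract, so this is only an assumed, illustrative definition.

```python
import numpy as np

def relative_greenness(rgb):
    """Green channel's share of total intensity per pixel, in [0, 1].
    Input is an (..., 3) RGB array; black pixels map to 0."""
    rgb = np.asarray(rgb, dtype=np.float64)
    total = rgb.sum(axis=-1)
    return np.where(total > 0, rgb[..., 1] / np.maximum(total, 1e-12), 0.0)
```

    Averaging this ratio over the leaf region yields a single relative-greenness score that is insensitive to overall illumination, whereas the raw (absolute) green values are not — which matches the distinction the abstract draws between relative and absolute green measurements.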

  1. X-ray Crystal Structures Elucidate the Nucleotidyl Transfer Reaction of Transcript Initiation Using Two Nucleotides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M Gleghorn; E Davydova; R Basu

    2011-12-31

    We have determined the X-ray crystal structures of the pre- and postcatalytic forms of the initiation complex of bacteriophage N4 RNA polymerase, providing a complete set of atomic images depicting the process of transcript initiation by a single-subunit RNA polymerase. As observed during T7 RNA polymerase transcript elongation, substrate loading for the initiation process also drives a conformational change of the O helix, but only correct base pairing between the +2 substrate and the DNA base can complete the O-helix conformational transition. Substrate binding also facilitates catalytic metal binding, which leads to alignment of the reactive groups of the substrates for the nucleotidyl transfer reaction. Although all nucleic acid polymerases use two divalent metals for catalysis, they differ in the requirements and timing of binding of each metal. In the case of bacteriophage RNA polymerase, we propose that catalytic metal binding is the last step before the nucleotidyl transfer reaction.

  2. CNTRICS Imaging Biomarkers Final Task Selection: Long-Term Memory and Reinforcement Learning

    PubMed Central

    Ragland, John D.; Cohen, Neal J.; Cools, Roshan; Frank, Michael J.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    Functional imaging paradigms hold great promise as biomarkers for schizophrenia research as they can detect altered neural activity associated with the cognitive and emotional processing deficits that are so disabling to this patient population. In an attempt to identify the most promising functional imaging biomarkers for research on long-term memory (LTM), the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) initiative selected “item encoding and retrieval,” “relational encoding and retrieval,” and “reinforcement learning” as key LTM constructs to guide the nomination process. This manuscript reports on the outcome of the third CNTRICS biomarkers meeting in which nominated paradigms in each of these domains were discussed by a review panel to arrive at a consensus on which of the nominated paradigms could be recommended for immediate translational development. After briefly describing this decision process, information is presented from the nominating authors describing the 4 functional imaging paradigms that were selected for immediate development. In addition to describing the tasks, information is provided on cognitive and neural construct validity, sensitivity to behavioral or pharmacological manipulations, availability of animal models, psychometric characteristics, effects of schizophrenia, and avenues for future development. PMID:22102094

  3. Development of AN All-Purpose Free Photogrammetric Tool

    NASA Astrophysics Data System (ADS)

    González-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Guerrero, D.; Hernandez-Lopez, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; Gaiani, M.

    2016-06-01

    Photogrammetry is currently facing challenges and changes related mainly to automation, ubiquitous processing, and the variety of applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK, and UNIBO has developed an open photogrammetric tool called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS makes it possible to obtain dense, metric 3D point clouds from terrestrial and UAV images. It combines robust photogrammetric and computer-vision algorithms with the following aims: (i) increase automation, delivering dense 3D point clouds through a friendly, easy-to-use interface; (ii) increase flexibility, working with any type of image, scenario, and camera; (iii) improve quality, guaranteeing high accuracy and resolution; (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations of the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation, and system evaluation. The paper presents in detail the developments of GRAPHOS, with all its photogrammetric components, and the evaluation analyses based on various image datasets. GRAPHOS is distributed free for research and educational purposes.

  4. An Improved InSAR Image Co-Registration Method for Pairs with Relatively Big Distortions or Large Incoherent Areas

    PubMed Central

    Chen, Zhenwei; Zhang, Lei; Zhang, Guo

    2016-01-01

    Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method, based on cross-correlating uniformly distributed patches, takes no account of the specific geometric transformation between images or the characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfactory co-registration results for image pairs with relatively large distortions or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper that takes both geometric features and image content into consideration. Firstly, geometric transformations between the images, including scale, flip, rotation, and shear, are eliminated based on the geometrical information, and an initial co-registration polynomial is obtained. Then registration points are automatically detected by integrating signal-to-clutter-ratio (SCR) thresholds with the amplitude information, and a further co-registration process is performed to refine the polynomial. Several comparison experiments were carried out using two TerraSAR-X scenes from the Hong Kong airport and 21 PALSAR scenes from the Donghai Bridge. The results demonstrate that the proposed method improves co-registration accuracy, efficiency, and processing ability in cases of large distortion between images or large incoherent areas within them. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus raise the level of automation. PMID:27649207
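
    The standard offset-measurement step that the proposed method improves on can be sketched as a cross-correlation peak search between corresponding patches. The toy version below works on real-valued intensity patches and integer shifts, whereas production InSAR co-registration operates on complex SAR data and refines offsets to sub-pixel accuracy; the function name is illustrative.

```python
import numpy as np

def patch_offset(ref, sec):
    """Estimate the integer (row, col) shift d such that
    sec ~= np.roll(ref, d, axis=(0, 1)), found at the peak of the
    FFT-based circular cross-correlation of the two patches."""
    ref = ref - ref.mean()              # remove DC so the peak is sharp
    sec = sec - sec.mean()
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(sec)).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # map peak indices into the signed range [-N/2, N/2)
    return tuple(int(p) if p < n // 2 else int(p) - n
                 for p, n in zip(peak, xc.shape))
```

    Measuring such offsets on a grid of patches and fitting a low-order polynomial through them is the uniform-patch approach the abstract describes; the paper's contribution is choosing the patches and the initial polynomial from geometry and scatterer content instead of a blind uniform grid.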

  5. An Improved InSAR Image Co-Registration Method for Pairs with Relatively Big Distortions or Large Incoherent Areas.

    PubMed

    Chen, Zhenwei; Zhang, Lei; Zhang, Guo

    2016-09-17

    Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method, based on cross-correlating uniformly distributed patches, takes no account of the specific geometric transformation between images or the characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfactory co-registration results for image pairs with relatively large distortions or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper that takes both geometric features and image content into consideration. Firstly, geometric transformations between the images, including scale, flip, rotation, and shear, are eliminated based on the geometrical information, and an initial co-registration polynomial is obtained. Then registration points are automatically detected by integrating signal-to-clutter-ratio (SCR) thresholds with the amplitude information, and a further co-registration process is performed to refine the polynomial. Several comparison experiments were carried out using two TerraSAR-X scenes from the Hong Kong airport and 21 PALSAR scenes from the Donghai Bridge. The results demonstrate that the proposed method improves co-registration accuracy, efficiency, and processing ability in cases of large distortion between images or large incoherent areas within them. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus raise the level of automation.

  6. Technique adaptation, strategic replanning, and team learning during implementation of MR-guided brachytherapy for cervical cancer.

    PubMed

    Skliarenko, Julia; Carlone, Marco; Tanderup, Kari; Han, Kathy; Beiki-Ardakani, Akbar; Borg, Jette; Chan, Kitty; Croke, Jennifer; Rink, Alexandra; Simeonov, Anna; Ujaimi, Reem; Xie, Jason; Fyles, Anthony; Milosevic, Michael

    MR-guided brachytherapy (MRgBT) with interstitial needles is associated with improved outcomes in cervical cancer patients. However, there are implementation barriers, including magnetic resonance (MR) access, practitioner familiarity/comfort, and efficiency. This study explores a graded MRgBT implementation strategy that included the adaptive use of needles, strategic use of MR imaging/planning, and team learning. Twenty patients with cervical cancer were treated with high-dose-rate MRgBT (28 Gy in four fractions, two insertions, daily MR imaging/planning). A tandem/ring applicator alone was used for the first insertion in most patients. Needles were added for the second insertion based on evaluation of the initial dosimetry. An interdisciplinary expert team reviewed and discussed the MR images and treatment plans. Dosimetry-triggered technique adaptation, with the addition of needles for the second insertion, improved target coverage in all patients with initially suboptimal dosimetry, without compromising organ-at-risk (OAR) sparing. Target and OAR planning objectives were achieved in most patients. There were small or no systematic differences in tumor or OAR dosimetry between imaging/planning once per insertion and daily imaging/planning, and only small random variations. Peer review and discussion of images, contours, and plans promoted learning and process development. Technique adaptation based on the initial dosimetry is an efficient approach to implementing MRgBT while gaining comfort with the use of needles. MR imaging and planning once per insertion is safe in most patients as long as applicator shifts and large anatomical changes are excluded. Team learning is essential to building individual and programmatic competencies. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  7. Current Status of Single Particle Imaging with X-ray Lasers

    DOE PAGES

    Sun, Zhibin; Fan, Jiadong; Li, Haoyuan; ...

    2018-01-22

    The advent of ultrafast X-ray free-electron lasers (XFELs) opens the tantalizing possibility of atomic-resolution imaging of reproducible objects such as viruses, nanoparticles, single molecules, clusters, and perhaps biological cells, with single-particle imaging currently achieving resolutions better than a few tens of nanometers. Improving upon this is a significant challenge, which has been the focus of a global single-particle imaging (SPI) initiative launched in December 2014 at the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, USA. A roadmap was outlined, and significant multi-disciplinary effort has since been devoted to the technical challenges of SPI, such as radiation damage, beam characterization, beamline instrumentation and optics, sample preparation and delivery, and algorithm development, at the multiple institutions involved in the initiative. Currently, the SPI initiative has achieved 3D imaging of rice dwarf virus (RDV) and coliphage PR772 at ~10 nm resolution using soft X-ray FEL pulses at the Atomic Molecular and Optical (AMO) instrument of LCLS. Meanwhile, diffraction patterns with signal above noise out to the corner of the detector, corresponding to a resolution of ~6 Ångström (Å), were also recorded with hard X-rays at the Coherent X-ray Imaging (CXI) instrument, also at LCLS. Achieving atomic resolution is truly a grand challenge, and there is still a long way to go in light of recent developments in electron microscopy. However, the potential for studying dynamics at physiological conditions and capturing ultrafast biological, chemical, and physical processes represents a tremendous application, attracting continued interest in further method development. In this paper, we give a brief introduction to SPI developments and look ahead to further method development.

  8. Current Status of Single Particle Imaging with X-ray Lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Zhibin; Fan, Jiadong; Li, Haoyuan

    The advent of ultrafast X-ray free-electron lasers (XFELs) opens the tantalizing possibility of atomic-resolution imaging of reproducible objects such as viruses, nanoparticles, single molecules, clusters, and perhaps biological cells, with single-particle imaging currently achieving resolutions better than a few tens of nanometers. Improving upon this is a significant challenge, which has been the focus of a global single-particle imaging (SPI) initiative launched in December 2014 at the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, USA. A roadmap was outlined, and significant multi-disciplinary effort has since been devoted to the technical challenges of SPI, such as radiation damage, beam characterization, beamline instrumentation and optics, sample preparation and delivery, and algorithm development, at the multiple institutions involved in the initiative. Currently, the SPI initiative has achieved 3D imaging of rice dwarf virus (RDV) and coliphage PR772 at ~10 nm resolution using soft X-ray FEL pulses at the Atomic Molecular and Optical (AMO) instrument of LCLS. Meanwhile, diffraction patterns with signal above noise out to the corner of the detector, corresponding to a resolution of ~6 Ångström (Å), were also recorded with hard X-rays at the Coherent X-ray Imaging (CXI) instrument, also at LCLS. Achieving atomic resolution is truly a grand challenge, and there is still a long way to go in light of recent developments in electron microscopy. However, the potential for studying dynamics at physiological conditions and capturing ultrafast biological, chemical, and physical processes represents a tremendous application, attracting continued interest in further method development. In this paper, we give a brief introduction to SPI developments and look ahead to further method development.

  9. Brain Imaging in Alzheimer Disease

    PubMed Central

    Johnson, Keith A.; Fox, Nick C.; Sperling, Reisa A.; Klunk, William E.

    2012-01-01

    Imaging has played a variety of roles in the study of Alzheimer disease (AD) over the past four decades. Initially, computed tomography (CT) and then magnetic resonance imaging (MRI) were used diagnostically to rule out other causes of dementia. More recently, a variety of imaging modalities, including structural and functional MRI and positron emission tomography (PET) studies of cerebral metabolism with fluorodeoxyglucose (FDG) and amyloid tracers such as Pittsburgh Compound-B (PiB), have shown characteristic changes in the brains of patients with AD, and in prodromal and even presymptomatic states, that can help rule in the AD pathophysiological process. No one imaging modality can serve all purposes, as each has unique strengths and weaknesses. These modalities and their particular utilities are discussed in this article. The challenge for the future will be to combine imaging biomarkers to most efficiently facilitate diagnosis, disease staging, and, most importantly, the development of effective disease-modifying therapies. PMID:22474610

  10. Evaluation of breast tissue with confocal strip-mosaicking microscopy: a test approach emulating pathology-like examination

    PubMed Central

    Abeytunge, Sanjee; Larson, Bjorg; Peterson, Gary; Morrow, Monica; Rajadhyaksha, Milind

    2017-01-01

    Confocal microscopy is an emerging technology for rapid imaging of freshly excised tissue without the need for frozen- or fixed-section processing. Initial studies have described imaging of breast tissue using fluorescence confocal microscopy with small regions of interest, typically 750 × 750 μm². We present exploration with a microscope, termed the confocal strip-mosaicking microscope (CSM microscope), which images a 2 × 2 cm² area of tissue with cellular-level resolution within 10 min of excision. Using the CSM microscope, we imaged 34 fresh, large human breast tissue specimens from 18 patients; the images were blindly analyzed by a board-certified pathologist and subsequently correlated with the corresponding standard fixed histopathology. Invasive tumors and benign tissue were clearly identified in the CSM strip-mosaic images. Thirty specimens were concordant on image-to-histopathology correlation, while four were discordant. PMID:28327961

  11. A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.

    PubMed

    Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-01-01

    This paper discusses methods for the assessment of ultrasound image quality, based on our experience evaluating new methods for anatomic imaging. It presents a methodology that ensures a fair, clinically relevant assessment between competing imaging methods. The methodology is valuable in the continuing process of method optimization and the guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology have shown that it ensures the validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influences from the developer on the result, properly revealing the clinical value. The paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.

  12. A feasibility study of X-ray phase-contrast mammographic tomography at the Imaging and Medical beamline of the Australian Synchrotron.

    PubMed

    Nesterets, Yakov I; Gureyev, Timur E; Mayo, Sheridan C; Stevenson, Andrew W; Thompson, Darren; Brown, Jeremy M C; Kitchen, Marcus J; Pavlov, Konstantin M; Lockie, Darren; Brun, Francesco; Tromba, Giuliana

    2015-11-01

    Results are presented of a recent experiment at the Imaging and Medical beamline of the Australian Synchrotron, intended to contribute to the implementation of low-dose, high-sensitivity three-dimensional mammographic phase-contrast imaging, initially at synchrotrons and subsequently in hospitals and medical imaging clinics. The effects of imaging parameters such as X-ray energy, source size, detector resolution, sample-to-detector distance, and scanning and data-processing strategies on propagation-based phase-contrast computed tomography (CT) have been tested, quantified, evaluated, and optimized using a plastic phantom simulating relevant breast-tissue characteristics. Analysis of the data collected using a Hamamatsu CMOS flat-panel sensor with a pixel size of 100 µm revealed the presence of propagation-based phase contrast and demonstrated significant improvement in the quality of phase-contrast CT imaging compared with conventional (absorption-based) CT, at medically acceptable radiation doses.

  13. A CMOS image sensor with stacked photodiodes for lensless observation system of digital enzyme-linked immunosorbent assay

    NASA Astrophysics Data System (ADS)

    Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun

    2014-01-01

    A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed-signal CMOS process technology. Two photodiodes are stacked at the same position in each pixel of the sensor. The stacked photodiodes consist of a shallow high-concentration N-type layer (N+), a P-type well (PW), a deep N-type well (DNW), and the P-type substrate (P-sub). The PW and P-sub are shorted to ground. By monitoring the voltages of N+ and DNW individually, two monochromatic colors can be observed simultaneously without any color filters. The sensor is suitable for fluorescence imaging, especially contact imaging such as in a lensless observation system for digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, fluorescence can be observed accurately by calculating the difference from the initial relation between the pixel values of the two photodiodes.

  14. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for the management and viewing of imaging data, in combination with a mobile visualization tool, can greatly facilitate data analysis and interpretation. This paper presents an integrated mobile application and DIP service called M-DIP. The objectives of the system are to (1) automate the direct tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIfTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images at multiple zoom levels in three dimensions using real-world coordinates. In addition, M-DIP works on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer realizing user interpretation for direct querying and communication. The software can display biological imaging data at multiple zoom levels and at a quality that meets users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, including portable mobile and tablet devices. In addition, this system, in combination with mobile applications, establishes a visualization tool for the neuroinformatics field to speed interpretation services.

  15. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent, and data handling and transmission between the server and the user to be systematic. The management and viewing of imaging data through integrated medical services can be greatly facilitated by combining them with a mobile visualization tool. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the conversion and pre-tiling of brain images from the Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) formats to RAW format; (2) speed up the querying of imaging measurements; and (3) display high-resolution images in three dimensions using real-world coordinates. In addition, M-DIP runs on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that supports direct querying and communication. The software can display biological imaging data at multiple zoom levels and at a quality that meets users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing; this allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can serve as a measurement repository accessible from any network environment, including portable mobile and tablet devices. Together with mobile applications, the system establishes a visualization tool for the neuroinformatics field that speeds up interpretation services. PMID:23847587
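    The map-style pre-tiling that M-DIP relies on can be illustrated with a small sketch. The 256-pixel tile size and the power-of-two zoom pyramid below are generic assumptions about such tiling schemes, not details taken from the paper.

    ```python
    import math

    def tile_pyramid(width, height, tile=256):
        """Number of zoom levels and tiles per level for a map-style
        tile pyramid (hypothetical sketch of a pre-tiling step)."""
        levels = max(1, math.ceil(math.log2(max(width, height) / tile)) + 1)
        counts = []
        for z in range(levels):
            # level 0 is the most zoomed-out; each level doubles resolution
            scale = 2 ** (levels - 1 - z)
            w, h = math.ceil(width / scale), math.ceil(height / scale)
            counts.append(math.ceil(w / tile) * math.ceil(h / tile))
        return levels, counts
    ```

    For a 1024 x 1024 slice this yields three zoom levels with 1, 4, and 16 tiles, which is why serving tiles on demand is so much cheaper than shipping the whole volume to a mobile client.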

  16. Effects of a proposed quality improvement process in the proportion of the reported ultrasound findings unsupported by stored images.

    PubMed

    Schenone, Mauro; Ziebarth, Sarah; Duncan, Jose; Stokes, Lea; Hernandez, Angela

    2018-02-05

    To investigate the proportion of documented ultrasound findings that were unsupported by stored ultrasound images in an obstetric ultrasound unit, before and after the implementation of a quality improvement process consisting of a checklist and feedback. A quality improvement process was created involving the use of a checklist and feedback from physician to sonographer. The feedback was based on the physician's review of the report and images using the checklist. To assess the impact of this process, two groups were compared. Group 1 consisted of 58 ultrasound reports created prior to initiation of the process. Group 2 included 65 ultrasound reports created after process implementation. Each chart was reviewed by a physician and a sonographer. Findings considered unsupported by stored images by both reviewers were used for analysis, and the proportion of unsupported findings was compared between the two groups. Results are expressed as mean ± standard error; a p value of < .05 was used to determine statistical significance. Univariate analysis of baseline characteristics and potential confounders showed no statistically significant difference between the groups. The mean proportion of unsupported findings in Group 1 was 5.1 ± 0.87, with Group 2 having a significantly lower proportion (2.6 ± 0.62) (p = .018). The results suggest a significant decrease in the proportion of unsupported findings in ultrasound reports after implementation of the quality improvement process. Thus, we present a simple yet effective quality improvement process to reduce unsupported ultrasound findings.

  17. Deaf College Students' Representation of Image and Verbal Information.

    ERIC Educational Resources Information Center

    Epstein, Kenneth; And Others

    This paper discusses the results of a study of 27 deaf college students that investigated whether cognitive processes are modality dependent in individuals with deafness. The experiment included two separate parts, one composed of shape trials and the other composed of word trials. An initial stimulus was shown on a computer screen for…

  18. Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir

    2006-01-01

    The characterization was conducted under a Memorandum of Understanding among Orbital Sciences Corp., ORBIMAGE, Inc., and the NASA Applied Sciences Directorate. Five OrbView-3 panchromatic images were acquired of the permanent Stennis Space Center edge targets, which are painted on a concrete surface. Each image is available at two processing levels: Georaw and Basic. Georaw is an intermediate image in which individual pixels are aligned by a nominal shift in the along-scan direction to adjust for the staggered layout of the panchromatic detectors along the focal plane array. Georaw images are engineering data and are not delivered to customers. The Basic product includes a cubic interpolation to align the pixels better along the focal plane and to correct for sensor artifacts, such as smile and attitude smoothing. This product retains satellite geometry; no rectification is performed. Processing of the characterized images did not include image sharpening, which is applied by default to OrbView-3 image products delivered by ORBIMAGE to customers. Edge responses were extracted from images of tilted edges in two directions: along-scan and cross-scan. Each edge response was approximated with a superposition of three sigmoidal functions through nonlinear least-squares curve fitting. Line Spread Functions (LSF) were derived by differentiation of the analytical approximation, and Modulation Transfer Functions (MTF) were obtained by applying the discrete Fourier transform to the LSF.
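    The edge-to-MTF chain described above can be sketched on synthetic data. The paper fits a superposition of three sigmoids; a single logistic is used here for brevity, and all numeric values are illustrative, not from the characterization.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x, a, b, x0, w):
        # logistic edge model: baseline a, amplitude b, centre x0, width w
        return a + b / (1.0 + np.exp(-(x - x0) / w))

    x = np.linspace(-5, 5, 201)
    edge = sigmoid(x, 0.1, 1.0, 0.3, 0.6)        # noiseless synthetic edge response

    p, _ = curve_fit(sigmoid, x, edge, p0=[0, 1, 0, 1])
    a, b, x0, w = p

    # LSF: analytical derivative of the fitted sigmoid
    s = 1.0 / (1.0 + np.exp(-(x - x0) / w))
    lsf = b * s * (1 - s) / w

    # MTF: magnitude of the discrete Fourier transform of the LSF, normalised at DC
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    ```

    Differentiating the analytic fit rather than the raw samples is what makes the LSF (and hence the MTF) robust to noise in the measured edge.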

  19. Integrated segmentation of cellular structures

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo

    2011-03-01

    Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection, and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and a seed-detection step combining Laplacian-of-Gaussian filtering with a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
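    The Laplacian-of-Gaussian seed-detection step can be illustrated with a minimal sketch on a synthetic image. The filter scale, threshold, and blob parameters below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace, maximum_filter

    def log_seeds(img, sigma=3.0, thresh=0.02):
        """Candidate nucleus seeds: local maxima of the negative
        Laplacian-of-Gaussian response (bright blobs give negative LoG)."""
        resp = -gaussian_laplace(img.astype(float), sigma)
        peaks = (resp == maximum_filter(resp, size=int(3 * sigma))) & (resp > thresh)
        return np.argwhere(peaks)

    # synthetic image with two Gaussian "nuclei"
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 18.0) \
        + np.exp(-((yy - 45) ** 2 + (xx - 40) ** 2) / 18.0)
    seeds = log_seeds(img)
    ```

    In the real pipeline the scale is not fixed but selected from a distance map; the sketch shows only the filtering-plus-local-maxima core of the seed detector.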

  20. RVC-CAL library for endmember and abundance estimation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lazcano López, R.; Madroñal Quintín, D.; Juárez Martínez, E.; Sanz Álvaro, C.

    2015-10-01

    Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages - such as high spectral resolution - have led to its application in other fields, such as cancer detection. However, this new field imposes specific requirements; for instance, it must meet strict timing specifications, since all the potential applications - such as surgical guidance or in vivo tumor detection - imply real-time constraints. Meeting these timing requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, new research lines are studying novel processing techniques, the most relevant of which are related to system parallelization. Along that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreaded compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared to existing hyperspectral image analysis software; concretely, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also reveal some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, the experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study system performance.
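    Abundance estimation for a mixed pixel is commonly posed as a non-negativity-constrained least-squares problem. A minimal sketch with hypothetical endmember signatures follows; it illustrates the mathematical step only, not the paper's RVC-CAL implementation.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # hypothetical endmember signatures (columns): 5 bands x 2 endmembers
    E = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.6, 0.4],
                  [0.3, 0.7],
                  [0.1, 0.9]])

    # mixed pixel: 70% of endmember 0 plus 30% of endmember 1
    pixel = E @ np.array([0.7, 0.3])

    # non-negativity-constrained least squares recovers the abundances
    abund, residual = nnls(E, pixel)
    ```

    In a real chain this solve is repeated per pixel, which is exactly the data-parallel workload that motivates the RVC-CAL parallelization.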

  1. Utility of shallow-water ATRIS images in defining biogeologic processes and self-similarity in skeletal scleractinia, Florida reefs

    USGS Publications Warehouse

    Lidz, B.H.; Brock, J.C.; Nagle, D.B.

    2008-01-01

    A recently developed remote-sensing instrument acquires high-quality digital photographs in shallow-marine settings within water depths of 15 m. The technology, known as the Along-Track Reef-Imaging System, provides remarkably clear, georeferenced imagery that allows visual interpretation of benthic class (substrates, organisms) for mapping coral reef habitats, as intended. Unforeseen, however, are functions new to the initial technologic purpose: interpretable evidence for real-time biogeologic processes and for perception of scaled-up skeletal self-similarity of scleractinian microstructure. Florida reef sea trials lacked the grid structure required to map contiguous habitat and submarine topography. Thus, only general observations could be made relative to times and sites of imagery. Degradation of corals was nearly universal; absence of reef fish was profound. However, ~1% of more than 23,600 sea-trial images examined provided visual evidence for local environs and processes. Clarity in many images was so exceptional that small tracks left by organisms traversing fine-grained carbonate sand were visible. Other images revealed a compelling sense, not yet fully understood, of the microscopic wall structure characteristic of scleractinian corals. Conclusions drawn from classifiable images are that demersal marine animals, where imaged, are oblivious to the equipment and that the technology has strong capabilities beyond mapping habitat. Imagery acquired along predetermined transects that cross a variety of geomorphic features within depth limits will (1) facilitate construction of accurate contour maps of habitat and bathymetry without need for ground-truthing, (2) contain a strong geologic component of interpreted real-time processes as they relate to imaged topography and regional geomorphology, and (3) allow cost-effective monitoring of regional- and local-scale changes in an ecosystem by use of existing-image global-positioning system coordinates to re-image areas. Details revealed in the modern setting have taphonomic implications for what is often found in the geologic record.

  2. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

    A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging, and may fail, when dealing with pairs of largely deformed images such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step by step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out taking a standard reference-updating approach as a benchmark. Finally, the morphing scheme is extended to the most general case of correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
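    The idea of bridging two extreme configurations can be sketched by linearly interpolating matched control points; each intermediate point set would then drive the warp for one synthetic image. The point sets below are hypothetical, and the NURBS-based warping itself is omitted.

    ```python
    import numpy as np

    def intermediate_points(src, dst, n_steps):
        """Linearly interpolated control-point sets between a matched
        feature set in the reference (src) and deformed (dst) images.
        Each set defines the warp for one intermediate synthetic image."""
        ts = np.linspace(0.0, 1.0, n_steps + 2)[1:-1]   # exclude the two endpoints
        return [src + t * (dst - src) for t in ts]

    # hypothetical matched features (x, y) in the two extreme images
    src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    dst = np.array([[2.0, 1.0], [14.0, 0.0], [0.0, 16.0]])
    steps = intermediate_points(src, dst, 3)
    ```

    Tracking the subset transformation through these small steps is what keeps each DIC optimization close to its initial guess, even though the end-to-end deformation is large.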

  3. Overview of Athena Microscopic Imager Results

    NASA Technical Reports Server (NTRS)

    Herkenhoff, K.; Squyres, S.; Arvidson, R.; Bass, D.; Bell, J., III; Bertelsen, P.; Cabrol, N.; Ehlmann, B.; Farrand, W.; Gaddis, L.

    2005-01-01

    The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on an extendable arm, the Instrument Deployment Device (IDD). The MI acquires images at a spatial resolution of 31 microns/pixel over a broad spectral range (400 - 700 nm). The MI uses the same electronics design as the other MER cameras but its optics yield a field of view of 32 x 32 mm across a 1024 x 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. The MI science objectives, instrument design and calibration, operation, and data processing were described by Herkenhoff et al. Initial results of the MI experiment on both MER rovers (Spirit and Opportunity) have been published previously. Highlights of these and more recent results are described.

  4. Paradigms of perception in clinical practice.

    PubMed

    Jacobson, Francine L; Berlanstein, Bruce P; Andriole, Katherine P

    2006-06-01

    Display strategies for medical images in radiology have evolved in tandem with the technology by which images are made. The close of the 20th century, nearly coincident with the 100th anniversary of the discovery of x-rays, brought radiologists to a new crossroad in the evolution of image display. The increasing availability, speed, and flexibility of computer technology can now revolutionize how images are viewed and interpreted. Radiologists are not yet in agreement regarding the next paradigm for image display. The possibilities are being explored systematically through the Society for Computer Applications in Radiology's Transforming the Radiological Interpretation Process initiative. The varied input of radiologists who work in a large variety of settings will enable new display strategies to best serve radiologists in the detection and quantification of disease. Considerations and possibilities for the future are presented in this paper.

  5. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moments serve as a similarity measure for extracting SIFT feature points within similar regions. Then, the Euclidean distance is replaced with the Hellinger kernel function to improve initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to achieve seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
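    The Hellinger kernel that replaces the Euclidean distance in the matching step can be written down directly; it assumes non-negative descriptors (as SIFT histograms are) and this small sketch is illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def hellinger_similarity(d1, d2):
        """Hellinger kernel between two non-negative descriptors:
        L1-normalise each, then sum the element-wise square roots.
        Returns 1.0 for identical distributions, 0.0 for disjoint ones."""
        p = d1 / d1.sum()
        q = d2 / d2.sum()
        return np.sqrt(p * q).sum()
    ```

    Because the kernel operates on square-rooted histograms, it down-weights the large bins that dominate Euclidean comparisons, which is why it tends to produce fewer mismatches on gradient-histogram descriptors.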

  6. Strip mosaicing confocal microscopy for rapid imaging over large areas of excised tissue

    NASA Astrophysics Data System (ADS)

    Abeytunge, Sanjee; Li, Yongbiao; Larson, Bjorg; Peterson, Gary; Toledo-Crow, Ricardo; Rajadhyaksha, Milind

    2012-03-01

    Confocal mosaicing microscopy is a developing technology platform for imaging tumor margins directly in fresh tissue, without the processing that is required for conventional pathology. Previously, basal cell carcinoma margins were detected by mosaicing of confocal images of 12 x 12 mm2 of excised tissue from Mohs surgery. This mosaicing took 9 minutes. Recently we reported the initial feasibility of a faster approach called "strip mosaicing" on 10 x 10 mm2 of tissue that was demonstrated in 3 minutes. In this paper we report further advances in instrumentation and software. Rapid mosaicing of confocal images on large areas of fresh tissue potentially offers a means to perform pathology at the bedside. Thus, strip mosaicing confocal microscopy may serve as an adjunct to pathology for imaging tumor margins to guide surgery.

  7. Automatic pelvis segmentation from x-ray images of a mouse model

    NASA Astrophysics Data System (ADS)

    Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham

    2017-05-01

    The automatic detection and quantification of skeletal structures has a variety of applications in biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieve an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages: pre-processing the image to extract the pelvis area, initial pelvis mask preparation, and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure, where intra- and inter-observer variations cannot be avoided.

  8. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems in systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR to reconstruct emission computed tomography (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, a spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrant further investigation of DOBR as a means of ECT image reconstruction.
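    The core idea of DOBR, solving a shift-variant linear system by expanding the unknown in an orthogonal basis, can be sketched with a Walsh-Hadamard basis on a toy 1-D problem. The system matrix below is a random stand-in, not a SPECT model.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 8
    rng = np.random.default_rng(0)

    # hypothetical shift-variant system matrix (well-conditioned for the demo)
    H = np.eye(n) + 0.1 * rng.standard_normal((n, n))

    x_true = rng.standard_normal(n)
    y = H @ x_true                       # noiseless projection data

    B = hadamard(n) / np.sqrt(n)         # orthonormal Walsh-Hadamard basis
    # expand the unknown as x = B c and solve (H B) c = y by least squares
    c, *_ = np.linalg.lstsq(H @ B, y, rcond=None)
    x_rec = B @ c
    ```

    Working in the basis domain is what makes the method linear and non-iterative: one least-squares solve recovers the coefficients, and truncating small coefficients provides regularization when noise is present.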

  9. Design and development of linked data from the National Map

    USGS Publications Warehouse

    Usery, E. Lynn; Varanka, Dalia E.

    2012-01-01

    The development of linked data on the World-Wide Web provides the opportunity for the U.S. Geological Survey (USGS) to supply its extensive volumes of geospatial data, information, and knowledge in a machine-interpretable form and to reach users and applications that heretofore have been unavailable. To pilot a process to take advantage of this opportunity, the USGS is developing an ontology for The National Map and converting selected data from nine research test areas to a Semantic Web format to support machine processing and linked data access. In a case study, the USGS developed initial methods for accessing legacy vector- and raster-formatted geometry, attributes, and spatial relationships in a linked data environment while maintaining the capability to generate graphic or image output from semantic queries. A description of this initial USGS approach to developing an ontology, linked data, and an initial query capability from The National Map databases is presented.

  10. Toward image phylogeny forests: automatically recovering semantically similar image relationships.

    PubMed

    Dias, Zanoni; Goldenstein, Siome; Rocha, Anderson

    2013-09-10

    In the past few years, several near-duplicate detection methods have appeared in the literature to identify the cohabiting versions of a given document online. Following this trend, there have been some initial attempts to go beyond the detection task and look into the structure of evolution within a set of related images over time. In this paper, we aim to automatically identify the structure of relationships underlying the images, correctly reconstruct their past history and ancestry information, and group them into distinct trees of processing history. We introduce a new algorithm that automatically handles sets comprising different related images and outputs the phylogeny trees (also known as a forest) associated with them. Image phylogeny algorithms have many applications, such as finding the first image within a set posted online (useful for tracking copyright infringement perpetrators), hinting at creators of child pornography content, and narrowing down a list of suspects for online harassment using photographs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
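    One simple way to recover a phylogeny tree from pairwise dissimilarities is a greedy oriented-Kruskal pass, sketched below. This is an illustrative reconstruction of the general idea, not the authors' exact forest algorithm; d[i][j] is assumed to be the (asymmetric) cost of making image j a direct descendant of image i.

    ```python
    def phylogeny_tree(d):
        """Greedy oriented-Kruskal sketch: sort all directed edges by
        dissimilarity and accept an edge (i -> j) only if j has no
        parent yet and the edge does not close a cycle.
        Returns parent[] with -1 marking tree roots."""
        n = len(d)
        parent = [-1] * n
        edges = sorted((d[i][j], i, j) for i in range(n) for j in range(n) if i != j)
        for cost, i, j in edges:
            if parent[j] != -1:
                continue                      # j already has a parent
            a, cyc = i, False                 # walk up from i to detect a cycle
            while a != -1:
                if a == j:
                    cyc = True
                    break
                a = parent[a]
            if not cyc:
                parent[j] = i
        return parent
    ```

    Nodes left with parent -1 are the roots, so disconnected groups of images naturally come out as separate trees of the forest.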

  11. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
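    The GP-based refinement can be sketched as posterior-mean prediction under a given covariance matrix. The stationary RBF kernel in the usage lines is a stand-in for the paper's contour-driven non-stationary kernels, and all values are illustrative.

    ```python
    import numpy as np

    def gp_refine(K, obs_idx, obs_val, noise=1e-2):
        """Posterior mean of a Gaussian process over all locations, given
        noisy label observations at obs_idx; K is the full covariance
        matrix built from the chosen (possibly non-stationary) kernel."""
        Koo = K[np.ix_(obs_idx, obs_idx)] + noise * np.eye(len(obs_idx))
        return K[:, obs_idx] @ np.linalg.solve(Koo, obs_val)

    # usage: a stationary RBF kernel over 5 positions on a line,
    # with atlas labels observed only at the two endpoints
    x = np.arange(5.0)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
    mean = gp_refine(K, [0, 4], np.array([0.0, 1.0]))
    ```

    Swapping in a kernel that decorrelates across image contours is what lets the posterior label map respect anatomical boundaries instead of smoothing across them.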

  12. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  13. a method of gravity and seismic sequential inversion and its GPU implementation

    NASA Astrophysics Data System (ADS)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a sequential gravity and seismic inversion method that inverts for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula, the Gardner equation. For large volumes of data, we use the GPU to accelerate the computation. The gravity inversion proceeds as follows: we first calculate the correlation imaging of the observed gravity anomaly, which yields values between -1 and +1, and multiply these values by a small density increment to form the initial density model. We then compute a forward result from this initial model, calculate the correlation imaging of the misfit between the observed and forward data, multiply this result by a small density increment, and add it to the initial model; repeating this procedure yields the final inverted density model. The seismic inversion method is based on the linearity of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, we obtain a good velocity result. The sequential inversion of gravity and seismic data requires a formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor to the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenges of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C.
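    The Gardner equation that links the two model parameters is the classic empirical power law ρ = a·V^b; the standard published constants are a ≈ 0.31 and b ≈ 0.25 for velocity in m/s and density in g/cm³ (the abstract does not state which constants the authors used).

    ```python
    def gardner_density(vp, a=0.31, b=0.25):
        """Gardner's empirical relation: bulk density (g/cm^3) from
        P-wave velocity (m/s); a and b are the classic constants."""
        return a * vp ** b

    rho = gardner_density(3000.0)   # a typical sedimentary P-wave velocity
    ```

    In the sequential scheme, this one-line conversion is applied after each seismic update to hand a density model back to the gravity inversion, and inverted after each gravity update to hand a velocity model back to the seismic inversion.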
    In our inversion processing, we use the GPU to accelerate both the gravity and the seismic inversion. Taking the gravity inversion as an example, its computational kernels are the gravity forward simulation and the correlation imaging; after parallelization on the GPU, in the 3D case the original five CPU loops of the inversion module are reduced to three, and the original five CPU loops of the forward module are reduced to two. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095), and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).

  14. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    NASA Astrophysics Data System (ADS)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactic basis: an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm through user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements, which are discussed in detail.
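    The Blobworld-style grouping of per-pixel features into blobs can be sketched with a Gaussian mixture fit. The feature vectors below are synthetic stand-ins for colour-plus-texture features, scikit-learn is assumed to be available, and the interactive re-estimation of the number of Gaussians is omitted.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # hypothetical per-pixel feature vectors (e.g. colour + texture),
    # drawn from two well-separated regions of 200 pixels each
    feats = np.vstack([rng.normal(0.0, 0.05, (200, 3)),
                       rng.normal(1.0, 0.05, (200, 3))])

    # EM fit of a 2-component Gaussian mixture; each component is a "blob"
    gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
    labels = gmm.predict(feats)
    ```

    In the supervised BlobContours setting, the user's feedback effectively changes the number of components between iterations, and the mixture is refitted against the original features each time.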

  15. NASA sea ice and snow validation plan for the Defense Meteorological Satellite Program special sensor microwave/imager

    NASA Technical Reports Server (NTRS)

    Cavalieri, Donald J. (Editor); Swift, Calvin T. (Editor)

    1987-01-01

    This document addresses the task of developing and executing a plan for validating the algorithm used for initial processing of sea ice data from the Special Sensor Microwave/Imager (SSMI). The document outlines a plan for monitoring the performance of the SSMI, for validating the derived sea ice parameters, and for providing quality data products before distribution to the research community. Because of recent advances in the application of passive microwave remote sensing to snow cover on land, the validation of snow algorithms is also addressed.

  16. Analysis of ERTS imagery using special electronic viewing/measuring equipment

    NASA Technical Reports Server (NTRS)

    Evans, W. E.; Serebreny, S. M.

    1973-01-01

    An electronic satellite image analysis console (ESIAC) is being employed to process imagery for use by USGS investigators in several different disciplines studying dynamic hydrologic conditions. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. Quantitative measurements of distances, areas, and brightness profiles can be extracted digitally under operator supervision. Initial results are presented for the display and measurement of snowfield extent, glacier development, sediment plumes from estuary discharge, playa inventory, phreatophyte and other vegetative changes.

  17. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, which manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.
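    The abstract does not spell out how detections are arranged into spatially isolated groups; a minimal friends-of-friends sketch with union-find conveys the idea (brute-force O(n²) for illustration, whereas the actual pipeline must scale to more than 10^10 detections with spatial indexing):

```python
import numpy as np

def isolated_groups(xy, link_radius):
    """Friends-of-friends grouping: detections closer than link_radius end
    up in the same group. Illustrative only; not the Gaia pipeline's code."""
    n = len(xy)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    d2 = ((xy[:, None] - xy[None]) ** 2).sum(-1)  # pairwise squared distances
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] <= link_radius ** 2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    roots = [find(i) for i in range(n)]
    ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [ids[r] for r in roots]
```

    Each returned group of detections can then be analysed individually, as the abstract describes.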

  18. International Cognition and Cancer Task Force Recommendations for Neuroimaging Methods in the Study of Cognitive Impairment in Non-CNS Cancer Patients.

    PubMed

    Deprez, Sabine; Kesler, Shelli R; Saykin, Andrew J; Silverman, Daniel H S; de Ruiter, Michiel B; McDonald, Brenna C

    2018-03-01

    Cancer- and treatment-related cognitive changes have been a focus of increasing research since the early 1980s, with meta-analyses demonstrating poorer performance in cancer patients in cognitive domains including executive functions, processing speed, and memory. To facilitate collaborative efforts, in 2011 the International Cognition and Cancer Task Force (ICCTF) published consensus recommendations for core neuropsychological tests for studies of cancer populations. Over the past decade, studies have used neuroimaging techniques, including structural and functional magnetic resonance imaging (fMRI) and positron emission tomography, to examine the underlying brain basis for cancer- and treatment-related cognitive declines. As yet, however, there have been no consensus recommendations to guide researchers new to this field or to promote the ability to combine data sets. We first discuss important methodological issues with regard to neuroimaging study design, scanner considerations, and sequence selection, focusing on concerns relevant to cancer populations. We propose a minimum recommended set of sequences, including a high-resolution T1-weighted volume and a resting state fMRI scan. Additional advanced imaging sequences are discussed for consideration when feasible, including task-based fMRI and diffusion tensor imaging. Important image data processing and analytic considerations are also reviewed. These recommendations are offered to facilitate increased use of neuroimaging in studies of cancer- and treatment-related cognitive dysfunction. They are not intended to discourage investigator-initiated efforts to develop cutting-edge techniques, which will be helpful in advancing the state of the knowledge. Use of common imaging protocols will facilitate multicenter and data-pooling initiatives, which are needed to address critical mechanistic research questions.

  19. The Impact of a Health IT Changeover on Medical Imaging Department Work Processes and Turnaround Times

    PubMed Central

    Georgiou, A.; Lymer, S.; Hordern, A.; Ridley, L.; Westbrook, J.

    2015-01-01

    Summary Objectives To assess the impact of introducing a new Picture Archiving and Communication System (PACS) and Radiology Information System (RIS) on: (i) Medical Imaging work processes; and (ii) turnaround times (TATs) for x-ray and CT scan orders initiated in the Emergency Department (ED). Methods We employed a mixed method study design comprising: (i) semi-structured interviews with Medical Imaging Department staff; and (ii) retrospectively extracted ED data before (March/April 2010) and after (March/April 2011 and 2012) the introduction of a new PACS/RIS. TATs were calculated as: processing TAT (median time from image ordering to examination) and reporting TAT (median time from examination to final report). Results Reporting TAT for x-rays decreased significantly after introduction of the new PACS/RIS; from a median of 76 hours to 38 hours per order (p<.0001) for patients discharged from the ED, and from 84 hours to 35 hours (p<.0001) for patients admitted to hospital. Medical Imaging staff reported that the changeover to the new PACS/RIS led to gains in efficiency, particularly regarding the accessibility of images and patient-related information. Nevertheless, assimilation of the new PACS/RIS with existing Departmental work processes was considered inadequate and in some instances unsafe. Issues highlighted related to the synchronization of work tasks (e.g., porter arrangements) and the material set up of the work place (e.g., the number and location of computers). Conclusions The introduction of new health IT can be a “double-edged sword” providing improved efficiency but at the same time introducing potential hazards affecting the effectiveness of the Medical Imaging Department. PMID:26448790
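    The TAT definitions above reduce to median differences of timestamps; a minimal sketch with stdlib datetime, using hypothetical order records (the field names and timestamps are invented, not taken from the study's data):

```python
from datetime import datetime
from statistics import median

def tat_hours(orders):
    """Turnaround times as defined in the study: processing TAT is the
    median time from ordering to examination, reporting TAT the median
    time from examination to final report (both in hours)."""
    proc = [(o["exam"] - o["ordered"]).total_seconds() / 3600 for o in orders]
    rep = [(o["report"] - o["exam"]).total_seconds() / 3600 for o in orders]
    return median(proc), median(rep)

# Hypothetical ED x-ray orders (illustrative values only)
orders = [
    {"ordered": datetime(2011, 3, 1, 9, 0), "exam": datetime(2011, 3, 1, 10, 0),
     "report": datetime(2011, 3, 2, 22, 0)},
    {"ordered": datetime(2011, 3, 2, 8, 0), "exam": datetime(2011, 3, 2, 8, 30),
     "report": datetime(2011, 3, 4, 0, 30)},
    {"ordered": datetime(2011, 3, 3, 14, 0), "exam": datetime(2011, 3, 3, 15, 0),
     "report": datetime(2011, 3, 5, 5, 0)},
]
proc, rep = tat_hours(orders)
```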

  20. Rare stress fracture: longitudinal fracture of the femur.

    PubMed

    Pérez González, M; Velázquez Fragua, P; López Miralles, E; Abad Moretón, M M

    A 42-year-old man presented with pain in the posterolateral region of the right knee that began while he was running. Initially, magnetic resonance (MR) imaging suggested a possible aggressive process (osteosarcoma or Ewing's sarcoma), but computed tomography showed a cortical hypodense linear longitudinal image with a continuous, homogeneous and solid periosteal reaction, without a clear soft-tissue mass, which in this patient suggested a longitudinal distal femoral fatigue (stress) fracture. This type of fracture at this location is very rare. Stress fractures are entities that can be confused with an aggressive process. MR is currently the most sensitive and specific imaging method for their diagnosis. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  1. High Density or Urban Sprawl: What Works Best in Biology?

    PubMed

    Oreopoulos, John; Gray-Owen, Scott D; Yip, Christopher M

    2017-02-28

    With new approaches in imaging, from new tools or reagents to processing algorithms, come unique opportunities and challenges to our understanding of biological processes, structures, and dynamics. Although innovations in super-resolution imaging are affording novel perspectives into how molecules structurally associate and localize in response to, or in order to initiate, specific signaling events in the cell, questions arise as to how to interpret these observations in the context of biological function. Just as each neighborhood in a city has its own unique vibe, culture, and indeed density, recent work has shown that membrane receptor behavior and action are governed by their localization and association state. There is tremendous potential in developing strategies for tracking how the populations of these molecular neighborhoods change dynamically.

  2. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788

  3. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
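    The registration-and-stack idea can be illustrated in a much-reduced form. The sketch below, assuming numpy, replaces the paper's FAST detection and IMU-seeded template matching with translation-only FFT phase correlation, and the resampling with an integer circular shift; `phase_shift` and `stack` are illustrative names, not the IGN camera's implementation:

```python
import numpy as np

def phase_shift(ref, img):
    """Estimate the integer (dy, dx) translation taking ref to img via FFT
    phase correlation -- a simplified, translation-only stand-in for the
    paper's feature-point registration."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12                       # keep phase only
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Unwrap shifts larger than half the frame to negative values
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def stack(frames):
    """Register every short-exposure frame to the first, then average to
    emulate a long exposure without motion blur."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        dy, dx = phase_shift(ref, f.astype(float))
        acc += np.roll(f.astype(float), (-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

    The real algorithm estimates a full geometrical transformation between the first and the N-th images and resamples accordingly; this sketch only conveys the register-then-accumulate structure.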

  4. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver the required information that cannot be delivered by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, which integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factor. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. Inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
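    The coefficient-fusion step can be sketched directly from the description: take the 2D DCT of both images, blend the coefficients with a weighting factor, and invert. The sketch below, assuming numpy and square grayscale inputs, takes the weight as a parameter rather than running PSO:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (C @ C.T == I)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def fuse_dct(visible, infrared, w):
    """Fuse the DCT coefficients of a visible and an infrared image with
    weighting factor w (the paper obtains w via PSO; here it is given).
    Assumes square, equally sized grayscale images."""
    C = dct_matrix(visible.shape[0])
    Dv = C @ visible @ C.T            # 2D DCT of each input
    Di = C @ infrared @ C.T
    fused = w * Dv + (1 - w) * Di     # weighted coefficient fusion
    return C.T @ fused @ C            # inverse 2D DCT
```

    In the full framework, PSO would search over w to maximize a fusion-quality objective, and adaptive histogram equalization would then enhance the result.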

  5. Surface of the comet 67P from PHILAE/CIVA images as clues to the formation of the comet nucleus

    NASA Astrophysics Data System (ADS)

    Poulet, Francois; Bibring, Jean-Pierre; Carter, John; Eng, Pascal; Gondet, Brigitte; Jorda, Laurent; Langevin, Yves; Le Mouélic, Stéphane; Pilorget, Cédric

    2015-04-01

    The CIVA cameras onboard PHILAE provided the first ever in situ images of the surface of a comet (Bibring et al., this conf). The panorama acquired by CIVA at the landing site reveals a rough terrain dominated by agglomerates of consolidated materials similar to cm-sized pebbles. While the composition of these materials is unknown, their nature will be discussed in relation to both endogenic and exogenic processes that may have sculpted the landscape of the landing site. These processes include erosion (spatially non-uniform) by sublimation, redeposition of particles after ejection, fluidization and transport of cometary material on the surface, sintering effects, thermal fatigue, thermal stress, size segregation due to shaking, eolian erosion due to local outflow of cometary vapor, and impact cratering at various scales. Recent advancements in planet formation theory suggest that the initial planetesimals (or cometesimals) may grow directly from the gravitational collapse of aerodynamically concentrated small particles, often referred to as "pebbles" (Johansen et al. 2007, Nature 448, 1022; Cuzzi et al. 2008, AJ 687, 1432). We will then discuss the possibility that the observed pebble-pile structures are indicative of the formation process from which the initial nucleus formed, and how we can use this idea to learn about protoplanetary disks and the early processes involved in the Solar System formation.

  6. Image-Guided Intraoperative Cortical Deformation Recovery Using Game Theory: Application to Neocortical Epilepsy Surgery

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2010-01-01

    During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844

  7. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding.

    PubMed

    Zhang, Xuncai; Han, Feng; Niu, Ying

    2017-01-01

    Exploiting the fact that chaos is sensitive to initial conditions and is pseudorandom, combined with the spatial configurations in the DNA molecule's inherent and unique information processing ability, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value for a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and the butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation of the DNA sequence and the ciphertext feedback. The results of the experiment and security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attack operations such as statistical analysis and exhaustive analysis.
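    A stripped-down illustration of the chaotic scramble-and-diffuse structure (omitting the Keccak seeding, DNA coding, and butterfly network that the actual algorithm includes) might look like this; the function names and the logistic-map parameter are assumptions:

```python
import numpy as np

def logistic_stream(x0, n, mu=3.99):
    """Logistic-map keystream; sensitivity to x0 is what makes the key.
    (The paper seeds its chaotic map with a Keccak hash of a DNA sequence;
    here x0 is used directly as the key.)"""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        xs[i] = x
    return xs

def encrypt(img, x0):
    """Scramble pixel locations with a chaotic permutation, then diffuse
    by XOR with keystream bytes."""
    flat = img.ravel()
    ks = logistic_stream(x0, flat.size)
    perm = np.argsort(ks)                 # chaotic permutation of positions
    stream = (ks * 256).astype(np.uint8)  # keystream bytes
    return (flat[perm] ^ stream).reshape(img.shape)

def decrypt(enc, x0):
    """Regenerate the keystream from the key and invert both stages."""
    ks = logistic_stream(x0, enc.size)
    perm = np.argsort(ks)
    stream = (ks * 256).astype(np.uint8)
    flat = enc.ravel() ^ stream
    out = np.empty_like(flat)
    out[perm] = flat
    return out.reshape(enc.shape)
```

    A wrong key regenerates a different permutation and keystream, so decryption fails, which is the key-sensitivity property the abstract emphasizes.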

  8. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding

    PubMed Central

    2017-01-01

    Exploiting the fact that chaos is sensitive to initial conditions and is pseudorandom, combined with the spatial configurations in the DNA molecule's inherent and unique information processing ability, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value for a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and the butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation of the DNA sequence and the ciphertext feedback. The results of the experiment and security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attack operations such as statistical analysis and exhaustive analysis. PMID:28912802

  9. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI with the capability of travelling between and into HDR brachytherapy and external beam radiation therapy vaults. The system will provide on-line MR images immediately prior to radiation therapy. The MR images will be registered to a planning image and used for image guidance. To ensure system safety, we have performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree as well as an initial design of the facility as guidelines, possible failure modes were identified; for each of these failure modes, root causes were identified. For each possible failure, severity, detectability and occurrence scores were assigned. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each comprising 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.
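    The severity/occurrence/detectability scoring step is conventionally summarized as a risk priority number (RPN); a bookkeeping sketch with invented failure modes (not those of the actual MRgRT analysis):

```python
# FMEA bookkeeping sketch: the failure modes and 1-10 scores below are
# invented for illustration, not taken from the actual MRgRT analysis.
failure_modes = [
    # (description, severity, occurrence, detectability)
    ("MR-to-planning-image registration error", 8, 3, 4),
    ("Patient motion during transfer between vaults", 6, 4, 2),
    ("Incorrect couch position in treatment vault", 9, 2, 5),
]

def rpn(severity, occurrence, detectability):
    """Classic FMEA risk priority number: the product of the three scores."""
    return severity * occurrence * detectability

# Rank modes so mitigation effort goes to the highest-risk items first
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

    Ranking by RPN is what lets such an analysis spawn concrete design and mitigation projects before installation, as the abstract describes.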

  10. Atmospheric Motion Vectors from INSAT-3D: Initial quality assessment and its impact on track forecast of cyclonic storm NANAUK

    NASA Astrophysics Data System (ADS)

    Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham

    2016-03-01

    The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder, and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for a better understanding of the different tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of higher-spatial-resolution data and more spectral channel options in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of the INSAT-3D data and the AMV derivation processes using these data. It also discusses an initial quality assessment of INSAT-3D AMVs for a six-month period from 01 February 2014 to 31 July 2014 against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR) and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with the two existing versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce the track forecast errors for the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides some guidance to operational agencies for implementing this new AMV dataset in future Numerical Weather Prediction (NWP) applications over the south Asia region.

  11. Breaking cover: neural responses to slow and fast camouflage-breaking motion.

    PubMed

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei

    2015-08-22

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.

  12. Breaking cover: neural responses to slow and fast camouflage-breaking motion

    PubMed Central

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M.; McLoughlin, Niall; Wang, Wei

    2015-01-01

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. PMID:26269500

  13. Pulse sequences for uniform perfluorocarbon droplet vaporization and ultrasound imaging.

    PubMed

    Puett, C; Sheeran, P S; Rojas, J D; Dayton, P A

    2014-09-01

    Phase-change contrast agents (PCCAs) consist of liquid perfluorocarbon droplets that can be vaporized into gas-filled microbubbles by pulsed ultrasound waves at diagnostic pressures and frequencies. These activatable contrast agents provide benefits of longer circulating times and smaller sizes relative to conventional microbubble contrast agents. However, optimizing ultrasound-induced activation of these agents requires coordinated pulse sequences not found on current clinical systems, in order to both initiate droplet vaporization and image the resulting microbubble population. Specifically, the activation process must provide a spatially uniform distribution of microbubbles and needs to occur quickly enough to image the vaporized agents before they migrate out of the imaging field of view. The development and evaluation of protocols for PCCA-enhanced ultrasound imaging using a commercial array transducer are described. The developed pulse sequences consist of three states: (1) initial imaging at sub-activation pressures, (2) activating droplets within a selected region of interest, and (3) imaging the resulting microbubbles. Bubble clouds produced by the vaporization of decafluorobutane and octafluoropropane droplets were characterized as a function of focused pulse parameters and acoustic field location. Pulse sequences were designed to manipulate the geometries of discrete microbubble clouds using electronic steering, and cloud spacing was tailored to build a uniform vaporization field. The complete pulse sequence was demonstrated in the water bath and then in vivo in a rodent kidney. The resulting contrast provided a significant increase (>15 dB) in signal intensity. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Stop-Frame Filming and Discovery of Reactions at the Single-Molecule Level by Transmission Electron Microscopy

    PubMed Central

    2017-01-01

    We report an approach, named chemTEM, to follow chemical transformations at the single-molecule level with the electron beam of a transmission electron microscope (TEM) applied as both a tunable source of energy and a sub-angstrom imaging probe. Deposited on graphene, disk-shaped perchlorocoronene molecules are precluded from intermolecular interactions. This allows monomolecular transformations to be studied at the single-molecule level in real time and reveals chlorine elimination and reactive aryne formation as a key initial stage of multistep reactions initiated by the 80 keV e-beam. Under the same conditions, perchlorocoronene confined within a nanotube cavity, where the molecules are situated in very close proximity to each other, enables imaging of intermolecular reactions, starting with the Diels–Alder cycloaddition of a generated aryne, followed by rearrangement of the angular adduct to a planar polyaromatic structure and the formation of a perchlorinated zigzag nanoribbon of graphene as the final product. ChemTEM enables the entire process of polycondensation, including the formation of metastable intermediates, to be captured in a one-shot “movie”. A molecule with a similar size and shape but with a different chemical composition, octathio[8]circulene, under the same conditions undergoes another type of polycondensation via thiyl biradical generation and subsequent reaction leading to polythiophene nanoribbons with irregular edges incorporating bridging sulfur atoms. Graphene or carbon nanotubes supporting the individual molecules during chemTEM studies ensure that the elastic interactions of the molecules with the e-beam are the dominant forces that initiate and drive the reactions we image. Our ab initio DFT calculations explicitly incorporating the e-beam in the theoretical model correlate with the chemTEM observations and give a mechanism for direct control not only of the type of the reaction but also of the reaction rate. Selection of the appropriate e-beam energy and control of the dose rate in chemTEM enabled imaging of reactions on a time frame commensurate with TEM image capture rates, revealing atomistic mechanisms of previously unknown processes. PMID:28191929

  15. Deformable registration of the inflated and deflated lung in cone-beam CT-guided thoracic surgery: Initial investigation of a combined model- and image-driven approach

    PubMed Central

    Uneri, Ali; Nithiananthan, Sajendra; Schafer, Sebastian; Otake, Yoshito; Stayman, J. Webster; Kleinszig, Gerhard; Sussman, Marc S.; Prince, Jerry L.; Siewerdsen, Jeffrey H.

    2013-01-01

    Purpose: Surgical resection is the preferred modality for curative treatment of early stage lung cancer, but localization of small tumors (<10 mm diameter) during surgery presents a major challenge that is likely to increase as more early-stage disease is detected incidentally and in low-dose CT screening. To overcome the difficulty of manual localization (fingers inserted through intercostal ports) and the cost, logistics, and morbidity of preoperative tagging (coil or dye placement under CT-fluoroscopy), the authors propose the use of intraoperative cone-beam CT (CBCT) and deformable image registration to guide targeting of small tumors in video-assisted thoracic surgery (VATS). A novel algorithm is reported for registration of the lung from its inflated state (prior to pleural breach) to the deflated state (during resection) to localize surgical targets and adjacent critical anatomy. Methods: The registration approach geometrically resolves images of the inflated and deflated lung using a coarse model-driven stage followed by a finer image-driven stage. The model-driven stage uses image features derived from the lung surfaces and airways: triangular surface meshes are morphed to capture bulk motion; concurrently, the airways generate graph structures from which corresponding nodes are identified. Interpolation of the sparse motion fields computed from the bounding surface and interior airways provides a 3D motion field that coarsely registers the lung and initializes the subsequent image-driven stage. The image-driven stage employs an intensity-corrected, symmetric form of the Demons method. The algorithm was validated over 12 datasets, obtained from porcine specimen experiments emulating CBCT-guided VATS. Geometric accuracy was quantified in terms of target registration error (TRE) in anatomical targets throughout the lung, and normalized cross-correlation. 
Variations of the algorithm were investigated to study the behavior of the model- and image-driven stages by modifying individual algorithmic steps and examining the effect in comparison to the nominal process. Results: The combined model- and image-driven registration process demonstrated accuracy consistent with the requirements of minimally invasive VATS in both target localization (∼3–5 mm within the target wedge) and critical structure avoidance (∼1–2 mm). The model-driven stage initialized the registration to within a median TRE of 1.9 mm (95% confidence interval (CI) maximum = 5.0 mm), while the subsequent image-driven stage yielded higher accuracy localization with 0.6 mm median TRE (95% CI maximum = 4.1 mm). The variations assessing the individual algorithmic steps elucidated the role of each step and in some cases identified opportunities for further simplification and improvement in computational speed. Conclusions: The initial studies show the proposed registration method to successfully register CBCT images of the inflated and deflated lung. Accuracy appears sufficient to localize the target and adjacent critical anatomy within ∼1–2 mm and guide localization under conditions in which the target cannot be discerned directly in CBCT (e.g., subtle, nonsolid tumors). The ability to directly localize tumors in the operating room could provide a valuable addition to the VATS arsenal, obviate the cost, logistics, and morbidity of preoperative tagging, and improve patient safety. Future work includes in vivo testing, optimization of workflow, and integration with a CBCT image guidance system. PMID:23298134

  16. Deformable registration of the inflated and deflated lung in cone-beam CT-guided thoracic surgery: initial investigation of a combined model- and image-driven approach.

    PubMed

    Uneri, Ali; Nithiananthan, Sajendra; Schafer, Sebastian; Otake, Yoshito; Stayman, J Webster; Kleinszig, Gerhard; Sussman, Marc S; Prince, Jerry L; Siewerdsen, Jeffrey H

    2013-01-01

    Surgical resection is the preferred modality for curative treatment of early stage lung cancer, but localization of small tumors (<10 mm diameter) during surgery presents a major challenge that is likely to increase as more early-stage disease is detected incidentally and in low-dose CT screening. To overcome the difficulty of manual localization (fingers inserted through intercostal ports) and the cost, logistics, and morbidity of preoperative tagging (coil or dye placement under CT-fluoroscopy), the authors propose the use of intraoperative cone-beam CT (CBCT) and deformable image registration to guide targeting of small tumors in video-assisted thoracic surgery (VATS). A novel algorithm is reported for registration of the lung from its inflated state (prior to pleural breach) to the deflated state (during resection) to localize surgical targets and adjacent critical anatomy. The registration approach geometrically resolves images of the inflated and deflated lung using a coarse model-driven stage followed by a finer image-driven stage. The model-driven stage uses image features derived from the lung surfaces and airways: triangular surface meshes are morphed to capture bulk motion; concurrently, the airways generate graph structures from which corresponding nodes are identified. Interpolation of the sparse motion fields computed from the bounding surface and interior airways provides a 3D motion field that coarsely registers the lung and initializes the subsequent image-driven stage. The image-driven stage employs an intensity-corrected, symmetric form of the Demons method. The algorithm was validated over 12 datasets, obtained from porcine specimen experiments emulating CBCT-guided VATS. Geometric accuracy was quantified in terms of target registration error (TRE) in anatomical targets throughout the lung, and normalized cross-correlation. 
Variations of the algorithm were investigated to study the behavior of the model- and image-driven stages by modifying individual algorithmic steps and examining the effect in comparison to the nominal process. The combined model- and image-driven registration process demonstrated accuracy consistent with the requirements of minimally invasive VATS in both target localization (∼3-5 mm within the target wedge) and critical structure avoidance (∼1-2 mm). The model-driven stage initialized the registration to within a median TRE of 1.9 mm (95% confidence interval (CI) maximum = 5.0 mm), while the subsequent image-driven stage yielded higher accuracy localization with 0.6 mm median TRE (95% CI maximum = 4.1 mm). The variations assessing the individual algorithmic steps elucidated the role of each step and in some cases identified opportunities for further simplification and improvement in computational speed. The initial studies show the proposed registration method to successfully register CBCT images of the inflated and deflated lung. Accuracy appears sufficient to localize the target and adjacent critical anatomy within ∼1-2 mm and guide localization under conditions in which the target cannot be discerned directly in CBCT (e.g., subtle, nonsolid tumors). The ability to directly localize tumors in the operating room could provide a valuable addition to the VATS arsenal, obviate the cost, logistics, and morbidity of preoperative tagging, and improve patient safety. Future work includes in vivo testing, optimization of workflow, and integration with a CBCT image guidance system.
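The image-driven stage above uses a symmetric form of the Demons method. A minimal 1D sketch of one symmetric Demons iteration is shown below on invented toy images; this is an illustrative simplification, not the authors' implementation, and it omits the intensity-correction and model-driven initialization steps:

```python
import numpy as np

def demons_step(fixed, moving, disp, alpha=1.0):
    """One additive Demons update for a 1D displacement field (sketch).
    Symmetric force: the average of the fixed- and warped-image gradients."""
    x = np.arange(fixed.size, dtype=float)
    warped = np.interp(x + disp, x, moving)      # warp moving image with current field
    diff = warped - fixed
    grad = 0.5 * (np.gradient(fixed) + np.gradient(warped))
    update = -diff * grad / (grad**2 + alpha * diff**2 + 1e-12)
    disp = disp + update
    kernel = np.ones(5) / 5.0                    # crude smoothing regularizer
    return np.convolve(disp, kernel, mode="same")

x = np.arange(100, dtype=float)
fixed = np.exp(-((x - 30.0) / 5.0) ** 2)         # target image: a smooth bump
moving = np.exp(-((x - 33.0) / 5.0) ** 2)        # same bump, shifted by 3 px
disp = np.zeros_like(x)
for _ in range(50):
    disp = demons_step(fixed, moving, disp)
err_before = np.mean(np.abs(moving - fixed))
err_after = np.mean(np.abs(np.interp(x + disp, x, moving) - fixed))
```

After the iterations, the recovered displacement field aligns the shifted bump with the target, reducing the residual intensity error.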

  17. Constraint-based stereo matching

    NASA Technical Reports Server (NTRS)

    Kuan, D. T.

    1987-01-01

    The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. The epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
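The initial-match step (epipolar constraint plus individual edge-property checks) can be sketched as follows; the edge-segment records and thresholds below are invented for illustration:

```python
# Hypothetical edge-segment records for a rectified pair: (row, col, orientation_deg).
left_edges = [(10, 42, 85.0), (10, 97, 40.0), (25, 60, 10.0)]
right_edges = [(10, 35, 83.0), (10, 90, 12.0), (25, 55, 11.0)]

def initial_matches(left, right, max_disparity=15, max_angle_diff=10.0):
    """Pair edge segments using the epipolar constraint (same image row)
    plus an individual-edge-property check (similar orientation)."""
    matches = []
    for i, (rl, cl, al) in enumerate(left):
        for j, (rr, cr, ar) in enumerate(right):
            if rl != rr:                       # epipolar constraint
                continue
            d = cl - cr                        # disparity (left col minus right col)
            if not (0 <= d <= max_disparity):  # plausible disparity range
                continue
            if abs(al - ar) > max_angle_diff:  # edge-property constraint
                continue
            matches.append((i, j, d))
    return matches

matches = initial_matches(left_edges, right_edges)
```

The ambiguous candidate pair on row 10 with mismatched orientations is rejected, leaving a locally consistent match set that the later geometric constraints would refine.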

  18. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values of perceptual visual quality for denoised images and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.
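The core idea, encoding a 2D patch with a left/right dictionary pair rather than vectorizing it, can be sketched with the SVD bases standing in for a learned pair. This is a simplification for illustration only: the Grassmann-manifold subspace partition, sub-dictionary merging, and graph Laplacian smoothing of the DPLG algorithm are all omitted:

```python
import numpy as np

def two_sided_sparse_code(patch, rank):
    """Encode a 2D patch with a left/right basis pair (here the SVD bases
    act as a stand-in dictionary pair), keeping only `rank` terms of the
    sparse 2D code, then reconstruct without vectorizing the patch."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    s_sparse = np.zeros_like(s)
    s_sparse[:rank] = s[:rank]      # sparse code: truncated spectrum
    return (U * s_sparse) @ Vt      # reconstruction from the sparse code

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, np.pi, 8)), np.ones(8))  # rank-1 patch
noisy = clean + 0.1 * rng.standard_normal((8, 8))
denoised = two_sided_sparse_code(noisy, rank=1)
```

Because the clean patch has low rank, the truncated two-sided code discards most of the noise while preserving the patch's 2D structure.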

  19. Enhancing the far-UV sensitivity of silicon CMOS imaging arrays

    NASA Astrophysics Data System (ADS)

    Retherford, K. D.; Bai, Yibin; Ryu, Kevin K.; Gregory, J. A.; Welander, Paul B.; Davis, Michael W.; Greathouse, Thomas K.; Winter, Gregory S.; Suntharalingam, Vyshnavi; Beletic, James W.

    2014-07-01

    We report our progress toward optimizing backside-illuminated silicon PIN CMOS devices developed by Teledyne Imaging Sensors (TIS) for far-UV planetary science applications. This project was motivated by initial measurements at Southwest Research Institute (SwRI) of the far-UV responsivity of backside-illuminated silicon PIN photodiode test structures described in Bai et al., SPIE, 2008, which revealed a promising QE in the 100-200 nm range as reported in Davis et al., SPIE, 2012. Our effort to advance the capabilities of thinned silicon wafers capitalizes on recent innovations in molecular beam epitaxy (MBE) doping processes. Key achievements to date include: 1) Representative silicon test wafers were fabricated by TIS, and set up for MBE processing at MIT Lincoln Laboratory (LL); 2) Preliminary far-UV detector QE simulation runs were completed to aid MBE layer design; 3) Detector fabrication was completed through the pre-MBE step; and 4) Initial testing of the MBE doping process was performed on monitoring wafers, with detailed quality assessments. Early results suggest that potential challenges in optimizing the UV-sensitivity of silicon PIN type CMOS devices, compared with similar UV enhancement methods established for CCDs, have been mitigated through our newly developed methods. We will discuss the potential advantages of our approach and briefly describe future development steps.

  20. Measurement of glucose concentration by image processing of thin film slides

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David

    2012-02-01

    Measurement of glucose concentration is important for diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing-based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose, present in the sample, reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with the glucose concentration level. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough-based technique, and an intensity-based feature is then calculated from the segmented region. Subsequently, a mathematical model that describes the relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented, followed by feature extraction. These two initial steps are similar to those done in training. However, in the final step, the algorithm uses the model (feature vs. concentration) obtained from training and the feature generated from the test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
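The train-then-predict calibration step can be sketched as a simple curve fit; the intensity and concentration values below are invented, and the quadratic model is an assumption (the paper does not specify the model form):

```python
import numpy as np

# Hypothetical training data: mean dye intensity of the segmented region
# measured at known glucose concentrations (mg/dL). Darker dye (lower
# intensity) corresponds to higher glucose here.
train_intensity = np.array([0.90, 0.72, 0.55, 0.41, 0.30])
train_conc = np.array([50.0, 100.0, 150.0, 200.0, 250.0])

# Training phase: fit a feature-vs-concentration model (quadratic, assumed).
coeffs = np.polyfit(train_intensity, train_conc, deg=2)

# Testing phase: segment the test slide, extract the same intensity feature,
# and predict the unknown concentration from the fitted model.
test_intensity = 0.63
predicted = np.polyval(coeffs, test_intensity)
```

A test feature falling between two training intensities yields a concentration between the corresponding calibration levels.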

  1. Empowering schoolchildren to do astronomical science with images

    NASA Astrophysics Data System (ADS)

    Raeside, L.; Busschots, B.; O'Cinneide, E.; Foy, S.; Keating, J. G.

    2005-06-01

    In 1991 the TIE (Telescopes in Education) Foundation provided schoolchildren with the ability to access professional observatory telescopes remotely. TIE has raised the profile of astronomy and science among schoolchildren. Since the initiation of this facility the TIE Foundation has spread its reach from one telescope in the US to many telescopes and many schools across the globe. The VTIE (Virtual Telescopes in Education) project was launched in 2001 to build on the success of TIE. The VTIE VLE (Virtual Learning Environment) provides a Web portal through which pupils can create a scientific proposal, retrieve astronomical images, and produce a scientific paper summarizing their learning experiences of the VTIE scientific process. Since the completion of the first formative evaluations of VTIE (which involved over 250 schoolchildren) it has been observed that the participating schoolchildren have had difficulty completing and understanding the practical imaging aspects of astronomical science. Our experimental observations have revealed that the imaging tools currently available to astronomers have not ported well to schools. The VTIE imaging tools developed during our research will provide schoolchildren with the ability to store, acquire, manipulate and analyze images within the VTIE VLE. It is hypothesized herein that the provision of exclusively child-centered imaging software components will greatly improve the children's empowerment within the VTIE scientific process. Consequently, the addition of fully integrated child-centered imaging tools will contribute positively to the overall VTIE goal to promote science among schoolchildren.

  2. The start of lightning: Evidence of bidirectional lightning initiation.

    PubMed

    Montanyà, Joan; van der Velde, Oscar; Williams, Earle R

    2015-10-16

    Lightning flashes are known to initiate in regions of strong electric fields inside thunderstorms, between layers of positively and negatively charged precipitation particles. For that reason, lightning inception is typically hidden from sight of camera systems used in research. Other technology such as lightning mapping systems based on radio waves can typically detect only some aspects of the lightning initiation process and subsequent development of positive and negative leaders. We report here a serendipitous recording of bidirectional lightning initiation in virgin air under the cloud base at ~11,000 images per second, and the differences in characteristics of opposite polarity leader sections during the earliest stages of the discharge. This case reveals natural lightning initiation, propagation and a return stroke as in negative cloud-to-ground flashes, upon connection to another lightning channel - without any masking by cloud.

  3. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
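The parametric model described above, in which each vertebra is expressed as the mean plus a linear combination of principal component vectors, can be sketched on toy landmark vectors. The data are invented, and the MAP estimation and geodesic active contour steps are omitted; this shows only the PCA model build and a least-squares fit to a target:

```python
import numpy as np

# Hypothetical training shapes: each row is a flattened landmark vector
# generated as mean + coefficient * a single shape mode.
rng = np.random.default_rng(1)
mean_shape = np.array([0.0, 1.0, 2.0, 3.0])
modes = np.array([[1.0, 0.0, -1.0, 0.0]]) / np.sqrt(2)   # one unit mode
train = mean_shape + rng.standard_normal((20, 1)) @ modes

# Build the statistical model: PCA of the training shapes.
mu = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mu, full_matrices=False)
pc = Vt[:1]                        # leading principal component

# Fit the model to a target shape: project onto the components,
# then reconstruct as a linear combination of them.
target = mean_shape + 2.0 * modes[0]
b = pc @ (target - mu)             # shape parameters
fitted = mu + b @ pc               # model-based reconstruction
```

Because the target lies in the span of the training variation, the fitted parametric shape reproduces it; real CT fitting would additionally weight the fit by image intensities and priors.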

  4. Geometric Calibration and Validation of Ultracam Aerial Sensors

    NASA Astrophysics Data System (ADS)

    Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried

    2016-03-01

    We present details of the calibration and validation procedure of UltraCam Aerial Camera systems. Results from the laboratory calibration and from validation flights are presented for both, the large format nadir cameras and the oblique cameras as well. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping grade nadir component together with the four oblique camera heads. The geometric processing after the flight mission is being covered by the UltraMap software product. Thus we present details about the workflow as well. The first part consists of the initial post-processing which combines image information as well as camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT) is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator to analyze large blocks of aerial images and to judge the quality of the photogrammetric set-up.

  5. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
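One way to make the scale-dependent spread concrete is a shortest-arrival-time simulation in which each step is cheap where the region is wide and expensive where it is narrow. This is a sketch under our own assumptions (Manhattan distance as the local-scale proxy, cost = 1/width), not the authors' growth-cone model:

```python
import heapq
import numpy as np

def spread_times(mask, seed):
    """Dijkstra-style spread of 'attention' over a binary image region:
    entering a pixel costs 1 / (local width), so spread is fast over
    large homogeneous areas and slow through narrow parts. Assumes the
    region does not fill the whole grid (some background must exist)."""
    h, w = mask.shape
    ys, xs = np.nonzero(~mask)
    bg = np.stack([ys, xs], axis=1)          # background pixel coordinates

    def width(y, x):                         # brute-force local scale:
        return 1.0 + np.min(np.abs(bg[:, 0] - y) + np.abs(bg[:, 1] - x))

    times = {seed: 0.0}
    pq = [(0.0, seed)]
    while pq:
        t, (y, x) = heapq.heappop(pq)
        if t > times.get((y, x), np.inf):
            continue                         # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                nt = t + 1.0 / width(ny, nx)
                if nt < times.get((ny, nx), np.inf):
                    times[(ny, nx)] = nt
                    heapq.heappush(pq, (nt, (ny, nx)))
    return times

# A wide corridor and a narrow corridor of equal length, joined at the left.
mask = np.zeros((7, 8), dtype=bool)
mask[0:3, :] = True     # wide corridor (3 pixels tall)
mask[5, :] = True       # narrow corridor (1 pixel tall)
mask[:, 0] = True       # connecting column
times = spread_times(mask, (1, 0))
```

Traversing the narrow corridor takes longer per pixel than traversing the wide one, mirroring the reaction-time pattern the growth-cone model explains.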

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Legge, K; O’Connor, D J; Nguyen, D

    Purpose: To determine prostate motion during SBRT boost treatments with a Rectafix rectal sparing device in place using kV imaging during treatment. Methods: Patients each had three gold fiducial markers inserted into the prostate and received two VMAT boost fractions of 9.5–10 Gy under the PROMETHEUS clinical trial protocol with a Rectafix rectal retractor in place. Two-dimensional kilovoltage images of fiducial markers were acquired continuously during delivery. Three patients were treated on a Varian Clinac iX linear accelerator (6X, 600 MU/min), where kV images were acquired at 5 Hz during treatment. Seven patients were treated on a Varian Truebeam linear accelerator (10XFFF, 2400 MU/min) where kV images were acquired every 3 seconds. Images were processed off-line using the Kilovoltage Intrafraction Monitoring (KIM) software after treatment. KIM determines prostate position in three dimensions from 2D kV projections using a probability density model and a pre-treatment kV arc. The 3D displacement of the prostate was quantified as a function of time throughout each fraction. Results: From all fractions analyzed, it was found that the prostate had moved less than 1 mm in any direction from its initial position 84.6% of the time. The prostate was between 1 and 2 mm from its initial position 14.2% of the time, between 2 and 3 mm of its initial position 0.8% of the time and was greater than 3 mm from its initial position only 0.4% of the time. Conclusion: The amount of prostate motion observed during prostate SBRT boost treatments with a Rectafix device in place was minimal and lower than that observed in non-Rectafix studies. The Rectafix device reduces rectal dose as well as immobilizing the prostate. Kimberley Legge is the recipient of an Australian Postgraduate Award.
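The displacement-band statistics reported above can be computed from a 3D trajectory in a few lines; the trace below is random stand-in data, not trial data:

```python
import numpy as np

# Hypothetical 3D prostate displacement trace (mm) sampled during delivery.
rng = np.random.default_rng(2)
disp = rng.normal(0.0, 0.5, size=(300, 3))   # stand-in trajectory, one fraction
r = np.linalg.norm(disp, axis=1)             # 3D distance from initial position

# Fraction of imaged time spent within each displacement band, as in the study.
bands = {
    "<1 mm": np.mean(r < 1.0),
    "1-2 mm": np.mean((r >= 1.0) & (r < 2.0)),
    "2-3 mm": np.mean((r >= 2.0) & (r < 3.0)),
    ">3 mm": np.mean(r >= 3.0),
}
```

The four band fractions partition the imaged time, so they sum to one; applied to KIM output they would reproduce the 84.6% / 14.2% / 0.8% / 0.4% breakdown.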

  7. Action Potentials Initiate in the Axon Initial Segment and Propagate Through Axon Collaterals Reliably in Cerebellar Purkinje Neurons

    PubMed Central

    Foust, Amanda; Popovic, Marko; Zecevic, Dejan; McCormick, David A.

    2010-01-01

    Purkinje neurons are the output cells of the cerebellar cortex and generate spikes in two distinct modes, known as simple and complex spikes. Revealing the point of origin of these action potentials, and how they conduct into local axon collaterals, is important for understanding local and distal neuronal processing and communication. By utilizing a recent improvement in voltage sensitive dye imaging technique that provided exceptional spatial and temporal resolution, we were able to resolve the region of spike initiation as well as follow spike propagation into axon collaterals for each action potential initiated on single trials. All fast action potentials, for both simple and complex spikes, whether occurring spontaneously or in response to a somatic current pulse or synaptic input, initiated in the axon initial segment. At discharge frequencies of less than approximately 250 Hz, spikes propagated faithfully through the axon and axon collaterals, in a saltatory manner. Propagation failures were only observed for very high frequencies or for the spikelets associated with complex spikes. These results demonstrate that the axon initial segment is a critical decision point in Purkinje cell processing and that the properties of axon branch points are adjusted to maintain faithful transmission. PMID:20484631

  8. Computerized follow-up of discrepancies in image interpretation between emergency and radiology departments.

    PubMed

    Siegel, E; Groleau, G; Reiner, B; Stair, T

    1998-08-01

    Radiographs are ordered and interpreted for immediate clinical decisions 24 hours a day by emergency physicians (EPs). The Joint Commission for Accreditation of Health Care Organizations requires that all these images be reviewed by radiologists and that there be some mechanism for quality improvement (QI) for discrepant readings. There must be a log of discrepancies and documentation of follow-up activities, but this alone does not guarantee effective QI. Radiologists reviewing images from the previous day and night often must guess at the preliminary interpretation of the EP and whether follow-up action is necessary. EPs may remain ignorant of the final reading and falsely assume the initial diagnosis and treatment were correct. Some hospitals use a paper system in which the EP writes a preliminary interpretation on the requisition slip, which will be available when the radiologist dictates the final reading. Some hospitals use a classification of discrepancies based on clinical import and urgency, communicated to the EP on duty at the time of the official reading, but may not communicate discrepancies to the EPs who initially read the images. Our computerized radiology department and picture archiving and communications system (PACS) have increased technologist and radiologist productivity, and decreased retakes and lost films. There are fewer face-to-face consultations between radiologists and clinicians, but more communication by telephone and electronic annotation of PACS images. We have integrated the QI process for emergency department (ED) images into the PACS, and gained advantages over the traditional discrepancy log. Requisitions including clinical indications are entered into the Hospital Information System and then appear on the PACS along with the images for reading. The initial impression, time of review, and the initials of the EP are available to the radiologist dictating the official report.
The radiologist decides if there is a discrepancy, and whether it is category I (potentially serious, needs immediate follow-up), category II (moderate risk, follow-up in one day), or category III (low risk, follow-up in several days). During the working day, the radiologist calls immediately for category I discrepancies. Those noted from the evening, night, or weekend before are called to the EP the next morning. All discrepancies with the preliminary interpretation are communicated to the EP and are kept in a computerized log for review by a radiologist at a weekly ED teaching conference. This system has reduced the need for the radiologist to ask or guess what the impression was in the ED the night before. It has reduced the variability in the recording of impressions by EPs, in communication back from radiologists, in the clinical follow-up made, and in the documentation of the whole QI process. This system ensures that EPs receive notification of their discrepant readings, and provides continuing education to all the EPs on interpreting images on their patients.
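The three-category discrepancy policy described above can be sketched as a small data model; the class and field names are hypothetical, and only the category semantics come from the text:

```python
from dataclasses import dataclass

# Follow-up policy per discrepancy category, as described in the text.
FOLLOW_UP = {
    "I": "call the emergency physician immediately (potentially serious)",
    "II": "notify within one day (moderate risk)",
    "III": "notify within several days (low risk)",
}

@dataclass
class Discrepancy:
    """One entry in the computerized discrepancy log (hypothetical schema)."""
    accession: str
    preliminary_reading: str   # the EP's initial impression, entered with the requisition
    final_reading: str         # the radiologist's official report
    category: str              # "I", "II", or "III"

    def follow_up_action(self) -> str:
        return FOLLOW_UP[self.category]

# Example log entry reviewed at the weekly ED teaching conference.
log = [Discrepancy("A123", "no acute fracture", "subtle scaphoid fracture", "I")]
```

Keeping the category on each entry lets the same log drive both the immediate-notification workflow and the weekly conference review.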

  9. Initial steps toward the realization of large area arrays of single photon counting pixels based on polycrystalline silicon TFTs

    NASA Astrophysics Data System (ADS)

    Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao; Street, Robert A.; Lu, Jeng Ping

    2014-03-01

    The thin-film semiconductor processing methods that enabled creation of inexpensive liquid crystal displays based on amorphous silicon transistors for cell phones and televisions, as well as desktop, laptop and mobile computers, also facilitated the development of devices that have become ubiquitous in medical x-ray imaging environments. These devices, called active matrix flat-panel imagers (AMFPIs), measure the integrated signal generated by incident X rays and offer detection areas as large as ~43×43 cm2. In recent years, there has been growing interest in medical x-ray imagers that record information from X ray photons on an individual basis. However, such photon counting devices have generally been based on crystalline silicon, a material not inherently suited to the cost-effective manufacture of monolithic devices of a size comparable to that of AMFPIs. Motivated by these considerations, we have developed an initial set of small area prototype arrays using thin-film processing methods and polycrystalline silicon transistors. These prototypes were developed in the spirit of exploring the possibility of creating large area arrays offering single photon counting capabilities and, to our knowledge, are the first photon counting arrays fabricated using thin film techniques. In this paper, the architecture of the prototype pixels is presented and considerations that influenced the design of the pixel circuits, including amplifier noise, TFT performance variations, and minimum feature size, are discussed.

  10. Time reversal for photoacoustic tomography based on the wave equation of Nachman, Smith, and Waag

    NASA Astrophysics Data System (ADS)

    Kowar, Richard

    2014-02-01

    One goal of photoacoustic tomography (PAT) is to estimate an initial pressure function φ from pressure data measured at a boundary surrounding the object of interest. This paper is concerned with a time reversal method for PAT that is based on the dissipative wave equation of Nachman, Smith, and Waag [J. Acoust. Soc. Am. 88, 1584 (1990), 10.1121/1.400317]. This equation is a correction of the thermoviscous wave equation such that its solution has a finite wave front speed and, in contrast to the thermoviscous wave equation, can model several relaxation processes. In this sense, it is more accurate than the thermoviscous wave equation. For simplicity, we focus on the case of one relaxation process. We derive an exact formula for the time reversal image I, which depends on the relaxation time τ1 and the compressibility κ1 of the dissipative medium, and show I(τ1, κ1) → φ for κ1 → 0. This implies that I = φ holds in the dissipation-free case and that I is similar to φ for sufficiently small compressibility κ1. Moreover, we show for tissue similar to water that the small wave number approximation I0 of the time reversal image satisfies I0 = η0 ∗x φ with η̂0(|k|) ≈ const. for |k| ≪ 1/(c0τ1), where φ denotes the initial pressure function. For such tissue, our theoretical analysis and numerical simulations show that the time reversal image I is very similar to the initial pressure function φ and that a resolution of σ ≈ 0.036 mm is feasible (for exact measurement data).
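The small-wave-number statement, that the time reversal image is the initial pressure convolved with a kernel whose transfer function is flat for |k| ≪ 1/(c0τ1), can be checked numerically in 1D. The specific low-pass form used below is a hypothetical stand-in, not the paper's kernel:

```python
import numpy as np

n = 1024
dx = 0.01e-3                                   # 10 µm grid spacing (m)
x = (np.arange(n) - n // 2) * dx
c0, tau1 = 1500.0, 1e-9                        # water-like speed, short relaxation time
phi = np.exp(-x**2 / (2 * (0.2e-3)**2))        # smooth initial pressure (0.2 mm wide)

# Transfer function eta_hat(k): ~1 for |k| << 1/(c0*tau1), decaying beyond
# (an assumed low-pass shape, chosen only to have the stated flat regime).
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
eta_hat = 1.0 / (1.0 + (k * c0 * tau1)**2)

# I0 = eta0 convolved with phi, applied in the Fourier domain.
I0 = np.fft.ifft(np.fft.fft(phi) * eta_hat).real
rel_err = np.max(np.abs(I0 - phi)) / np.max(phi)
```

Because the feature's spectrum lives at |k| far below 1/(c0τ1), the filtered image is nearly indistinguishable from φ, consistent with the claim that the time reversal image closely resembles the initial pressure for water-like tissue.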

  11. Image processing for stripper harvested cotton trash content measurement a progress report

    USDA-ARS?s Scientific Manuscript database

    This study was initiated to provide the basis for obtaining on-line information as to the levels of the various types of gin trash. The objective is to provide the ginner with knowledge of the quantity of the various trash components in the raw uncleaned seed cotton. This information is currently no...

  12. Landsat continuity: issues and opportunities for land cover monitoring

    Treesearch

    Michael A. Wulder; Joanne C. White; Samuel N. Goward; Jeffrey G. Masek; James R. Irons; Martin Herold; Warren B. Cohen; Thomas R. Loveland; Curtis E. Woodcock

    2008-01-01

    Initiated in 1972, the Landsat program has provided a continuous record of Earth observation for 35 years. The assemblage of Landsat spatial, spectral, and temporal resolutions, over a reasonably sized image extent, results in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is absolutely unique and...

  13. Strength and coherence of binocular rivalry depends on shared stimulus complexity.

    PubMed

    Alais, David; Melcher, David

    2007-01-01

Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses, etc.). Two rivalry characteristics were measured: depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest that rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', thereby accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.

  14. Enhanced visualization of inner ear structures

    NASA Astrophysics Data System (ADS)

    Niemczyk, Kazimierz; Kucharski, Tomasz; Kujawinska, Malgorzata; Bruzgielewicz, Antoni

    2004-07-01

Modern surgery increasingly requires support from imaging technologies in order to increase the effectiveness and safety of operations. One important task is to enhance the visualisation of quasi-phase (transparent) 3D structures. Such structures are characterized by very low contrast, which makes differentiation of tissues in the field of view very difficult and can leave the surgeon highly uncertain during the operation. This problem arises when supporting operations on the inner ear, during which the physician has to perform cuts at specific places on quasi-transparent velums. Conventionally, during such operations the medical doctor views the operating field through a stereoscopic microscope. In this paper we propose a 3D visualisation system based on a Helmet Mounted Display (HMD). Two CCD cameras placed at the output of the microscope acquire stereo pairs of images. The images are processed in real time with the goal of enhancing the quasi-phase structures. The main task is to create an algorithm that is not sensitive to changes in intensity distribution; the disadvantage of existing algorithms is their lack of adaptation to reflexes and shadows occurring in the field of view. The processed images from both the left and right channels are overlaid on the actual images and displayed on the LCDs of the Helmet Mounted Display. The physician thus observes through the HMD a stereoscopic operating scene with indications of the places of special interest. The authors present the hardware, the procedures applied, and initial results of inner ear structure visualisation. Several problems connected with the processing of stereo-pair images are discussed.

  15. Color image encryption by using Yang-Gu mixture amplitude-phase retrieval algorithm in gyrator transform domain and two-dimensional Sine logistic modulation map

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Liu, Benqing; Wang, Qiang; Li, Ye; Liang, Junli

    2015-12-01

A color image encryption scheme is proposed based on the Yang-Gu mixture amplitude-phase retrieval algorithm and a two-coupled logistic map in the gyrator transform domain. First, the color plaintext image is decomposed into red, green and blue components, which are scrambled individually by three random sequences generated using the two-dimensional Sine logistic modulation map. Second, each scrambled component is encrypted into a real-valued function with a stationary white noise distribution by the iterative amplitude-phase retrieval process in the gyrator transform domain, and the three resulting functions are taken as the red, green and blue channels of the color ciphertext image. The ciphertext image is thus a real-valued function, which is more convenient to store and transmit. In the encryption and decryption processes, a chaotic random phase mask generated from the logistic map is employed as the phase key, which means that only the initial values are used as the private key, making key management highly convenient. Meanwhile, the security of the cryptosystem is greatly enhanced by the high sensitivity of the private keys. Simulation results are presented to prove the security and robustness of the proposed scheme.
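
The 2D Sine logistic modulation map and the Yang-Gu iteration themselves are not reproducible from the abstract; as a minimal sketch of the chaotic-phase-key idea it describes (function name and parameter values are illustrative), a phase mask driven by the plain logistic map could look like:

```python
import numpy as np

def logistic_phase_mask(shape, r=3.99, x0=0.37):
    """Chaotic random phase mask from the logistic map x_{n+1} = r*x_n*(1 - x_n).

    The initial value x0 plays the role of the private key: because the map
    is chaotic, a tiny perturbation of x0 produces an entirely different mask.
    """
    n = shape[0] * shape[1]
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    # map the chaotic sequence onto unit-modulus phase factors
    return np.exp(1j * 2.0 * np.pi * x.reshape(shape))

mask_a = logistic_phase_mask((16, 16), x0=0.37)
mask_b = logistic_phase_mask((16, 16), x0=0.37 + 1e-12)  # key off by only 1e-12
```

The exponential divergence of nearby trajectories is exactly the "high sensitivity of the private keys" the abstract relies on.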

  16. Osteoclast fusion is initiated by a small subset of RANKL-stimulated monocyte progenitors, which can fuse to RANKL-unstimulated progenitors.

    PubMed

    Levaot, Noam; Ottolenghi, Aner; Mann, Mati; Guterman-Ram, Gali; Kam, Zvi; Geiger, Benjamin

    2015-10-01

Osteoclasts are multinucleated, bone-resorbing cells formed via fusion of monocyte progenitors, a process triggered by prolonged stimulation with RANKL, the osteoclast master regulator cytokine. Monocyte fusion into osteoclasts has been shown to play a key role in bone remodeling and homeostasis; therefore, aberrant fusion may be involved in a variety of bone diseases. Indeed, research in the last decade has led to the discovery of genes regulating osteoclast fusion; yet the basic cellular regulatory mechanism underlying the fusion process is poorly understood. Here, we applied a novel approach for tracking the fusion process, using live-cell imaging of RANKL-stimulated and non-stimulated progenitor monocytes differentially expressing dsRED or GFP, respectively. We show that osteoclast fusion is initiated by a small (~2.4%) subset of precursors, termed "fusion founders", capable of fusing either with other founders or with non-stimulated progenitors (fusion followers), which alone are unable to initiate fusion. Careful examination indicates that the fusion between a founder and a follower cell consists of two distinct phases: an initial pairing of the two cells, typically lasting 5-35 min, during which the cells nevertheless maintain their initial morphology; and the fusion event itself. Interestingly, during the initial pre-fusion phase, a transfer of the fluorescent reporter proteins from nucleus to nucleus was noticed, suggesting crosstalk between the founder and follower progenitors via the cytoplasm that might directly affect the fusion process, as well as overall transcriptional regulation in the developing heterokaryon. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Automatic segmentation for brain MR images via a convex optimized segmentation and bias field correction coupled model.

    PubMed

    Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui

    2014-09-01

Accurate segmentation of magnetic resonance (MR) images remains challenging, mainly due to intensity inhomogeneity, commonly known as bias field. Active contour models with geometric information constraints have recently been applied; however, most of them deal with the bias field in a separate pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a small neighborhood. The cluster centers in this objective function carry a multiplicative factor that estimates the bias within the neighborhood. To reduce the effect of noise, the local intensity variations are described by Gaussian distributions with different means and variances. The objective functions are then integrated over the entire domain. To obtain the global optimum and make the results independent of initialization, we reformulate the energy function to be convex and minimize it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate bias fields with quite general profiles, even in 7T MR images. Moreover, our model can distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities, with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.
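
The Split Bregman formulation is beyond an abstract-level sketch, but the underlying assumption of a multiplicative, slowly varying bias can be illustrated with a simple homomorphic correction (this is not the authors' model; function names and parameters are illustrative):

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with edge padding (stands in for any low-pass filter)."""
    kern = np.ones(k) / k
    pad = k // 2
    def blur1d(a):
        return np.convolve(np.pad(a, pad, mode="edge"), kern, mode="valid")
    return np.apply_along_axis(blur1d, 0, np.apply_along_axis(blur1d, 1, img))

def correct_bias(img, k=31, eps=1e-6):
    """Treat the smooth component of the log-image as the multiplicative bias
    and divide it out; the mean intensity level is absorbed into the bias."""
    bias = np.exp(box_blur(np.log(img + eps), k))
    return img / bias, bias

# synthetic check: constant "tissue" times a slowly varying bias field
x = np.arange(64, dtype=float)
bias_true = 1.0 + 0.5 * np.sin(x / 20.0)        # varies along columns
img = 100.0 * np.tile(bias_true, (64, 1))
corrected, bias_est = correct_bias(img)
```

The corrected image is nearly flat, which is the behavior the coupled segmentation model exploits: once the bias is divided out, tissue classes become separable by intensity.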

  18. Image-Based Localization Aided Indoor Pedestrian Trajectory Estimation Using Smartphones

    PubMed Central

    Zhou, Yan; Zheng, Xianwei; Chen, Ruizhi; Xiong, Hanjiang; Guo, Sheng

    2018-01-01

    Accurately determining pedestrian location in indoor environments using consumer smartphones is a significant step in the development of ubiquitous localization services. Many different map-matching methods have been combined with pedestrian dead reckoning (PDR) to achieve low-cost and bias-free pedestrian tracking. However, this works only in areas with dense map constraints and the error accumulates in open areas. In order to achieve reliable localization without map constraints, an improved image-based localization aided pedestrian trajectory estimation method is proposed in this paper. The image-based localization recovers the pose of the camera from the 2D-3D correspondences between the 2D image positions and the 3D points of the scene model, previously reconstructed by a structure-from-motion (SfM) pipeline. This enables us to determine the initial location and eliminate the accumulative error of PDR when an image is successfully registered. However, the image is not always registered since the traditional 2D-to-3D matching rejects more and more correct matches when the scene becomes large. We thus adopt a robust image registration strategy that recovers initially unregistered images by integrating 3D-to-2D search. In the process, the visibility and co-visibility information is adopted to improve the efficiency when searching for the correspondences from both sides. The performance of the proposed method was evaluated through several experiments and the results demonstrate that it can offer highly acceptable pedestrian localization results in long-term tracking, with an error of only 0.56 m, without the need for dedicated infrastructures. PMID:29342123
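
A minimal sketch of the PDR update and the image-based reset described above (step length, headings, and the fix coordinates are made-up values):

```python
import math

def pdr_step(pos, heading_rad, step_len=0.7):
    """One pedestrian dead reckoning (PDR) update: advance one detected step
    along the current heading (x = east, y = north)."""
    x, y = pos
    return (x + step_len * math.sin(heading_rad),
            y + step_len * math.cos(heading_rad))

# accumulate four steps: two heading north, then two heading east
pos = (0.0, 0.0)
for heading in (0.0, 0.0, math.pi / 2, math.pi / 2):
    pos = pdr_step(pos, heading)

# whenever an image is successfully registered against the SfM model, its
# absolute pose replaces the PDR estimate, eliminating the accumulated drift
image_fix = (1.45, 1.38)   # hypothetical image-based localization result
pos = image_fix
```

Between image registrations the error grows with every step, which is why the paper's robust 2D-3D/3D-2D matching matters: each successful registration re-anchors the trajectory.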

  19. Automatic bone outer contour extraction from B-mode ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method of capturing the internal structures of the human body. However, bone segmentation in US images remains challenging because of strong speckle noise and poor image quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step toward three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is first extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US image using a first-phase-features searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
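
The quadratic-polynomial hole filling step can be sketched as follows (function name and the sample contour are illustrative; a single global quadratic stands in for whatever local windowing the authors use):

```python
import numpy as np

def fill_contour_gaps(cols, rows):
    """Fill contour pixels missed by the per-column boundary search: fit a
    quadratic to the detected (column, row) points (np.nan marks failures)
    and evaluate it at the missing columns."""
    cols = np.asarray(cols, dtype=float)
    rows = np.asarray(rows, dtype=float)
    ok = ~np.isnan(rows)
    coeffs = np.polyfit(cols[ok], rows[ok], 2)     # quadratic polynomial fit
    filled = rows.copy()
    filled[~ok] = np.polyval(coeffs, cols[~ok])    # estimate the missing pixels
    return filled

cols = np.arange(7)
rows = [5.5, 3.0, np.nan, 1.0, np.nan, 3.0, 5.5]   # search failed at columns 2, 4
filled = fill_contour_gaps(cols, rows)             # gaps filled with 1.5, 1.5
```

Because the fill values come from a smooth polynomial through the detected pixels, continuity of the contour path is guaranteed by construction.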

  20. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

In this paper, we present a robust method for automatic object detection and delineation in noisy, complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering with geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a very attractive theoretical maximal complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines, and an affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy; this avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of the microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor with an associated vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
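
The paper's enhancement step is described as nonlinear unsharp masking; since the nonlinear variant is not specified here, the classical linear form conveys the idea (a box blur stands in for the smoothing kernel):

```python
import numpy as np

def unsharp_mask(img, k=5, amount=1.5):
    """Classical unsharp masking: out = img + amount * (img - blur(img)).
    A separable box blur with edge padding stands in for the smoothing kernel."""
    kern = np.ones(k) / k
    pad = k // 2
    def blur1d(a):
        return np.convolve(np.pad(a, pad, mode="edge"), kern, mode="valid")
    low = np.apply_along_axis(blur1d, 0, np.apply_along_axis(blur1d, 1, img))
    return img + amount * (img - low)

# a step edge gains over- and undershoot, i.e. enhanced local contrast
img = np.tile(np.where(np.arange(32) < 16, 10.0, 20.0), (32, 1))
sharp = unsharp_mask(img)
```

The amplified high-pass component is what makes faint structures such as microcalcifications easier for the subsequent clustering stage to separate.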

  1. Behavior of an MBT waste in monotonic triaxial shear tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Athma Ram, E-mail: athma.bhandari@beg.utexas.edu; Powrie, William, E-mail: w.powrie@soton.ac.uk

    2013-04-15

Highlights: ► We studied the stress–strain–strength characteristics of an MBT waste. ► Rate of mobilization of strength with strain depends on initial density. ► Image analysis technique was used to determine whole-specimen displacement fields. ► Initial mode of deformation of a loose specimen is one-dimensional compression. ► Reinforcing elements enhance the resistance to lateral and volumetric deformation. - Abstract: Legislation in some parts of the world now requires municipal solid waste (MSW) to be processed prior to landfilling to reduce its biodegradability and hence its polluting potential through leachate and fugitive emission of greenhouse gases. This pre-processing may be achieved through what is generically termed mechanical–biological treatment (MBT). One of the major concerns relating to MBT wastes is that the strength of the material may be less than for raw MSW, owing to the removal of sheet, stick and string-like reinforcing elements during processing. Also, the gradual increase in mobilized strength over strains of 30% or so commonly associated with unprocessed municipal solid waste may not occur with treated wastes. This paper describes a series of triaxial tests carried out to investigate the stress–strain–strength characteristics of an MBT waste, using a novel digital image analysis technique for the determination of detailed displacement fields over the whole specimen. New insights gained into the mechanical behavior of MBT waste include the effect of density on the stress–strain response, the initial 1-D compression of lightly consolidated specimens, and the likely reinforcing effect of small sheet-like particles remaining in the waste.

  2. Automatic joint alignment measurements in pre- and post-operative long leg standing radiographs.

    PubMed

    Goossen, A; Weber, G M; Dries, S P M

    2012-01-01

    For diagnosis or treatment assessment of knee joint osteoarthritis it is required to measure bone morphometry from radiographic images. We propose a method for automatic measurement of joint alignment from pre-operative as well as post-operative radiographs. In a two step approach we first detect and segment any implants or other artificial objects within the image. We exploit physical characteristics and avoid prior shape information to cope with the vast amount of implant types. Subsequently, we exploit the implant delineations to adapt the initialization and adaptation phase of a dedicated bone segmentation scheme using deformable template models. Implant and bone contours are fused to derive the final joint segmentation and thus the alignment measurements. We evaluated our method on clinical long leg radiographs and compared both the initialization rate, corresponding to the number of images successfully processed by the proposed algorithm, and the accuracy of the alignment measurement. Ground truth has been generated by an experienced orthopedic surgeon. For comparison a second reader reevaluated the measurements. Experiments on two sets of 70 and 120 digital radiographs show that 92% of the joints could be processed automatically and the derived measurements of the automatic method are comparable to a human reader for pre-operative as well as post-operative images with a typical error of 0.7° and correlations of r = 0.82 to r = 0.99 with the ground truth. The proposed method allows deriving objective measures of joint alignment from clinical radiographs. Its accuracy and precision are on par with a human reader for all evaluated measurements.

  3. Right hemisphere performance and competence in processing mental images, in a case of partial interhemispheric disconnection.

    PubMed

    Blanc-Garin, J; Faure, S; Sabio, P

    1993-05-01

    The objective of this study was to analyze dynamic aspects of right hemisphere implementation in processing visual images. Two tachistoscopic, divided visual field experiments were carried out on a partial split-brain patient with no damage to the right hemisphere. In the first experiment, image generation performance for letters presented in the right visual field (/left hemisphere) was undeniably optimal. In the left visual field (/right hemisphere), performance was no better than chance level at first, but then improved dramatically across stimulation blocks, in each of five successive sessions. This was interpreted as revealing the progressive spontaneous activation of the right hemisphere's competence not shown initially. The aim of the second experiment was to determine some conditions under which this pattern was obtained. The experimental design contrasted stimuli (words and pictures) and representational activity (phonologic and visuo-imaged processing). The right visual field (/left hemisphere: LH) elicited higher performance than the left visual field (/right hemisphere, RH) in the three situations where verbal activity was required. No superiority could be found when visual images were to be generated from pictures: parallel and weak improvement of both hemispheres was observed across sessions. Two other patterns were obtained: improvement in RH performance (although LH performance remained superior) and an unexpectedly large decrease in RH performance. These data are discussed in terms of RH cognitive competence and hemisphere implementation.

  4. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

Motion due to digital camera movement during the image capture process is a major factor degrading image quality, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the Point Spread Function (PSF). A popular technique for estimating the PSF relies on a pair of gyroscopic sensors to measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the limited precision of gyro-sensor measurements impede the achievement of a good-quality restored image. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signals of the two gyro sensors. The PSF coefficients are then updated using 2D Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. The refined PSF is used to process the blurred image using known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation, and the quality of the restored image is improved compared to the gyro-only approach and to blind image deconvolution results.
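
A 1D stand-in for the 2D-LMS refinement described above (the coarse-to-fine grid is omitted; signal sizes and the step size mu are illustrative):

```python
import numpy as np

def lms_refine_psf(sharp, blurred, psf0, mu=0.01, epochs=200):
    """Refine a PSF estimate by LMS: predict each blurred sample as the dot
    product of the current PSF with the recent sharp samples, then nudge the
    coefficients along the error gradient (1D stand-in for the 2D-LMS)."""
    h = np.asarray(psf0, dtype=float).copy()
    k = len(h)
    for _ in range(epochs):
        for n in range(k - 1, len(sharp)):
            x = sharp[n - k + 1:n + 1][::-1]   # newest sample first
            e = blurred[n] - h @ x             # prediction error
            h += mu * e * x                    # LMS coefficient update
    return h

rng = np.random.default_rng(1)
sharp = rng.standard_normal(400)               # stand-in for under-exposed data
true_psf = np.array([0.2, 0.5, 0.3])           # motion blur to be recovered
blurred = np.convolve(sharp, true_psf)[:400]   # stand-in for blurred data
est_psf = lms_refine_psf(sharp, blurred, psf0=[1 / 3, 1 / 3, 1 / 3])
```

Starting the adaptation from a gyro-derived PSF instead of a flat guess, as the paper does, shortens convergence and keeps the support close to the true blur.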

  5. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally-acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis for extracting different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and the study of MU recruitment strategies.
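
The abstract does not detail its image-processing algorithm; a common baseline for the same quantity is delay estimation by cross-correlation between adjacent channels (sampling rate, electrode spacing, and the synthetic MUAP below are made-up values):

```python
import numpy as np

def conduction_velocity(ch1, ch2, fs, electrode_dist_m):
    """Estimate CV from the lag maximizing the cross-correlation between two
    adjacent S-EMG channels: CV = distance / (lag / fs)."""
    xc = np.correlate(ch2, ch1, mode="full")
    lag = int(np.argmax(xc)) - (len(ch1) - 1)   # delay of ch2 vs ch1, in samples
    return electrode_dist_m * fs / lag

fs = 2048.0                                     # Hz (assumed sampling rate)
d = 0.005                                       # m (inter-electrode distance)
t = np.arange(1024) / fs
ch1 = np.exp(-((t - 0.1) / 0.002) ** 2)         # synthetic MUAP on channel 1
ch2 = np.roll(ch1, 3)                           # same MUAP, delayed 3 samples
cv = conduction_velocity(ch1, ch2, fs, d)       # 0.005 * 2048 / 3 ≈ 3.41 m/s
```

Like the MLE approach, this estimator assumes a known propagation direction, which is one of the limitations the paper's image-based method is designed to avoid.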

  6. High-quality infrared imaging with graphene photodetectors at room temperature.

    PubMed

    Guo, Nan; Hu, Weida; Jiang, Tao; Gong, Fan; Luo, Wenjin; Qiu, Weicheng; Wang, Peng; Liu, Lu; Wu, Shiwei; Liao, Lei; Chen, Xiaoshuang; Lu, Wei

    2016-09-21

Graphene, a two-dimensional material, is expected to enable broad-spectrum and high-speed photodetection because of its gapless band structure, ultrafast carrier dynamics and high mobility. We demonstrate multispectral active infrared imaging using a graphene photodetector based on hybrid response mechanisms at room temperature. High-quality images with optical resolutions of 418 nm, 657 nm and 877 nm, and close-to-theoretical-limit Michelson contrasts of 0.997, 0.994, and 0.996, were acquired in imaging measurements with 565 nm, 1550 nm, and 1815 nm light, respectively, using an unbiased graphene photodetector. Importantly, by carefully analyzing the results of Raman mapping and numerical simulations of the response process, the formation of hybrid photocurrents in graphene detectors is attributed to the synergistic action of photovoltaic and photo-thermoelectric effects. This initial application to infrared imaging will help promote the development of high-performance graphene-based infrared multispectral detectors.

  7. Development of a Mobile User Interface for Image-based Dietary Assessment.

    PubMed

    Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2010-12-31

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.

  8. Ultrasound image edge detection based on a novel multiplicative gradient and Canny operator.

    PubMed

    Zheng, Yinfei; Zhou, Yali; Zhou, Hao; Gong, Xiaohong

    2015-07-01

To achieve fast and accurate segmentation of ultrasound images, a novel edge detection method for speckle-noised ultrasound images was proposed, based on the traditional Canny operator and a novel multiplicative gradient operator. The proposed technique combines a new multiplicative gradient operator of non-Newtonian type with the traditional Canny operator to generate the initial edge map, which is subsequently optimized in a following edge tracing step. To verify the proposed method, we compared it with several other edge detection methods with good robustness to noise, in experiments on simulated and in vivo medical ultrasound images. Experimental results showed that the proposed algorithm is fast enough for real-time processing and achieves an edge detection accuracy of 75% or more. Thus, the proposed method is well suited for fast and accurate edge detection of medical ultrasound images. © The Author(s) 2014.
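
The exact non-Newtonian multiplicative gradient is not specified in the abstract; a generic ratio-based gradient conveys why multiplicative operators suit speckle (1D, with an illustrative window size):

```python
import numpy as np

def ratio_gradient_1d(row, w=3, eps=1e-6):
    """Multiplicative (ratio-based) gradient of an intensity profile: the
    absolute log-ratio of mean intensities left and right of each pixel.
    Ratios are invariant to multiplicative speckle, unlike differences."""
    g = np.zeros(len(row))
    for i in range(w, len(row) - w):
        left = row[i - w:i].mean() + eps
        right = row[i:i + w].mean() + eps
        g[i] = abs(np.log(right / left))
    return g

row = np.array([10.0] * 10 + [40.0] * 10)   # step edge at index 10
g = ratio_gradient_1d(row)                  # peak response at the edge
```

Scaling the whole profile (as multiplicative speckle does locally) leaves the response unchanged, whereas a difference-based gradient would scale with it.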

  9. A scene-analysis approach to remote sensing. [San Francisco, California

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.

    1978-01-01

The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensor model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.

  10. Optical diagnostics of turbulent mixing in explosively-driven shock tube

    NASA Astrophysics Data System (ADS)

    Anderson, James; Hargather, Michael

    2016-11-01

    Explosively-driven shock tube experiments were performed to investigate the turbulent mixing of explosive product gases and ambient air. A small detonator initiated Al / I2O5 thermite, which produced a shock wave and expanding product gases. Schlieren and imaging spectroscopy were applied simultaneously along a common optical path to identify correlations between turbulent structures and spatially-resolved absorbance. The schlieren imaging identifies flow features including shock waves and turbulent structures while the imaging spectroscopy identifies regions of iodine gas presence in the product gases. Pressure transducers located before and after the optical diagnostic section measure time-resolved pressure. Shock speed is measured from tracking the leading edge of the shockwave in the schlieren images and from the pressure transducers. The turbulent mixing characteristics were determined using digital image processing. Results show changes in shock speed, product gas propagation, and species concentrations for varied explosive charge mass. Funded by DTRA Grant HDTRA1-14-1-0070.

  11. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data, based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of the CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for the cadaver and patient images, respectively. The average registration time over 60 trials was 4.4 seconds. The patient study demonstrated the robustness of the proposed algorithm against moderate anatomical deformation.
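
The similarity metric being optimized is not named in the abstract; normalized cross-correlation is one common choice for rendered-versus-real registration and can be sketched as follows (all data here is synthetic):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between an endoscopic frame and a
    CT-rendered prediction; 1.0 indicates a perfect match up to gain/offset."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

rng = np.random.default_rng(2)
frame = rng.random((64, 64))              # stand-in for the endoscopic image
render_good = 0.5 * frame + 0.2           # same structure, different exposure
render_bad = rng.random((64, 64))         # rendering from a wrong pose
s_good, s_bad = ncc(frame, render_good), ncc(frame, render_bad)
```

Because the metric is cheap and independent per candidate pose, it parallelizes naturally across GPU threads, which is what makes the paper's large stochastic search tractable.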

  12. Applying shot boundary detection for automated crystal growth analysis during in situ transmission electron microscope experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moeglein, W. A.; Griswold, R.; Mehdi, B. L.

    In-situ (scanning) transmission electron microscopy (S/TEM) is being developed for numerous applications in the study of nucleation and growth under electrochemical driving forces. For this type of experiment, one of the key parameters is to identify when nucleation initiates. Typically, identifying the moment that crystals begin to form is a manual process requiring the user to observe and respond accordingly (adjust focus or magnification, translate the stage, etc.). However, as the speed of the cameras used to perform these observations increases, the ability of a user to “catch” the important initial stage of nucleation decreases (there is more information available in the first few milliseconds of the process). Here we show that video shot boundary detection (SBD) can automatically detect frames where a change in the image occurs. We show that this method can be applied to quickly and accurately identify points of change during crystal growth. This technique allows for automated segmentation of a digital stream for further analysis and the assignment of arbitrary time stamps for the initiation of processes, independent of the user’s ability to observe and react.
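    Shot boundary detection of the kind described can be sketched with a simple histogram-difference detector (an illustrative stand-in, not the authors' implementation): flag any frame whose gray-level histogram differs from its predecessor by more than a threshold.

```python
def histogram(frame, bins=8, lo=0, hi=256):
    """Gray-level histogram of a flat list of pixel values."""
    h = [0] * bins
    w = (hi - lo) / bins
    for p in frame:
        h[min(int((p - lo) / w), bins - 1)] += 1
    return h

def shot_boundaries(frames, threshold):
    """Frame indices where the histogram distance to the previous frame
    (sum of absolute bin differences) exceeds `threshold`."""
    out = []
    for i in range(1, len(frames)):
        d = sum(abs(a - b) for a, b in
                zip(histogram(frames[i - 1]), histogram(frames[i])))
        if d > threshold:
            out.append(i)
    return out

# Synthetic stream: uniform frames, then bright 'crystals' appear at frame 3.
frames = [[10] * 100] * 3 + [[10] * 80 + [240] * 20] * 3
boundaries = shot_boundaries(frames, threshold=20)
```

    The flagged index provides the time stamp at which nucleation-like change begins, independent of operator reaction time.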

  13. The effect of initial pressure on growth of FeNPs in amorphous carbon films

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Fatemeh; Shafiekhani, Azizollah; Sebt, S. Ali; Darabi, Elham

    2018-04-01

    Iron nanoparticles in amorphous hydrogenated carbon films (FeNPs@a-C:H) were prepared with RF-sputtering and RFPECVD methods by acetylene gas and Fe target. In this paper, deposition and sputtering process were carried out under influence of different initial pressure gas. The morphology and roughness of surface of samples were studied by AFM technique and also TEM images show the exact size of FeNPs and encapsulated FeNPs@a-C:H. The localized surface plasmon resonance peak (LSPR) of FeNPs was studied using UV-vis absorption spectrum. The results show that the intensity and position of LSPR peak are increased by increasing initial pressure. Also, direct energy gap of samples obtained by Tauc law is decreased with respect to increasing initial pressure.

  14. Processing of 3-Dimensional Flash Lidar Terrain Images Generated From an Airborne Platform

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Pierrottet, Diego; Amzajerdian, Farzin; Busch, George; Vanek, Michael; Reisse, Robert

    2009-01-01

    Data from the first Flight Test of the NASA Langley Flash Lidar system have been processed. Results of the analyses are presented and discussed. A digital elevation map of the test site is derived from the data, and is compared with the actual topography. The set of algorithms employed, starting from the initial data sorting, and continuing through to the final digital map classification is described. The accuracy, precision, and the spatial and angular resolution of the method are discussed.

  15. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.

    PubMed

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto

    2015-05-01

    To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track the tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary when the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve using the tracking result from the previous frame and reuses the LSM to detect the tumor boundary in the next frame, so that the tracking process can continue without user intervention. The tracking algorithm is tested on three image datasets: a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and the VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method in both tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves higher accuracy in tumor localization.
    In this paper, the authors presented a feasibility study of tracking the tumor boundary in EPID images using an LSM-based algorithm. Experimental results on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor targets. Compared with previous tracking methods, the authors' algorithm has the potential to improve tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery and dose evaluation.
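    The region-scalable fitting idea behind the LSM can be illustrated in one dimension (a simplified, piecewise-constant Chan-Vese-style energy, not the authors' exact functional): the energy is smallest when the curve (here, a binary mask) separates the bright tumor from the dark background.

```python
def fitting_energy(image, mask):
    """Sum of squared deviations from the mean inside and outside the mask;
    the boundary that matches the object minimizes this energy."""
    inside = [p for p, m in zip(image, mask) if m]
    outside = [p for p, m in zip(image, mask) if not m]
    e = 0.0
    for region in (inside, outside):
        if region:
            mu = sum(region) / len(region)
            e += sum((p - mu) ** 2 for p in region)
    return e

# 1-D 'image': bright tumor (value 9) on a dark background (value 1).
image = [1, 1, 1, 9, 9, 9, 1, 1]
true_mask = [0, 0, 0, 1, 1, 1, 0, 0]   # aligned with the tumor
off_mask = [0, 1, 1, 1, 0, 0, 0, 0]    # misaligned initial curve
```

    In the full method the mask is the interior of an evolving level-set curve, and the gradient of such an energy drives the curve toward the boundary.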

  16. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoyong, E-mail: xiaoyong@ieee.org; Homma, Noriyasu, E-mail: homma@ieee.org; Ichiji, Kei, E-mail: ichiji@yoshizawa.ecei.tohoku.ac.jp

    2015-05-15

    Purpose: To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. Methods: A level set method (LSM)-based algorithm is developed to track the tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary when the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve using the tracking result from the previous frame and reuses the LSM to detect the tumor boundary in the next frame, so that the tracking process can continue without user intervention. The tracking algorithm is tested on three image datasets: a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. Results: For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and the VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors’ proposed method in both tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves higher accuracy in tumor localization.
    Conclusions: In this paper, the authors presented a feasibility study of tracking the tumor boundary in EPID images using an LSM-based algorithm. Experimental results on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor targets. Compared with previous tracking methods, the authors’ algorithm has the potential to improve tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery and dose evaluation.

  17. Three-dimensional reconstruction of rat knee joint using episcopic fluorescence image capture.

    PubMed

    Takaishi, R; Aoyama, T; Zhang, X; Higuchi, S; Yamada, S; Takakuwa, T

    2014-10-01

    Development of the knee joint was morphologically investigated, and the process of cavitation was analyzed using episcopic fluorescence image capture (EFIC) to create spatial and temporal three-dimensional (3D) reconstructions. Knee joints of Wistar rat embryos between embryonic day (E)14 and E20 were investigated. Samples were sectioned and visualized using EFIC. Two-dimensional image stacks were then reconstructed using OsiriX software, and 3D reconstructions were generated using Amira software. Cavitations of the knee joint were constructed from five divided portions. Cavity formation initiated at multiple sites at E17; among them, the femoropatellar cavity (FPC) was the first. Cavitations of the medial side preceded those of the lateral side. Each cavity connected at E20, when cavitations around the anterior cruciate ligament (ACL) and posterior cruciate ligament (PCL) were completed. Cavity formation initiated from six portions, and in each portion development proceeded asymmetrically. These results concerning the anatomical development of the knee joint using EFIC contribute to a better understanding of the structural features of the knee joint. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  18. Experimental results in autonomous landing approaches by dynamic machine vision

    NASA Astrophysics Data System (ADS)

    Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.

    1994-07-01

    The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was initially developed, it is also used for data fusion from a range of sensors. By prediction-error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control commands are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin-turboprop Do128 aircraft under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision, with two cameras of different focal lengths mounted fixed relative to each other on a two-axis platform for viewing-direction control.
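    The prediction-error feedback loop at the heart of the 4-D approach can be caricatured with a scalar alpha-beta tracker (an illustrative stand-in for the full spatio-temporal state estimator; the gains and data are arbitrary):

```python
def track(measurements, dt, alpha=0.5, beta=0.2):
    """Alpha-beta filter: predict the state from an internal model, then
    correct it by a fraction of the prediction error (the innovation)."""
    x, v = measurements[0], 0.0   # position and velocity estimates
    estimates = []
    for z in measurements:
        x_pred = x + v * dt       # model prediction
        r = z - x_pred            # prediction error
        x = x_pred + alpha * r    # servo-maintain the internal state
        v = v + (beta / dt) * r
        estimates.append(x)
    return estimates

# Noise-free ramp (constant closing speed): the estimate locks on.
est = track([2.0 * i for i in range(50)], dt=1.0)
```

    The full system does the same with a vector state (aircraft pose relative to the runway) and image-derived measurements; control commands are read off the maintained internal state.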

  19. Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes

    NASA Astrophysics Data System (ADS)

    Huang, Chi-Chieh

    The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes are the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimal chromatic aberration, a wide-angle field of view (FOV), high sensitivity to light, and superb acuity to motion. Inspired by this remarkable visual system, we implemented its unique lens-free, reflection-based imaging mechanism in a miniaturized, large-FOV optical imaging device operating across the wide visible spectrum to minimize chromatic aberration without any additional post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed to realize life-sized artificial RSCEs. The processes exploited the differential adhesive tendencies of the microstructures formed between a donor and a transfer substrate to accomplish an efficient release and transfer process. These techniques enabled conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology, onto a wide range of surfaces with complex and curvilinear shapes. The final part of this dissertation focused on implementing the key operational features of the natural RSCEs in large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the wide visible spectrum. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding the chromatic aberration caused by dispersion in optical materials.
    Compared to the performance of conventional refractive lenses of comparable size, our devices demonstrated minimal chromatic aberration, an exceptional FOV of up to 165° without distortion, modest spherical aberration, and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our devices possessed enhanced, dynamic motion-tracking capability ideal for diverse applications in military, security, search and rescue, night navigation, medical imaging, and astronomy. In the future, owing to its reflection-based operating principles, the approach can be further extended into the mid- and far-infrared for more demanding applications.

  20. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionality. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
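    The block-volume idea can be sketched as follows (hypothetical helper names; the real platform adds size-adaptive blocking and task scheduling): split the volume into distributable blocks, run a per-block kernel in a worker pool, and reassemble the slices in order.

```python
from concurrent.futures import ThreadPoolExecutor

def split_blocks(volume, block_z):
    """Split a z-indexed volume (list of 2-D slices) into blocks."""
    return [volume[i:i + block_z] for i in range(0, len(volume), block_z)]

def process_block(block):
    """Per-block kernel: a simple threshold, standing in for any
    voxel-wise step of a 3-D image processing pipeline."""
    return [[[1 if v > 100 else 0 for v in row] for row in sl] for sl in block]

def run_parallel(volume, block_z=2, workers=4):
    blocks = split_blocks(volume, block_z)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        done = list(pool.map(process_block, blocks))  # map preserves order
    return [sl for block in done for sl in block]     # reassemble

# Tiny 6-slice volume of 2x2 'voxels'.
volume = [[[0, 200], [50, 150]] for _ in range(6)]
out = run_parallel(volume)
```

    Because blocks share no state, the same decomposition distributes across cores, cluster nodes, or cloud workers; only the executor changes.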

  1. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation technology, and classification is its most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture, and NDVI, together with spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation, are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data-level, feature-level, and decision-level, are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification; to promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
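    As a minimal illustration of the BPNN component (a generic one-hidden-layer back-propagation network trained on toy two-band 'spectra', not the paper's classifier or data):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    """Minimal one-hidden-layer back-propagation network."""
    def __init__(self, n_in, n_hidden, rng):
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]

    def forward(self, x):
        h = [sigmoid(sum(w * v for w, v in zip(ws, x + [1.0]))) for ws in self.w1]
        o = sigmoid(sum(w * v for w, v in zip(self.w2, h + [1.0])))
        return h, o

    def train(self, data, epochs, lr):
        for _ in range(epochs):
            for x, t in data:
                h, o = self.forward(x)
                do = (o - t) * o * (1 - o)           # output delta
                for j, hj in enumerate(h):           # hidden deltas + updates
                    dh = do * self.w2[j] * hj * (1 - hj)
                    for i, xi in enumerate(x + [1.0]):
                        self.w1[j][i] -= lr * dh * xi
                for j, hj in enumerate(h + [1.0]):
                    self.w2[j] -= lr * do * hj

    def loss(self, data):
        return sum((self.forward(x)[1] - t) ** 2 for x, t in data)

# Toy two-band 'spectra' for two classes.
data = [([0.1, 0.2], 0.0), ([0.2, 0.1], 0.0), ([0.9, 0.8], 1.0), ([0.8, 0.9], 1.0)]
net = BPNN(2, 3, random.Random(1))
before = net.loss(data)
net.train(data, epochs=500, lr=0.5)
after = net.loss(data)
```

    For real HRS data the input would be a full band vector (or a region feature vector, in the object-oriented case), with one output unit per land-cover class.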

  2. Shear Stress Sensing with Elastic Microfence Structures

    NASA Technical Reports Server (NTRS)

    Cisotto, Alexxandra; Palmieri, Frank L.; Saini, Aditya; Lin, Yi; Thurman, Christopher S; Kim, Jinwook; Kim, Taeyang; Connell, John W.; Zhu, Yong; Gopalarathnam, Ashok

    2015-01-01

    In this work, elastic microfences were generated for the purpose of measuring shear forces acting on a wind tunnel model. The microfences were fabricated in a two-part process involving laser ablation patterning, to generate a template in a polymer film, followed by soft lithography with a two-part silicone. Incorporation of a fluorescent dye was demonstrated as a method to enhance contrast between the sensing elements and the substrate. Sensing elements consisted of multiple microfences prepared at different orientations to enable determination of both shear force and directionality. Microfence arrays were integrated into an optical microscope with sub-micrometer resolution. Initial experiments were conducted on a flat-plate wind tunnel model. Both image stabilization algorithms and digital image correlation were utilized to determine the amount of fence deflection resulting from airflow. Initial free-jet experiments indicated that the microfences could be readily displaced and that this displacement could be recorded through the microscope.

  3. Path (un)predictability of two interacting cracks in polycarbonate sheets using Digital Image Correlation.

    PubMed

    Koivisto, J; Dalbe, M-J; Alava, M J; Santucci, S

    2016-08-31

    Crack propagation is tracked here with Digital Image Correlation analysis in the test case of two cracks propagating in opposite directions in polycarbonate, a material with high ductility and a large Fracture Process Zone (FPZ). Depending on the initial distance between the two crack tips, one may observe different complex crack paths, including in particular a regime where the two cracks repel each other before being attracted. We show by strain-field analysis how this can be understood according to the principle of local symmetry: propagation proceeds in the direction where the local shear (mode KII in fracture-mechanics language) is zero. Thus the interactions exhibited by the cracks arise from symmetry, from the initial geometry, and from the material properties that induce the FPZ. This complexity makes any long-range prediction of the path(s) impossible.

  4. NASA to Survey Earth's Resources

    NASA Technical Reports Server (NTRS)

    Mittauer, R. T.

    1971-01-01

    A wide variety of the natural resources of earth and man's management of them will be studied by an initial group of foreign and domestic scientists tentatively chosen by the National Aeronautics and Space Administration to analyze data to be gathered by two earth-orbiting spacecraft. The spacecraft are the first Earth Resources Technology Satellite (ERTS-A) and the manned Skylab which will carry an Earth Resources Experiment Package (EREP). In the United States, the initial experiments will study the feasibility of remote sensing from a satellite in gathering information on ecological problems. The objective of both ERTS and EREP aboard Skylab is to obtain multispectral images of the surface of the earth with high resolution remote sensors and to process and distribute the images to scientific users in a wide variety of disciplines. The ERTS-A, EREP, and Skylab systems are described and their operation is discussed.

  5. Dissociating motivational direction and affective valence: specific emotions alter central motor processes.

    PubMed

    Coombes, Stephen A; Cauraugh, James H; Janelle, Christopher M

    2007-11-01

    We aimed to clarify the relation between affective valence and motivational direction by specifying how central and peripheral components of extension movements are altered according to specific unpleasant affective states. As predicted, premotor reaction time was quicker for extension movements initiated during exposure to attack than for extension movements initiated during exposure to all other valence categories (mutilation, erotic couples, opposite-sex nudes, neutral humans, household objects, blank). Exposure to erotic couples and mutilations yielded greater peak force than exposure to images of attack, neutral humans, and household objects. Finally, motor reaction time and peak electromyographic amplitude were not altered by valence. These findings indicate that unpleasant states do not unilaterally prime withdrawal movements, and that the quick execution of extension movements during exposure to threatening images is due to rapid premotor, rather than motor, reaction time. Collectively, our findings support the call for dissociating motivational direction and affective valence.

  6. System for verifiable CT radiation dose optimization based on image quality. Part II. Process control system.

    PubMed

    Larson, David B; Malarik, Remo J; Hall, Seth M; Podberesky, Daniel J

    2013-10-01

    To evaluate the effect of an automated computed tomography (CT) radiation dose optimization and process control system on the consistency of estimated image noise and size-specific dose estimates (SSDEs) of radiation in CT examinations of the chest, abdomen, and pelvis. This quality improvement project was determined not to constitute human subject research. An automated system was developed to analyze each examination immediately after completion, and to report individual axial-image-level and study-level summary data for patient size, image noise, and SSDE. The system acquired data for 4 months beginning October 1, 2011. Protocol changes were made by using parameters recommended by the prediction application, and 3 months of additional data were acquired. Preimplementation and postimplementation mean image noise and SSDE were compared by using unpaired t tests and F tests. Common-cause variation was differentiated from special-cause variation by using a statistical process control individual chart. A total of 817 CT examinations, 490 acquired before and 327 acquired after the initial protocol changes, were included in the study. Mean patient age and water-equivalent diameter were 12.0 years and 23.0 cm, respectively. The difference between actual and target noise increased from -1.4 to 0.3 HU (P < .01) and the standard deviation decreased from 3.9 to 1.6 HU (P < .01). Mean SSDE decreased from 11.9 to 7.5 mGy, a 37% reduction (P < .01). The process control chart identified several special causes of variation. Implementation of an automated CT radiation dose optimization system led to verifiable simultaneous decrease in image noise variation and SSDE. The automated nature of the system provides the opportunity for consistent CT radiation dose optimization on a broad scale. © RSNA, 2013.
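    The individuals control chart used to separate special-cause from common-cause variation can be sketched as follows (standard I-chart formulas with the d2 = 1.128 moving-range constant; the data are invented for illustration):

```python
def control_limits(values):
    """Individuals (I) chart: center line and 3-sigma limits, with sigma
    estimated from the average moving range (sigma ~ MRbar / 1.128)."""
    n = len(values)
    center = sum(values) / n
    mrbar = sum(abs(values[i] - values[i - 1]) for i in range(1, n)) / (n - 1)
    sigma = mrbar / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

def special_causes(values):
    """Indices of points falling outside the control limits."""
    lo, _, hi = control_limits(values)
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# Per-examination image-noise readings (HU) with one out-of-control case.
noise = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 14.0, 10.1, 9.9, 10.0]
flagged = special_causes(noise)
```

    Points flagged this way prompt investigation of a special cause (e.g., a protocol deviation), while in-limit scatter is treated as common-cause variation of the process.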

  7. Improved 3D seismic images of dynamic deformation in the Nankai Trough off Kumano

    NASA Astrophysics Data System (ADS)

    Shiraishi, K.; Moore, G. F.; Yamada, Y.; Kinoshita, M.; Sanada, Y.; Kimura, G.

    2016-12-01

    In order to improve the seismic reflection image of dynamic deformation and seismogenic faults in the Nankai Trough, the 2006 Kumano 3D seismic dataset was reprocessed from the original field records by applying advanced technologies a decade after the data acquisition and initial processing. The 3D seismic survey revealed the geometry of the megasplay fault system. However, there were still unclear regions in the accretionary prism beneath the area from the Kumano basin to the outer ridge, because of sea-floor multiple reflections and noise caused by the Kuroshio current. For the next stage of deep scientific drilling into the Nankai Trough seismogenic zone, it is essential to know exactly the shape and depth of the megasplay and the fine structures around the drilling site. Three important improvements were achieved in data processing before imaging. First, full deghosting and optimized zero-phasing techniques recovered broadband signals, especially at low frequency, by compensating for ghost effects at both source and receiver and removing source bubbles. Second, the multiple reflections were better attenuated by applying advanced techniques in combination, and the strong noise caused by the Kuroshio current was carefully attenuated. Third, data regularization by means of optimized 4D trace interpolation was effective both in mitigating the non-uniform fold distribution and in improving data quality. Further imaging steps led to a clear improvement over previous results by applying PSTM with higher-order correction of VTI anisotropy, and PSDM based on a velocity model built by reflection tomography with TTI anisotropy. The final reflection images show new geological aspects, such as clear steeply dipping faults around the "notch" and fine-scale faults related to the main thrusts in the frontal thrust zone. The improved images will contribute greatly to understanding the deformation process in the old accretionary prism and the seismogenic features related to the megasplay faults.

  8. A Generic Ground Framework for Image Expertise Centres and Small-Sized Production Centres

    NASA Astrophysics Data System (ADS)

    Sellé, A.

    2009-05-01

    Initiated by the Pleiades Earth Observation program, CNES (the French Space Agency) has developed a generic collaborative framework for its image quality centre, highly customisable for any upcoming expertise centre. This collaborative framework has been designed to be used by a group of experts or scientists who want to share data and processing chains and manage interfaces with external entities. Its flexible and scalable architecture complies with the core requirements: defining a user data model with no impact on the software (generic data access), integrating user processing chains with a GUI builder and built-in APIs, and offering a scalable architecture that fits any performance requirement and accompanies growing projects. CNES has granted licences to two software companies that will be able to redistribute this framework to any customer.

  9. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is a combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms on triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).
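    The triangular-number indexing mentioned above can be sketched directly (a minimal version of the idea; the paper's mesh structure is more elaborate): pixels of a triangular region laid out row by row map to linear indices through triangular numbers, and the map inverts with an integer square root.

```python
import math

def tri(n):
    """n-th triangular number: pixel count of the first n triangle rows."""
    return n * (n + 1) // 2

def to_linear(row, col):
    """Local triangle coordinates (row, col), 0 <= col <= row,
    to a linear pixel index."""
    return tri(row) + col

def to_local(index):
    """Inverse mapping: linear pixel index back to (row, col)."""
    row = (math.isqrt(8 * index + 1) - 1) // 2
    return row, index - tri(row)

# Plain loops suffice for pixel access, as the abstract suggests:
pixels = [to_local(i) for i in range(tri(4))]  # every (row, col) of a 4-row triangle
```

    With this layout the pixels of any triangular element occupy a contiguous index range, which is what makes rectangular-style transforms applicable without reordering.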

  10. The Geoscience Spaceborne Imaging Spectroscopy Technical Committees Calibration and Validation Workshop

    NASA Technical Reports Server (NTRS)

    Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy

    2016-01-01

    Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. As with other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned for launch within the next five years, with the two most advanced scheduled for launch in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and are comparable between sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, together with a post-conference vicarious calibration field trip at Lake Lefroy in Western Australia.

  11. Design of a decision support system, trained on GPU, for assisting melanoma diagnosis in dermatoscopy images

    NASA Astrophysics Data System (ADS)

    Glotsos, Dimitris; Kostopoulos, Spiros; Lalissidou, Stella; Sidiropoulos, Konstantinos; Asvestas, Pantelis; Konstandinou, Christos; Xenogiannopoulos, George; Konstantina Nikolatou, Eirini; Perakis, Konstantinos; Bouras, Thanassis; Cavouras, Dionisis

    2015-09-01

    The purpose of this study was to design a decision support system for assisting the diagnosis of melanoma in dermatoscopy images. Clinical material comprised images of 44 dysplastic (Clark's nevi) and 44 malignant melanoma lesions, obtained from the dermatology database Dermnet. Initially, images were processed for hair removal and background correction using the Dull Razor algorithm. Processed images were segmented to isolate moles from the surrounding background, using a combination of level sets and an automated thresholding approach. Morphological (area, size, shape) and textural features (first and second order) were calculated from each segmented mole. Extracted features were fed to a pattern recognition system built around the probabilistic neural network classifier, which was trained to distinguish between benign and malignant cases using exhaustive feature search and the leave-one-out method. The system was implemented on a GPU (GeForce GTX 580) using the CUDA programming framework and the C++ programming language. Results showed that the designed system discriminated benign from malignant moles with 88.6% accuracy employing morphological and textural features. The proposed system could be used to analyse moles depicted in smartphone images, after appropriate retraining on smartphone image cases. This could assist early detection of melanoma if suspicious moles were captured on a smartphone by patients and transferred to the physician together with an assessment of the mole's nature.
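
    The classification scheme in this record, a probabilistic neural network (a Parzen-window density classifier) evaluated with the leave-one-out method, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' GPU implementation; the kernel width `sigma` and the feature values are assumed for demonstration.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network: a Gaussian Parzen-window density
    estimate per class; predict the class with the highest density at x."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

def leave_one_out_accuracy(X, y, sigma=0.5):
    """Leave-one-out evaluation: classify each sample with it held out."""
    hits = sum(
        pnn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], sigma) == y[i]
        for i in range(len(y))
    )
    return hits / len(y)
```

In practice the exhaustive feature search the record mentions would wrap this evaluation in a loop over feature subsets, keeping the subset with the highest leave-one-out accuracy.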

  12. Assessing the use of an infrared spectrum hyperpixel array imager to measure temperature during additive and subtractive manufacturing

    NASA Astrophysics Data System (ADS)

    Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn

    2016-05-01

    Accurate non-contact temperature measurement is important for optimizing manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Accurate single-wavelength thermography faces numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm that converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or to follow a relationship typical of metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as from a machining process. In addition, the effects of motion blur, which occurs in both additive and subtractive manufacturing processes, are analyzed. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, producing a motion blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal for developing and validating simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of the measured target.
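
    The core fitting step described here, solving for a best-fit temperature and emissivity from a measured thermal spectrum under a greybody (wavelength-uniform emissivity) assumption, can be illustrated with a crude grid search over temperature: for each candidate temperature the best scalar emissivity has a closed form, so only the temperature needs searching. This is a hedged sketch, not the paper's multistep non-linear solver; the wavelength range and temperature grid are assumptions.

```python
import numpy as np

# CODATA constants: Planck, speed of light, Boltzmann
H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_m, T):
    """Blackbody spectral radiance at wavelength wl_m (meters), temp T (K)."""
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * K * T))

def fit_greybody(wl_m, radiance, T_grid):
    """Best-fit (temperature, emissivity) for a measured spectrum,
    assuming emissivity is uniform in wavelength (greybody)."""
    best = (None, None, np.inf)
    for T in T_grid:
        B = planck(wl_m, T)
        eps = np.dot(radiance, B) / np.dot(B, B)   # closed-form LSQ scale
        resid = np.sum((radiance - eps * B) ** 2)
        if resid < best[2]:
            best = (T, eps, resid)
    return best[:2]
```

Inspecting the residual as a function of temperature, as the paper's software does, indicates how well the greybody assumption holds for a given spectrum.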

  13. Quality initiatives: planning, setting up, and carrying out radiology process improvement projects.

    PubMed

    Tamm, Eric P; Szklaruk, Janio; Puthooran, Leejo; Stone, Danna; Stevens, Brian L; Modaro, Cathy

    2012-01-01

    In the coming decades, those who provide radiologic imaging services will be increasingly challenged by the economic, demographic, and political forces affecting healthcare to improve their efficiency, enhance the value of their services, and achieve greater customer satisfaction. It is essential that radiologists master and consistently apply basic process improvement skills that have allowed professionals in many other fields to thrive in a competitive environment. The authors provide a step-by-step overview of process improvement from the perspective of a radiologic imaging practice by describing their experience in conducting a process improvement project: to increase the daily volume of body magnetic resonance imaging examinations performed at their institution. The first step in any process improvement project is to identify and prioritize opportunities for improvement in the work process. Next, an effective project team must be formed that includes representatives of all participants in the process. An achievable aim must be formulated, appropriate measures selected, and baseline data collected to determine the effects of subsequent efforts to achieve the aim. Each aspect of the process in question is then analyzed by using appropriate tools (eg, flowcharts, fishbone diagrams, Pareto diagrams) to identify opportunities for beneficial change. Plans for change are then established and implemented with regular measurements and review followed by necessary adjustments in course. These so-called PDSA (planning, doing, studying, and acting) cycles are repeated until the aim is achieved or modified and the project closed.

  14. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process for advancing quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and is therefore more lightweight than traditional quality management processes. It focuses on quality criteria that are important at the given stage of the software life cycle, and the usage of tools that automate aspects of the process is emphasized. To evaluate the additional effort that comes with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality-refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  15. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  16. Automation of the Image Analysis for Thermographic Inspection

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1998-01-01

    Several data processing procedures for pulse thermal inspection require preliminary determination of an unflawed region. Typically, an initial analysis of the thermal images is performed by an operator to determine the locations of the unflawed and defective areas. In the present work, an algorithm is developed for automatically determining a reference point corresponding to an unflawed region. Results are obtained for defects that are arbitrarily located in the inspection region. A comparison is presented of the distributions of derived values with correct and incorrect localization of the reference point. Different algorithms for automatic determination of the reference point are compared.
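
    One simple way to automate the choice of an unflawed reference point is to pick the pixel whose temperature history is closest to the per-frame spatial median, since defective pixels deviate from the typical cooling behavior. This NumPy sketch is a hypothetical illustration of that general idea, not the authors' algorithm; the synthetic decay curve is an assumption.

```python
import numpy as np

def find_reference_pixel(stack):
    """Pick an 'unflawed' reference pixel from a (time, rows, cols)
    thermal image stack: the pixel whose cooling curve is closest
    to the per-frame spatial median curve (defects deviate from it)."""
    t, rows, cols = stack.shape
    curves = stack.reshape(t, -1)                       # (time, pixels)
    median_curve = np.median(curves, axis=1, keepdims=True)
    dist = np.sum((curves - median_curve) ** 2, axis=0)  # per-pixel deviation
    idx = int(np.argmin(dist))
    return idx // cols, idx % cols                      # (row, col)
```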

  17. Exploring the Largest Mass Fraction of the Solar System: the Case for Planetary Interiors

    NASA Technical Reports Server (NTRS)

    Danielson, L. R.; Draper, D.; Righter, K.; McCubbin, F.; Boyce, J.

    2017-01-01

    Why explore planetary interiors: The typical image that comes to mind for planetary science is that of a planet surface. And while surface data drive our exploration of evolved geologic processes, it is the interiors of planets that hold the key to planetary origins via accretionary and early differentiation processes. It is that initial setting of the bulk planet composition that sets the stage for all geologic processes that follow. But nearly all of the mass of planets is inaccessible to direct examination, making experimentation an absolute necessity for full planetary exploration.

  18. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    PubMed

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task for quantifying arterial wall characteristics such as shape, area, thickness, and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side-branch opening, and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. An integrated one-step system to extract, analyze and annotate all relevant information from image-based cell screening of chemical libraries.

    PubMed

    Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen

    2010-04-01

    Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating the chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy", providing automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel, fully automated method was validated by re-analyzing results from a high-content screening campaign involving 33,992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of the confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, as well as 40 new hits (14.9% of the total) that were originally false negatives. Ninety-six percent of true negatives were also correctly recognized. A web-based interface to the database, with customizable data retrieval and visualization tools, facilitates posterior analysis of the annotated cytological features, which allows identification of additional phenotypic profiles; thus, further analysis of the original crude images is not required.
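
    The naïve Bayes step, a classifier over concatenated chemical and phenotypic feature vectors, can be sketched with a minimal Gaussian variant. This is an illustrative NumPy implementation under an assumed Gaussian class-conditional model, not the screening platform's actual classifier.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: features are treated as
    conditionally independent given the class (illustrative only)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.logprior = np.log(
            np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # Log-likelihood log N(x | mu, var), summed over independent features
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None] - self.mu[:, None, :]) ** 2
                     / self.var[:, None, :]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior[:, None], axis=0)]
```

In the workflow described above, such a classifier would score the "fuzzy" images and feed its decisions back into the annotation database.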

  20. Iapetus: Unique Surface Properties and a Global Color Dichotomy from Cassini Imaging

    NASA Astrophysics Data System (ADS)

    Denk, Tilmann; Neukum, Gerhard; Roatsch, Thomas; Porco, Carolyn C.; Burns, Joseph A.; Galuba, Götz G.; Schmedemann, Nico; Helfenstein, Paul; Thomas, Peter C.; Wagner, Roland J.; West, Robert A.

    2010-01-01

    Since 2004, Saturn’s moon Iapetus has been observed repeatedly with the Imaging Science Subsystem of the Cassini spacecraft. The images show numerous impact craters down to the resolution limit of ~10 meters per pixel. Small, bright craters within the dark hemisphere indicate a dark blanket thickness on the order of meters or less. Dark, equator-facing and bright, poleward-facing crater walls suggest temperature-driven water-ice sublimation as the process responsible for local albedo patterns. Imaging data also reveal a global color dichotomy, wherein both dark and bright materials on the leading side have a substantially redder color than the respective trailing-side materials. This global pattern indicates an exogenic origin for the redder leading-side parts and suggests that the global color dichotomy initiated the thermal formation of the global albedo dichotomy.

  1. Iapetus: unique surface properties and a global color dichotomy from Cassini imaging.

    PubMed

    Denk, Tilmann; Neukum, Gerhard; Roatsch, Thomas; Porco, Carolyn C; Burns, Joseph A; Galuba, Götz G; Schmedemann, Nico; Helfenstein, Paul; Thomas, Peter C; Wagner, Roland J; West, Robert A

    2010-01-22

    Since 2004, Saturn's moon Iapetus has been observed repeatedly with the Imaging Science Subsystem of the Cassini spacecraft. The images show numerous impact craters down to the resolution limit of approximately 10 meters per pixel. Small, bright craters within the dark hemisphere indicate a dark blanket thickness on the order of meters or less. Dark, equator-facing and bright, poleward-facing crater walls suggest temperature-driven water-ice sublimation as the process responsible for local albedo patterns. Imaging data also reveal a global color dichotomy, wherein both dark and bright materials on the leading side have a substantially redder color than the respective trailing-side materials. This global pattern indicates an exogenic origin for the redder leading-side parts and suggests that the global color dichotomy initiated the thermal formation of the global albedo dichotomy.

  2. Effective and basic business strategic tools to overcome the DRA impact in outpatient imaging centers.

    PubMed

    Cerdena, Ernesto A; Corigliano, Barbara A

    2007-01-01

    The implementation of the Deficit Reduction Act (DRA) of 2005 has had adverse impacts on freestanding imaging centers and independent diagnostic testing facilities (IDTFs) throughout the nation, limiting patients' access to quality imaging and crippling organizations' bottom lines. Basic but effective strategic business tools should be formulated and executed to overcome the negative impact of the DRA. These should include creative and innovative process improvement initiatives that reduce operational costs and optimize staffing, thus improving profitability. Radiology administrators should act as facilitators to articulate and instill the mission, core values, and vision of the organization in the staff. Equally important, leaders in the imaging industry need to demonstrate a strong commitment to moving their centers toward excellence and effective business operations.

  3. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    DOE PAGES

    Willey, T. M.; Champley, K.; Hodgin, R.; ...

    2016-06-17

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. The work described here outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  4. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    NASA Astrophysics Data System (ADS)

    Willey, T. M.; Champley, K.; Hodgin, R.; Lauderbach, L.; Bagge-Hansen, M.; May, C.; Sanchez, N.; Jensen, B. J.; Iverson, A.; van Buuren, T.

    2016-06-01

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. This work outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ˜80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  5. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willey, T. M., E-mail: willey1@llnl.gov; Champley, K., E-mail: champley1@llnl.gov; Hodgin, R.

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. This work outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ∼80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  6. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willey, T. M.; Champley, K.; Hodgin, R.

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. The work described here outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  7. A SmallSat Approach for Global Imaging Spectroscopy of the Earth SYSTEM Enabled by Advanced Technology

    NASA Astrophysics Data System (ADS)

    Green, R. O.; Asner, G. P.; Thompson, D. R.; Mouroulis, P.; Eastwood, M. L.; Chien, S.

    2017-12-01

    Global coverage imaging spectroscopy in the solar reflected energy portion of the spectrum has been identified by the Earth Decadal Survey as an important measurement that enables a diverse set of new and time-critical science objectives/targets for the Earth system. These science objectives include biodiversity; ecosystem function; ecosystem biogeochemistry; initialization and constraint of global ecosystem models; fire fuel, combustion, burn severity, and recovery; surface mineralogy, geochemistry, geologic processes, soils, and hazards; global mineral dust source composition; cryospheric albedo, energy balance, and melting; coastal and inland water habitats; coral reefs; point source gas emission; cloud thermodynamic phase; urban system properties; and more. Traceability of these science objectives to spectroscopic measurement in the visible to short-wavelength infrared portion of the spectrum is summarized. New approaches to acquiring these global imaging spectroscopy measurements, including satellite constellations, are presented, drawing on recent advances in optical design, detector technology, instrument architecture, thermal control, on-board processing, data storage, and downlink.

  8. In-situ visual observation for the formation and dissociation of methane hydrates in porous media by magnetic resonance imaging.

    PubMed

    Zhao, Jiafei; Lv, Qin; Li, Yanghui; Yang, Mingjun; Liu, Weiguo; Yao, Lei; Wang, Shenglong; Zhang, Yi; Song, Yongchen

    2015-05-01

    In this work, magnetic resonance imaging (MRI) was employed to observe the in-situ formation and dissociation of methane hydrates in porous media. Methane hydrate was formed in a high-pressure cell with controlled temperature, and then the hydrate was dissociated by thermal injection. The process was imaged by MRI, and the pressure was recorded. The images confirmed that direct visual observation was achieved; they were then employed to provide detailed information on the nucleation, growth, and decomposition of the hydrate. Moreover, the saturation of methane hydrate during the dissociation was obtained from the MRI intensity data. Our results showed that the hydrate saturation initially decreased rapidly and then slowed down; this finding is in line with predictions based only on pressure. The study clearly showed that MRI is a useful technique to investigate the process of methane hydrate formation and dissociation in porous media. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. A CCD-based reader combined with CdS quantum dot-labeled lateral flow strips for ultrasensitive quantitative detection of CagA

    NASA Astrophysics Data System (ADS)

    Gui, Chen; Wang, Kan; Li, Chao; Dai, Xuan; Cui, Daxiang

    2014-02-01

    Immunochromatographic assays are widely used to detect many analytes. CagA has been shown to be closely associated with the initiation of gastric carcinoma. Here, we report the development of a charge-coupled device (CCD)-based test strip reader combined with CdS quantum dot-labeled lateral flow strips for quantitative detection of CagA. The reader uses a 365-nm ultraviolet LED as the excitation light source and captures test strip images through an acquisition module. The captured image is then transferred to a computer and processed by a software system. A revised weighted threshold histogram equalization (WTHE) image processing algorithm was applied to analyze the results. CdS quantum dot-labeled lateral flow strips for the detection of CagA were prepared. One hundred serum samples from clinical patients with gastric cancer and from healthy people were tested, demonstrating that the device can realize rapid, stable, point-of-care detection with a sensitivity of 20 pg/mL.
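
    The weighted threshold histogram equalization step can be sketched roughly as follows: clamp high histogram probabilities at an upper threshold, suppress those below a lower threshold, raise the rest to a power r, and then apply the usual cumulative-distribution remapping. This is a generic NumPy sketch of WTHE rather than the paper's revised variant, and the threshold and exponent values are assumed illustrative defaults.

```python
import numpy as np

def wthe(img, r=0.5, v=0.5, p_l=1e-4):
    """Weighted threshold histogram equalization (generic sketch) for an
    8-bit grayscale image: weight the histogram, then equalize via the CDF."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p_u = v * p.max()                                       # upper threshold
    # Mid-range probabilities are compressed with a power-law transform
    mid = np.clip((p - p_l) / (p_u - p_l), 0.0, 1.0) ** r * p_u
    w = np.where(p > p_u, p_u, np.where(p < p_l, 0.0, mid))
    lut = np.round(255.0 * np.cumsum(w) / w.sum()).astype(np.uint8)
    return lut[img]                                          # apply LUT
```

Relative to plain histogram equalization, the clamping keeps dominant background peaks from swamping the mapping, which is the property that makes the approach attractive for low-contrast strip images.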

  10. Interferometric Imaging Directly with Closure Phases and Closure Amplitudes

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Johnson, Michael D.; Bouman, Katherine L.; Blackburn, Lindy L.; Akiyama, Kazunori; Narayan, Ramesh

    2018-04-01

    Interferometric imaging now achieves angular resolutions as fine as ∼10 μas, probing scales that are inaccessible to single telescopes. Traditional synthesis imaging methods require calibrated visibilities; however, interferometric calibration is challenging, especially at high frequencies. Nevertheless, most studies present only a single image of their data after a process of “self-calibration,” an iterative procedure where the initial image and calibration assumptions can significantly influence the final image. We present a method for efficient interferometric imaging directly using only closure amplitudes and closure phases, which are immune to station-based calibration errors. Closure-only imaging provides results that are as noncommittal as possible and allows for reconstructing an image independently from separate amplitude and phase self-calibration. While closure-only imaging eliminates some image information (e.g., the total image flux density and the image centroid), this information can be recovered through a small number of additional constraints. We demonstrate that closure-only imaging can produce high-fidelity results, even for sparse arrays such as the Event Horizon Telescope, and that the resulting images are independent of the level of systematic amplitude error. We apply closure imaging to VLBA and ALMA data and show that it is capable of matching or exceeding the performance of traditional self-calibration and CLEAN for these data sets.
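
    The closure quantities this method relies on are simple to compute from a matrix of complex visibilities, and their immunity to station-based gain errors can be checked directly. The sketch below (with made-up visibilities and station gains) is illustrative and is not the authors' imaging code:

```python
import numpy as np

def closure_phase(V, i, j, k):
    """Closure phase of baseline triangle (i,j,k): the argument of the
    bispectrum V_ij * V_jk * V_ki, with V_ki = conj(V_ik)."""
    return np.angle(V[i, j] * V[j, k] * np.conj(V[i, k]))

def closure_amplitude(V, i, j, k, l):
    """Closure amplitude of quadrangle (i,j,k,l): |V_ij||V_kl| / (|V_ik||V_jl|)."""
    return (np.abs(V[i, j]) * np.abs(V[k, l])
            / (np.abs(V[i, k]) * np.abs(V[j, l])))
```

Corrupting the visibilities with arbitrary per-station complex gains, `V'_ij = g_i * conj(g_j) * V_ij`, multiplies the bispectrum by the real positive factor `|g_i g_j g_k|**2` and cancels entirely in the closure amplitude, so both quantities are unchanged; that invariance is what lets closure-only imaging bypass station calibration.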

  11. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang

    2013-01-15

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load because they all require an exhaustive search in the region of interest. The authors solve this problem by synergistic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. Results: The average root-mean-square errors of the real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The standard deviations of the results from the six researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking multiple markers presented here can achieve marker detection accuracy similar to manual detection, even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image-processing portion of the system latency and can therefore improve the performance of real-time motion compensation.
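
    The sequential-tracking stage, a mean-shift iteration over a per-pixel marker-likelihood map, can be sketched generically. The window size and the synthetic likelihood map below are assumptions; this is not the authors' discriminant-analysis pipeline, just the basic mean-shift update such tracking builds on.

```python
import numpy as np

def mean_shift(weights, start, half=8, iters=20):
    """Mean-shift localization on a 2-D weight (likelihood) map: repeatedly
    move a (2*half+1)^2 window to the weighted centroid of the pixels it
    covers, until the position stops changing."""
    r, c = start
    for _ in range(iters):
        r0, r1 = max(r - half, 0), min(r + half + 1, weights.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, weights.shape[1])
        w = weights[r0:r1, c0:c1]
        if w.sum() == 0:
            break                                   # no evidence in window
        rs, cs = np.mgrid[r0:r1, c0:c1]
        nr = int(round((rs * w).sum() / w.sum()))   # weighted centroid row
        nc = int(round((cs * w).sum() / w.sum()))   # weighted centroid col
        if (nr, nc) == (r, c):
            break                                   # converged
        r, c = nr, nc
    return r, c
```

Because each update only examines one small window, seeded from the previous frame's position, the per-frame cost is far below that of an exhaustive template search, which is the speedup the record describes.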

  12. Correlation between Charge Contrast Imaging and the Distribution of Some Trace Level Impurities in Gibbsite

    NASA Astrophysics Data System (ADS)

    Baroni, Travis C.; Griffin, Brendan J.; Browne, James R.; Lincoln, Frank J.

    2000-01-01

    Charge contrast images (CCI) of synthetic gibbsite obtained on an environmental scanning electron microscope give information on the crystallization process. Furthermore, X-ray mapping of the same grains shows that impurities are localized during the initial stages of growth and that the resulting composition images have features similar to those observed in CCI. This suggests a possible correlation between impurity distributions and the emission detected during CCI. X-ray line profiles, simulating the spatial distribution of impurities derived from the Monte Carlo program CASINO, have been compared with experimental line profiles and give an estimate of the localization. The model suggests that a main impurity, Ca, is depleted from the solution within approximately 3-4 μm of growth.

  13. Ab initio Simulation of Helium-Ion Microscopy Images: The Case of Suspended Graphene

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Miyamoto, Yoshiyuki; Rubio, Angel

    2012-12-01

    Helium ion microscopy (HIM), introduced in 2006 by Ward et al., provides nondestructive imaging of nanoscale objects with higher contrast than scanning electron microscopy. HIM measurement of suspended graphene under typical conditions is simulated by first-principles time-dependent density functional theory, and the 30 keV He+ collision is found to induce electron emission that depends on the impact point. This finding suggests the possibility of obtaining a highly accurate image of the honeycomb pattern of suspended graphene by HIM. Comparison with a simulation of He0 at the same kinetic energy shows that the electron emission is governed by impact ionization rather than by an Auger process initiated by neutralization of He+.

  14. Acute Severe Aortic Regurgitation: Imaging with Pathological Correlation.

    PubMed

    Janardhanan, Rajesh; Pasha, Ahmed Khurshid

    2016-03-01

    Acute aortic regurgitation (AR) is an important finding associated with a wide variety of disease processes, and its timely diagnosis is of utmost importance; delay in diagnosis can prove fatal. We describe a case of acute severe AR that was diagnosed in a timely manner using real-time three-dimensional (3D) transesophageal echocardiography (3D TEE). The 3D TEE images not only established the diagnosis but also closely matched the pathologic specimen. This sophisticated imaging modality, mostly available at tertiary centers, enabled the timely diagnosis that led to optimal management and saved the patient's life. Echocardiography, and especially 3D TEE, can diagnose AR very accurately. Surgical intervention is the definitive treatment, but medical therapy is used to stabilize the patient initially.

  15. Development of a semi-automated combined PET and CT lung lesion segmentation framework

    NASA Astrophysics Data System (ADS)

    Rossi, Farli; Mokri, Siti Salasiah; Rahni, Ashrani Aizzuddin Abd.

    2017-03-01

    Segmentation is one of the most important steps in automated medical diagnosis applications, and it affects the accuracy of the overall system. In this paper, we propose a semi-automated segmentation method for extracting lung lesions from thoracic PET/CT images by combining low-level processing and active contour techniques. The lesions are first segmented in the PET images, after conversion to standardised uptake values (SUVs). The segmented PET images then serve as initial contours for subsequent active-contour segmentation of the corresponding CT images. To evaluate accuracy, the Jaccard Index (JI) was used as a measure of agreement between the segmented lesion and alternative segmentations from the QIN lung CT segmentation challenge, made possible by registering the whole-body PET/CT images to the corresponding thoracic CT images. The results show that the proposed technique has acceptable accuracy in lung lesion segmentation, with JI values of around 0.8, especially when considering the variability of the alternative segmentations.
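
    The Jaccard Index used for evaluation in this record is straightforward to compute from two binary masks; a minimal sketch (the empty-union convention is an assumption):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard Index (intersection over union) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a convention)
    return float(np.logical_and(a, b).sum()) / float(union)
```

    A JI of around 0.8, as reported above, means the overlap covers 80% of the combined area of the two segmentations.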

  16. Development of tyrosinase-based reporter genes for preclinical photoacoustic imaging of mesenchymal stem cells

    NASA Astrophysics Data System (ADS)

    Märk, Julia; Ruschke, Karen; Dortay, Hakan; Schreiber, Isabelle; Sass, Andrea; Qazi, Taimoor; Pumberger, Matthias; Laufer, Jan

    2014-03-01

    The capability to image stem cells in vivo in small animal models over extended periods of time is important to furthering our understanding of the processes involved in tissue regeneration. Photoacoustic imaging is suited to this application as it can provide high-resolution (tens of microns), absorption-based images of superficial tissues (cm depths). However, stem cells are rare, highly migratory, and can divide into more specialised cells, so genetic labelling strategies are advantageous for their visualisation. In this study, methods were developed for the transfection and viral transduction of mesenchymal stem cells with reporter genes for the co-expression of tyrosinase and a fluorescent protein (mCherry). Initial photoacoustic imaging experiments on tyrosinase-expressing cells in small animal models of tissue regeneration were also conducted. Lentiviral transduction was shown to result in stable expression of tyrosinase and mCherry in mesenchymal stem cells. The results suggest that photoacoustic imaging using reporter genes is suitable for the study of stem cell driven tissue regeneration in small animals.

  17. Mechanical Model Analysis for Quantitative Evaluation of Liver Fibrosis Based on Ultrasound Tissue Elasticity Imaging

    NASA Astrophysics Data System (ADS)

    Shiina, Tsuyoshi; Maki, Tomonori; Yamakawa, Makoto; Mitake, Tsuyoshi; Kudo, Masatoshi; Fujimoto, Kenji

    2012-07-01

    Precise evaluation of the stage of chronic hepatitis C with respect to fibrosis has become an important issue to prevent the occurrence of cirrhosis and to initiate appropriate therapeutic intervention such as viral eradication using interferon. Ultrasound tissue elasticity imaging, i.e., elastography, can visualize tissue hardness/softness, and its clinical usefulness has been studied for detecting and evaluating tumors. We have recently reported that the texture of the elasticity image changes as fibrosis progresses. To evaluate fibrosis progression quantitatively on the basis of ultrasound tissue elasticity imaging, we introduced a mechanical model of fibrosis progression, simulated the process by which hepatic fibrosis affects elasticity images, and compared the results with those of clinical data analysis. As a result, it was confirmed that even in diffuse diseases like chronic hepatitis, the patterns of elasticity images are related to fibrous structural changes caused by hepatic disease and can be used to derive features for quantitative evaluation of fibrosis stage.

  18. Use of focus measure operators for characterization of flood illumination adaptive optics ophthalmoscopy image quality

    PubMed Central

    Alonso-Caneiro, David; Sampson, Danuta M.; Chew, Avenell L.; Collins, Michael J.; Chen, Fred K.

    2018-01-01

    Adaptive optics flood illumination ophthalmoscopy (AO-FIO) allows imaging of the cone photoreceptor in the living human retina. However, clinical interpretation of the AO-FIO image remains challenging due to suboptimal quality arising from residual uncorrected wavefront aberrations and rapid eye motion. An objective method of assessing image quality is necessary to determine whether an AO-FIO image is suitable for grading and diagnostic purposes. In this work, we explore the use of focus measure operators as a surrogate measure of AO-FIO image quality. A set of operators are tested on data sets acquired at different focal depths and different retinal locations from healthy volunteers. Our results demonstrate differences in focus measure operator performance in quantifying AO-FIO image quality. Further, we discuss the potential application of the selected focus operators in (i) selection of the best-quality AO-FIO image from a series of images collected at the same retinal location and (ii) assessment of longitudinal changes in the diseased retina. A focus function could be incorporated into real-time AO-FIO image processing to provide an initial automated quality assessment during image acquisition or reading-center grading. PMID:29552404
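
    One widely used focus measure operator of the kind tested in this record (not necessarily among those the authors selected) is the variance of the image Laplacian, which grows as fine detail sharpens; a minimal sketch:

```python
import numpy as np

def laplacian_focus_measure(image):
    """Variance-of-Laplacian focus measure: larger values = sharper image."""
    img = np.asarray(image, dtype=float)
    # 4-neighbour discrete Laplacian over interior pixels, via array shifts
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())
```

    Ranking a series of frames from the same retinal location by this score is one way to implement the best-image selection application described above.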

  19. Use of focus measure operators for characterization of flood illumination adaptive optics ophthalmoscopy image quality.

    PubMed

    Alonso-Caneiro, David; Sampson, Danuta M; Chew, Avenell L; Collins, Michael J; Chen, Fred K

    2018-02-01

    Adaptive optics flood illumination ophthalmoscopy (AO-FIO) allows imaging of the cone photoreceptor in the living human retina. However, clinical interpretation of the AO-FIO image remains challenging due to suboptimal quality arising from residual uncorrected wavefront aberrations and rapid eye motion. An objective method of assessing image quality is necessary to determine whether an AO-FIO image is suitable for grading and diagnostic purposes. In this work, we explore the use of focus measure operators as a surrogate measure of AO-FIO image quality. A set of operators are tested on data sets acquired at different focal depths and different retinal locations from healthy volunteers. Our results demonstrate differences in focus measure operator performance in quantifying AO-FIO image quality. Further, we discuss the potential application of the selected focus operators in (i) selection of the best-quality AO-FIO image from a series of images collected at the same retinal location and (ii) assessment of longitudinal changes in the diseased retina. A focus function could be incorporated into real-time AO-FIO image processing to provide an initial automated quality assessment during image acquisition or reading-center grading.

  20. Hard exudates segmentation based on learned initial seeds and iterative graph cut.

    PubMed

    Kusakunniran, Worapan; Wu, Qiang; Ritthipravat, Panrasee; Zhang, Jian

    2018-05-01

    (Background and Objective): The occurrence of hard exudates is one of the early signs of diabetic retinopathy, which is one of the leading causes of blindness. Many patients with diabetic retinopathy lose their vision because of late detection of the disease. Thus, this paper proposes a novel method for automatic segmentation of hard exudates in retinal images. (Methods): Existing methods are based on either supervised or unsupervised learning techniques. In addition, learned segmentation models may often cause missed and/or false detections of hard exudates, owing to the lack of rich characteristics, the intra-variations, and the similarity with other components in the retinal image. Thus, in this paper, supervised learning based on a multilayer perceptron (MLP) is used only to identify initial seeds with high confidence of being hard exudates. The segmentation is then finalized by unsupervised learning based on an iterative graph cut (GC) using clusters of the initial seeds. Also, in order to reduce color intra-variations of hard exudates across different retinal images, color transfer (CT) is applied to normalize their color information in the pre-processing step. (Results): The experiments and comparisons with other existing methods are based on two well-known datasets, e_ophtha EX and DIARETDB1. The proposed method outperforms the other existing methods in the literature, with a pixel-level sensitivity of 0.891 for the DIARETDB1 dataset and 0.564 for the e_ophtha EX dataset. Cross-dataset validation, where the training process is performed on one dataset and the testing process on another, is also evaluated in this paper, in order to illustrate the robustness of the proposed method. (Conclusions): This newly proposed method integrates supervised and unsupervised learning techniques.
It achieves improved performance when compared with the existing methods in the literature. The robustness of the proposed method in the cross-dataset scenario could enhance its practical usage: the trained model could be more practical for unseen data in real-world situations, especially when the capturing environments of training and testing images differ. Copyright © 2018 Elsevier B.V. All rights reserved.
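
    The pixel-level sensitivity figures reported in this record (0.891 and 0.564) are recall over ground-truth exudate pixels; a minimal sketch of the metric:

```python
import numpy as np

def pixel_sensitivity(pred_mask, gt_mask):
    """Pixel-level sensitivity (recall): TP / (TP + FN) over ground-truth pixels."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return float(tp) / float(tp + fn) if (tp + fn) else 1.0
```

    Note that sensitivity ignores false positives, which is why segmentation papers usually report it alongside a precision- or overlap-based measure.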

  1. Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2011-06-01

    With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect, due to wavefront measurement error and hardware restrictions, so a deconvolution algorithm is needed to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is used only as an initial value for our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
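
    The incremental variant used in this record is not specified in the abstract, but the classical frequency-domain Wiener deconvolution it builds on can be sketched as follows; the noise-to-signal ratio `nsr` is an assumed regularisation constant, not a value from the paper:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Classical frequency-domain Wiener deconvolution.

    nsr is an assumed constant noise-to-signal power ratio that
    regularises the inverse filter where the PSF response is weak.
    """
    H = np.fft.fft2(psf, s=blurred.shape)          # optical transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))
```

    In a blind scheme like the one described above, an estimate such as the sensor-measured PSF seeds the iteration, and both the image and the PSF are then refined.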

  2. Non-contact image processing for gin trash sensors in stripper harvested cotton with burr and fine trash correction

    USDA-ARS?s Scientific Manuscript database

    This study was initiated to provide the basis for obtaining online information as to the levels of the various types of gin trash. The objective is to provide the ginner with knowledge of the quantity of the various trash components in the raw uncleaned seed cotton. This information is currently not...

  3. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images exceeds a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker than the corresponding pixel in the other images of the time series. The detection is composed of three steps. First, we compute a synthetic ortho-image covering the whole study area; its pixels take the median value of all input reflectance ortho-images intersecting at that pixel location. Second, for each input ortho-image, a pixel is labelled as shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels.
The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
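
    The core of the shadow-detection idea in this record, a per-pixel temporal median composite followed by a darkness threshold in the NIR channel, can be sketched as follows; the array layout and threshold value are illustrative assumptions:

```python
import numpy as np

def detect_shadows(nir_stack, threshold):
    """Label shadow candidates in each image of a registered time series.

    nir_stack : array of shape (n_images, rows, cols), NIR reflectance.
    A pixel is flagged when it is darker than the per-pixel temporal
    median composite by more than `threshold`.
    """
    synthetic = np.median(nir_stack, axis=0)      # synthetic median ortho-image
    return (synthetic - nir_stack) > threshold    # one boolean mask per image
```

    The median composite is robust to a shadow appearing in only one or two images, which is why it serves as the "shadow-free" reference; the paper's optional region-growing refinement is omitted here.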

  4. Development of an improved CAD scheme for automated detection of lung nodules in digital chest images.

    PubMed

    Xu, X W; Doi, K; Kobayashi, T; MacMahon, H; Giger, M L

    1997-09-01

    Lung cancer is the leading cause of cancer deaths in men and women in the United States, with a 5-year survival rate of only about 13%. However, this survival rate can be improved to 47% if the disease is diagnosed and treated at an early stage. In this study, we developed an improved computer-aided diagnosis (CAD) scheme for the automated detection of lung nodules in digital chest images to assist radiologists, who may miss up to 30% of actually positive cases in their daily practice. Two hundred PA chest radiographs, 100 normal and 100 abnormal, were used as the database for our study. The presence of nodules in the 100 abnormal cases was confirmed by two experienced radiologists on the basis of CT scans or radiographic follow-up. In our CAD scheme, nodule candidates were selected initially by multiple gray-level thresholding of the difference image (the subtraction between a signal-enhanced image and a signal-suppressed image) and then classified into six groups. A large number of false positives were eliminated by adaptive rule-based tests and an artificial neural network (ANN). The CAD scheme achieved, on average, a sensitivity of 70% with 1.7 false positives per chest image, a performance substantially better than those reported in other studies. The CPU time for processing one chest image was about 20 seconds on an IBM RISC/6000 Powerstation 590. We believe that the CAD scheme with its current performance is ready for initial clinical evaluation.
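
    The initial candidate-selection step in this record, multiple gray-level thresholding of the difference image, can be illustrated with a minimal sketch; the percentile levels below are assumptions for illustration, not the published values:

```python
import numpy as np

def candidate_masks(difference_image, n_levels=6):
    """Nodule-candidate selection by multiple gray-level thresholding.

    Returns one binary mask per threshold; thresholds are taken at
    evenly spaced upper percentiles of the difference image (the
    specific levels here are illustrative, not the published ones).
    """
    diff = np.asarray(difference_image, dtype=float)
    levels = np.percentile(diff, np.linspace(90, 99, n_levels))
    return [diff >= t for t in levels]
```

    Because the thresholds increase, the masks are nested; candidates that persist across several levels are the strong ones passed on to the rule-based tests and the ANN.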

  5. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy.

    PubMed

    Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R; Murshudov, Garib N; Short, Judith M; Scheres, Sjors H W; Henderson, Richard

    2013-12-01

    Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. 
A simple formula can be used to calculate an unbiased FSC from the two curves, even when a substantial amount of overfitting is present. The approach is software independent, so the user is completely free to use any established method, or novel combination of methods, provided the HR-noise test is carried out in parallel. Applying this procedure to cryoEM images of beta-galactosidase shows that overfitting varies greatly depending on the procedure, but in the best case there is no overfitting and a resolution of ~6 Å. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
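
    The phase-randomisation variant of the HR-noise substitution described in this record can be sketched for a single 2D image (the original method operates on particle image stacks; taking the real part after inverting is a simplification that leaves the untouched low-resolution shell exactly intact):

```python
import numpy as np

def randomise_phases_beyond(image, radius, seed=None):
    """Substitute random Fourier phases beyond a chosen resolution shell.

    Amplitudes are kept, so the substituted high-resolution content is
    pure noise with the same spectral power distribution as the data.
    """
    rng = np.random.default_rng(seed)
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    rr, cc = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(rr - rows // 2, cc - cols // 2)   # distance from DC
    high = dist > radius                              # beyond the chosen shell
    phases = rng.uniform(0.0, 2.0 * np.pi, size=F.shape)
    F[high] = np.abs(F[high]) * np.exp(1j * phases[high])
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

    Refining against such a substituted stack and comparing the resulting FSC with the original one reveals how much of the apparent high-resolution signal is reinforced noise.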

  6. Collection of sequential imaging events for research in breast cancer screening

    NASA Astrophysics Data System (ADS)

    Patel, M. N.; Young, K.; Halling-Brown, M. D.

    2016-03-01

    Due to the huge amount of research involving medical images, there is a widely accepted need for comprehensive collections of medical images to be made available for research. This demand led to the design and implementation of a flexible image repository, which retrospectively collects images and data from multiple sites throughout the UK. The OPTIMAM Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations, and expert-determined ground truths. Collection has been ongoing for over three years, providing the opportunity to collect sequential imaging events. Extensive alterations to the identification, collection, processing, and storage arms of the system have been undertaken to support the introduction of sequential events, including interval cancers. These updates to the collection systems allow the acquisition of many more images but, more importantly, allow one to build on the existing high-dimensional data stored in the OMI-DB. A research dataset of this scale, which includes original normal and subsequent malignant cases along with expert-derived and clinical annotations, is currently unique. These data provide a powerful resource for future research and have initiated new research projects, amongst which is the quantification of normal cases by applying a large number of quantitative imaging features, with a priori knowledge that these cases eventually develop a malignancy. This paper describes extensions to the OMI-DB collection systems and tools, and discusses prospective applications of such a rich dataset for future research.

  7. Initiation process of a thrust fault revealed by analog experiments

    NASA Astrophysics Data System (ADS)

    Yamada, Yasuhiro; Dotare, Tatsuya; Adam, Juergen; Hori, Takane; Sakaguchi, Hide

    2016-04-01

    We conducted 2D (cross-sectional) analog experiments with dry sand using a high-resolution digital image correlation (DIC) technique to reveal the initiation process of a thrust fault in detail, and identified a number of "weak shear bands" and minor uplift prior to the thrust initiation. The observations suggest that the process can be divided into three stages. Stage 1 is characterized by a series of abrupt and short-lived weak shear bands at the location where the thrust will later be generated; before initiation of the fault, the area that will become the hanging wall starts to uplift. Stage 2 is defined by the generation of the new thrust and its active displacement. The location of the new thrust seems to be constrained by its associated back-thrust, produced at the foot of the surface slope (by the previous thrust). The activity of the previous thrust drops to zero once the new thrust is generated, but the two events are not simultaneous. Stage 3 is characterized by constant displacement along the new thrust. Similar minor shear bands can be seen in the toe area of the Nankai accretionary prism, SW Japan, and we can correlate the along-strike variations in seismic profiles with the model results, which show the characteristic features of each stage of thrust development.

  8. Phase-contrast x-ray imaging of microstructure and fatigue-crack propagation in single-crystal nickel-base superalloys

    NASA Astrophysics Data System (ADS)

    Husseini, Naji Sami

    Single-crystal nickel-base superalloys are ubiquitous in demanding turbine-blade applications, and they owe their remarkable resilience to their dendritic, hierarchical microstructure and complex composition. During normal operations, they endure rapid low-stress vibrations that may initiate fatigue cracks. This failure mode in the very-high-cycle regime is poorly understood, in part due to inadequate testing and diagnostic equipment. Phase-contrast imaging with coherent synchrotron x-rays, however, is an emerging technique ideally suited to dynamic processes such as crack initiation and propagation. A specially designed portable ultrasonic-fatigue apparatus, coupled with x-ray radiography, allows real-time, in situ imaging while simulating service conditions. Three contrast mechanisms - absorption, diffraction, and phase contrast - span the immense breadth of microstructural features in superalloys. Absorption contrast is sensitive to composition and crack displacements, and diffraction contrast illuminates dislocation aggregates and crystallographic misorientations. Phase contrast enhances electron-density gradients and is particularly useful for fatigue-crack studies, being sensitive to internal crack tips and openings less than one micrometer. Superalloy samples were imaged without external stresses to study microstructure and mosaicity. Maps of rhenium and tungsten concentrations revealed strong segregation to the center of dendrites, as manifested by absorption contrast. Though nominally single crystals, dendrites were misoriented from the bulk by a few degrees, as revealed by diffraction contrast. For dynamic studies of cyclic fatigue, superalloys were mounted in the portable ultrasonic-fatigue apparatus, subjected to a mean tensile stress of ˜50-150 MPa, and cycled in tension to initiate and propagate fatigue cracks. Radiographs were recorded every thousand cycles over the multimillion-cycle lifetime to measure micron-scale crack growth.
Crack openings were very small, as determined by absorption and phase contrast, and suggested multiple fracture modes for propagation along {111} planes at room temperature, which was verified by finite element analysis. With increasing temperature, cracks became Mode I (perpendicular to the loading axis) in character and more sensitive to the microstructure. Advancing plastic zones ahead of crack tips altered the crystallographic quality, from which diffraction contrast anticipated initiation and propagation. These studies demonstrate the extreme sensitivity of x-ray radiography for detailed studies of superalloys and crack growth processes.

  9. Strategic Review Process for an Accountable Care Organization and Emerging Accountable Care Best Practices.

    PubMed

    Conway, Sarah J; Himmelrich, Sarah; Feeser, Scott A; Flynn, John A; Kravet, Steven J; Bailey, Jennifer; Hebert, Lindsay C; Donovan, Susan H; Kachur, Sarah G; Brown, Patricia M C; Baumgartner, William A; Berkowitz, Scott A

    2018-02-02

    Accountable Care Organizations (ACOs), like other care entities, must be strategic about which initiatives they support in the quest for higher value. This article reviews the current strategic planning process for the Johns Hopkins Medicine Alliance for Patients (JMAP), a Medicare Shared Savings Program Track 1 ACO. It reviews the 3 focus areas for the 2017 strategic review process - (1) optimizing care coordination for complex, at-risk patients, (2) post-acute care, and (3) specialty care integration - reviewing cost savings and quality improvement opportunities, associated best practices from the literature, and opportunities to leverage and advance existing ACO and health system efforts in each area. It then reviews the ultimate selection of priorities for the coming year and early thoughts on implementation. After the robust review process, key stakeholders voted to select interventions targeted at care coordination, post-acute care, and specialty integration including Part B drug and imaging costs. The interventions selected incorporate a mixture of enhancing current ACO initiatives, working collaboratively and synergistically on other health system initiatives, and taking on new projects deemed targeted, cost-effective, and manageable in scope. The annual strategic review has been an essential and iterative process based on performance data and informed by the collective experience of other organizations. The process allows for an evidence-based strategic plan for the ACO in pursuit of the best care for patients.

  10. A Preliminary Comparison of Three Dimensional Particle Tracking and Sizing using Plenoptic Imaging and Digital In-line Holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham

    2015-12-01

    Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have previously been investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and the tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields, including measurement of the 3D particle positions, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolution. However, collimated and coherent illumination makes holography susceptible to image distortion through index-of-refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as to develop data processing methodologies optimized for the plenoptic measurement.

  11. A Preliminary Comparison of Three Dimensional Particle Tracking and Sizing using Plenoptic Imaging and Digital In-line Holography [PowerPoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham

    2015-12-01

    Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have previously been investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and the tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields, including measurement of the 3D particle positions, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolution. However, collimated and coherent illumination makes holography susceptible to image distortion through index-of-refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as to develop data processing methodologies optimized for the plenoptic measurement.

  12. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    PubMed

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite in applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environmental phenomena using tiny, low-power cameras on resource-limited embedded devices, can be considered an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has a direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well classified by a CNN model trained in advance on adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
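The resize-then-quantize idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the 2x downscale factor, 4-bit channel quantization, and the toy image are all illustrative assumptions.

```python
# Sketch of pre-processing on a sensor node: downscale an image and
# quantize its colors before transmission. The image is a 2D grid of
# (R, G, B) tuples with 8-bit channels.

def downscale(pixels, factor):
    """Nearest-neighbor downscaling of a 2D grid of RGB tuples."""
    return [row[::factor] for row in pixels[::factor]]

def quantize_channel(value, bits):
    """Reduce an 8-bit channel to `bits` bits, mapped back onto 0-255."""
    step = 256 // (1 << bits)
    return (value // step) * step

def compress(pixels, factor=2, bits=4):
    small = downscale(pixels, factor)
    return [[tuple(quantize_channel(c, bits) for c in px) for px in row]
            for row in small]

# A toy 4x4 "image" of reddish pixels with slight channel noise.
image = [[(200 + x, 10, 10) for x in range(4)] for _ in range(4)]
out = compress(image)
# 4x4 -> 2x2 pixels; every channel snapped to a multiple of 16.
```

Fewer pixels and fewer distinct channel values shrink the payload before radio transmission, which is where the paper's energy savings come from.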

  13. Implementation of Remote 3-Dimensional Image Guided Radiation Therapy Quality Assurance for Radiation Therapy Oncology Group Clinical Trials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui Yunfeng; Galvin, James M.; Radiation Therapy Oncology Group, American College of Radiology, Philadelphia, Pennsylvania

    2013-01-01

    Purpose: To report the process and initial experience of remote credentialing of three-dimensional (3D) image guided radiation therapy (IGRT) as part of the quality assurance (QA) of submitted data for Radiation Therapy Oncology Group (RTOG) clinical trials; and to identify major issues resulting from this process and analyze the review results on patient positioning shifts. Methods and Materials: Image guided radiation therapy datasets, including in-room positioning CT scans and daily shifts applied, were submitted through the Image Guided Therapy QA Center from institutions for the IGRT credentialing process, as required by various RTOG trials. A centralized virtual environment was established at the RTOG Core Laboratory, containing analysis tools and database infrastructure for remote review by the Physics Principal Investigators of each protocol. The appropriateness of IGRT technique and volumetric image registration accuracy were evaluated. Registration accuracy was verified by repeat registration with a third-party registration software system. With the accumulated review results, registration differences between those obtained by the Physics Principal Investigators and those from the institutions were analyzed for different imaging sites, shift directions, and imaging modalities. Results: The remote review process was successfully carried out for 87 3D cases (out of 137 total cases, including 2-dimensional and 3D) during 2010. Frequent errors in submitted IGRT data and challenges in the review of image registration for some special cases were identified, and workarounds for these issues were developed. The average differences of registration results between reviewers and institutions ranged between 2 mm and 3 mm. Large discrepancies in the superior-inferior direction were found for megavoltage CT cases, owing to the low spatial resolution of most megavoltage CT scans in this direction. Conclusion: This first experience indicated that remote review of 3D IGRT as part of QA for RTOG clinical trials is feasible and effective. The magnitude of registration discrepancy between institution and reviewer was presented, and the major issues were investigated to further improve this remote evaluation process.

  14. State selectivity and dynamics in dissociative electron attachment to CF₃I revealed through velocity slice imaging.

    PubMed

    Ómarsson, Frímann H; Mason, Nigel J; Krishnakumar, E; Ingólfsson, Oddur

    2014-11-03

    In light of its substantially more environmentally friendly nature, CF3I is currently being considered as a replacement for the highly potent global-warming gas CF4, which is used extensively in plasma processing. In this context, we have studied the electron-driven dissociation of CF3I to form CF3(-) and I, and we compare this process to the corresponding photolysis channel. By using the velocity slice imaging (VSI) technique we can visualize the complete dynamics of this process and show that electron-driven dissociation proceeds from the same initial parent state as the corresponding photolysis process. However, in contrast to photolysis, which leads nearly exclusively to the (2)P(1/2) excited state of iodine, electron-induced dissociation leads predominantly to the (2)P(3/2) ground state. We believe that the changed spin state of the negative ion allows an adiabatic dissociation through a conical intersection, whereas this path is efficiently repressed by a required spin flip in the photolysis process. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Second cancers discovered by (18)FDG PET/CT imaging for choroidal melanoma.

    PubMed

    Chin, Kimberly; Finger, Paul T; Kurli, Madhavi; Tena, Lawrence B; Reddy, Shantan

    2007-08-01

    Positron-emission tomography/computed tomography (PET/CT) is a unique imaging tool that aids in the detection of cancerous lesions. It is currently in wide use for cancer staging (both initial and follow-up). Here we report our findings of second primary cancers incidentally discovered during PET/CT staging of patients with choroidal melanomas. We performed a retrospective case review of 139 patients with uveal melanoma who were subsequently evaluated by whole-body [18-fluorine-labeled] 2-deoxy-2-fluoro-D-glucose ((18)FDG) PET/CT imaging. In this series, 93 were scanned before treatment and 46 during the course of their follow-up systemic examinations. Their mean follow-up was 50.9 months. Six patients (4.3%) had second primary cancers revealed by PET/CT imaging. Three patients (50%) were synchronous (found at initial staging), and the remaining 3 patients (50%) were metachronous (found at follow-up staging). Second primary cancers were found in the lung, breast, uterus, colon, and thyroid. Although whole-body PET/CT scans were ordered as part of the staging process of patients with diagnosed choroidal melanoma, both synchronous and metachronous second primary cancers were found. PET/CT has become an indispensable tool for staging, diagnosis, and treatment planning for choroidal melanoma. The possibility of detecting second primary cancers should also be considered valuable.

  16. Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme

    NASA Astrophysics Data System (ADS)

    Hsin, Cheng-Ho; Inigo, Rafael M.

    1990-03-01

    The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed, derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that, because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided, so the error in gradient measurement is reduced significantly. The third advantage is that, during motion detection and estimation, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by the parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested on both synthetic and real images, and the results of simulations are very satisfactory.
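The gradient constraint underlying schemes like the one above can be illustrated on a 1D signal: for a pure translation, the temporal and spatial derivatives satisfy I_t + v·I_x = 0, so v ≈ -I_t / I_x. The ramp signal and one-pixel shift below are illustrative, not the paper's filtered image sequences.

```python
# Minimal 1D gradient-based velocity estimate from two frames.

def central_diff(signal, i):
    """Central-difference estimate of the spatial derivative at index i."""
    return (signal[i + 1] - signal[i - 1]) / 2.0

def estimate_velocity(frame_a, frame_b, i):
    """Estimate motion (pixels/frame) at index i from two frames."""
    ix = (central_diff(frame_a, i) + central_diff(frame_b, i)) / 2.0
    it = frame_b[i] - frame_a[i]  # temporal derivative
    return -it / ix

# A linear intensity ramp translated right by exactly 1 pixel per frame.
frame_a = [float(x) for x in range(10)]
frame_b = [x - 1.0 for x in frame_a]
v = estimate_velocity(frame_a, frame_b, 5)
# For a translating linear ramp the estimate is exact: v == 1.0
```

The aperture problem the abstract mentions appears when I_x vanishes or when only one such equation is available per point; the paper's filtered image sequences supply a family of constraint equations instead of this single one.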

  17. Toward a Framework for Benefit-Risk Assessment in Diagnostic Imaging: Identifying Scenario-specific Criteria.

    PubMed

    Agapova, Maria; Bresnahan, Brian W; Linnau, Ken F; Garrison, Louis P; Higashi, Mitchell; Kessler, Larry; Devine, Beth

    2017-05-01

    Diagnostic imaging has many effects, and there is no common definition of value in diagnostic radiology. As benefit-risk trade-offs are rarely made explicit, it is not clear which framework is used in clinical guideline development. We describe initial steps toward the creation of a benefit-risk framework for diagnostic radiology. We performed a literature search and an online survey of physicians to identify and collect benefit-risk criteria (BRC) relevant to diagnostic imaging tests. We operationalized a process for selection of BRC with the use of four clinical use case scenarios that vary by diagnostic alternatives and clinical indication. Respondent BRC selections were compared across clinical scenarios and between radiologists and nonradiologists. Thirty-six BRC were identified and organized into three domains: (1) those that account for differences attributable only to the test or device (n = 17); (2) those that account for clinical management and provider experiences (n = 12); and (3) those that capture patient experience (n = 7). Forty-eight survey participants selected 22 criteria from the initial list in the survey (9-11 per case). Engaging ordering physicians increased the number of criteria selected in each of the four clinical scenarios presented. We developed a process for standardizing selection of BRC in guideline development. These results suggest that a process relying on elements of comparative effectiveness and the use of standardized BRC may ensure consistent examination of differences among alternatives by making explicit the implicit trade-offs that otherwise enter the decision-making space and detract from consistency and transparency. These findings also highlight the need for multidisciplinary teams that include input from ordering physicians. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  18. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-cost, highly efficient image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has promising commercial applications, such as urban planning and first response. The methodology introduced in this thesis provides a feasible path toward fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, they differ in translation, rotation, and scale. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transformation matrix contains the parameters describing the required translation, rotation, and scale. The methodology presented in the thesis has been shown to behave well on test data, and its robustness is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result containing a larger offset than that of the test data, because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
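One step of such an alignment can be sketched in isolation: given matched 3D keypoints from two point clouds, recover the scale and translation between their coordinate frames from centroids and RMS spreads. This is a simplified illustration, not the thesis's registration pipeline; in particular, rotation is assumed to be identity here (a full registration would also solve for it, e.g. via SVD), and the point sets are synthetic.

```python
import math

def centroid(points):
    """Component-wise mean of a list of 3D points."""
    n = float(len(points))
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def rms_spread(points, c):
    """RMS distance of the points from centroid c."""
    return math.sqrt(sum(sum((p[k] - c[k]) ** 2 for k in range(3))
                         for p in points) / len(points))

def estimate_scale_translation(src, dst):
    """Recover scale s and translation t such that dst ~ s*src + t."""
    cs, cd = centroid(src), centroid(dst)
    scale = rms_spread(dst, cd) / rms_spread(src, cs)
    translation = tuple(cd[k] - scale * cs[k] for k in range(3))
    return scale, translation

src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]
# dst = 2 * src + (10, 20, 30): a pure scale-and-translate of src.
dst = [(2 * x + 10, 2 * y + 20, 2 * z + 30) for (x, y, z) in src]
scale, t = estimate_scale_translation(src, dst)
```

With noisy or partially mismatched keypoints, this closed-form estimate serves only as the rough initial alignment; an iterative refinement (as in the thesis's final optimization) then minimizes the residual point-to-point error.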

  19. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
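The frame-to-frame seeding idea above, where each frame's edge detection starts from the previous frame's edge coordinates, can be sketched on a 1D scanline. The step-edge signal, window size, and jump-based edge detector are illustrative assumptions, not the patent's echo-edge recognition algorithm.

```python
# Track an intensity edge across frames by searching a small window
# around the edge position found in the previous frame.

def find_edge(scanline, seed, window=2):
    """Return the index of the largest intensity jump near `seed`."""
    lo = max(1, seed - window)
    hi = min(len(scanline) - 1, seed + window + 1)
    return max(range(lo, hi), key=lambda i: abs(scanline[i] - scanline[i - 1]))

def track_edges(frames, initial_edge):
    """Locate the edge in each frame, seeding from the prior frame."""
    edges, edge = [], initial_edge
    for scanline in frames:
        edge = find_edge(scanline, edge)
        edges.append(edge)
    return edges

def step(pos, n=12):
    """A 1D step edge: zeros up to `pos`, ones afterward."""
    return [0.0] * pos + [1.0] * (n - pos)

# A step edge drifting right by 1 pixel per frame, as an artery wall
# boundary might between successive ultrasound frames.
frames = [step(5), step(6), step(7)]
edges = track_edges(frames, initial_edge=5)
# The edge is re-found at indices 5, 6, 7 as it drifts.
```

Restricting each search to a window around the prior frame's answer is what makes the tracking robust to nearby spurious echoes while still following slow wall motion across the sequence.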

  20. An efficient and secure partial image encryption for wireless multimedia sensor networks using discrete wavelet transform, chaotic maps and substitution box

    NASA Astrophysics Data System (ADS)

    Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.

    2017-03-01

    Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids, and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e., audio, video, and digital images. Due to the bulky nature of multimedia data, WSNs process large volumes of multimedia data, which significantly increases computational complexity and hence reduces battery time. Given battery-life constraints, image compression combined with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmission of data must be secured through a process known as encryption. As a result, there has long been an intensive demand for schemes that are energy efficient as well as highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps, and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and then the image is shuffled column-wise and row-wise via a Piece-Wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. For higher security, the initial conditions for the PWLCM are made dependent on the hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance security further, the final ciphertext is obtained by substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
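Two of the ingredients above, a PWLCM-driven permutation and a bitwise XOR with a chaotic keystream, can be sketched on a byte string. This is a toy illustration of the general technique, not the paper's scheme: the seed, the map parameter, the byte-level (rather than row/column) shuffle, and the omission of the DWT, hash dependence, and S-box are all simplifying assumptions.

```python
# Toy chaotic shuffle-and-XOR cipher built on a PWLCM.

def pwlcm(x, p=0.3):
    """One iteration of the piece-wise linear chaotic map on (0, 1)."""
    if x >= 0.5:          # the map is symmetric about 0.5
        x = 1.0 - x
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def chaotic_sequence(x0, n, p=0.3):
    seq, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        seq.append(x)
    return seq

def encrypt(data, x0=0.123456):
    seq = chaotic_sequence(x0, len(data))
    # Permutation: order byte positions by the chaotic values.
    perm = sorted(range(len(data)), key=lambda i: seq[i])
    shuffled = bytes(data[i] for i in perm)
    # Keystream XOR: quantize the same chaotic values to bytes.
    keystream = bytes(int(v * 255) for v in seq)
    return bytes(a ^ b for a, b in zip(shuffled, keystream))

def decrypt(cipher, x0=0.123456):
    seq = chaotic_sequence(x0, len(cipher))
    perm = sorted(range(len(cipher)), key=lambda i: seq[i])
    keystream = bytes(int(v * 255) for v in seq)
    shuffled = bytes(a ^ b for a, b in zip(cipher, keystream))
    out = bytearray(len(cipher))
    for j, i in enumerate(perm):  # undo the permutation
        out[i] = shuffled[j]
    return bytes(out)

plain = b"wireless sensor image data"
cipher = encrypt(plain)
restored = decrypt(cipher)
```

Because both sides regenerate the identical chaotic sequence from the shared seed, the permutation and keystream never need to be transmitted; the paper strengthens exactly this seed by deriving the PWLCM initial conditions from a hash of the data.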
