Sample records for source image processing

  1. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo-sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively within the domain of image forensics, together with a classification of ongoing developments in the area. The classification of existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely optical aberration based, sensor camera fingerprint based, processing statistics based and processing regularities based. Furthermore, this paper investigates the challenging problems and the strategies proposed in such schemes, and uses the suggested taxonomy to plot an evolution of source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Image change detection systems, methods, and articles of manufacture

    DOEpatents

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image on the display device to enable identification of differences between the source image and the target image.
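
    The workflow described in this record (edge-image construction, alignment, and comparison) can be illustrated with a short, hedged sketch using scikit-image and SciPy; the function names and the final difference map below are illustrative and are not taken from the patent, which toggles the display interactively rather than computing a difference image.

      # Hedged sketch of the edge-based change-detection workflow described above.
      import numpy as np
      from skimage import io, filters
      from skimage.registration import phase_cross_correlation
      from scipy.ndimage import shift as nd_shift

      def detect_changes(source_path, target_path):
          source = io.imread(source_path, as_gray=True).astype(float)
          target = io.imread(target_path, as_gray=True).astype(float)

          # Construct edge images so illumination/band differences matter less.
          source_edges = filters.sobel(source)
          target_edges = filters.sobel(target)

          # Align the target edge image to the source edge image (translation only).
          offset, _, _ = phase_cross_correlation(source_edges, target_edges)
          target_aligned = nd_shift(target_edges, offset)

          # A simple difference map stands in for interactively switching the display.
          return np.abs(source_edges - target_aligned)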

  3. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
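
    As a brief illustration of the kind of pipeline the library provides, the following example uses documented scikit-image functions to denoise a bundled test image, compute an edge map, and threshold it.

      # Load a bundled sample image, denoise it, detect edges, and threshold.
      from skimage import data, filters, restoration, util

      image = util.img_as_float(data.camera())          # bundled 8-bit test image
      denoised = restoration.denoise_tv_chambolle(image, weight=0.1)
      edges = filters.sobel(denoised)                   # gradient-magnitude edge map
      mask = edges > filters.threshold_otsu(edges)
      print(mask.sum(), "edge pixels detected")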

  4. Single-random-phase holographic encryption of images

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.

    2017-02-01

    In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image could be filled with randomly generated contents. As such, the global image as a whole is very different from the source image, but at the same time the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bi-directional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to different forms of plain-text attacks, which are commonly used to deduce the encryption key in existing holographic encryption methods. In addition, both the encryption and the decryption processes are simple and easy to implement.

  5. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  6. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  7. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1983-05-03

    Keywords: ...Processing, White-Light Holography, Image Subtraction, Image Deblurring, Coherence Requirement, Apparent Transfer Function, Source Encoding, Signal... Work in this period also demonstrated several color image processing capabilities, among them broadband color image deblurring and color image subtraction. Report contents include: Broadband Image Deblurring; Color Image Subtraction; Rainbow Holographic Aberrations.

  8. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    PubMed

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
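
    The speckle-variance contrast described here is, at its core, the per-pixel variance of intensity across N repeated, registered B-scans. A minimal NumPy sketch of that calculation follows; the authors performed it on a GPU, and swapping numpy for cupy would keep the same calls.

      # Interframe speckle variance: per-pixel variance across N registered B-scans.
      import numpy as np

      def speckle_variance(bscans):
          """bscans: array of shape (N, depth, width) of registered structural frames."""
          bscans = np.asarray(bscans, dtype=float)
          mean_frame = bscans.mean(axis=0)
          return ((bscans - mean_frame) ** 2).mean(axis=0)   # per-pixel variance

      # Example with N = 4 frames, as in the fingernail-root imaging described above.
      frames = np.random.rand(4, 512, 512)
      sv_image = speckle_variance(frames)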

  9. Detecting Moving Sources in Astronomical Images (Abstract)

    NASA Astrophysics Data System (ADS)

    Block, A.

    2018-06-01

    (Abstract only) Source detection in images is an important part of analyzing astronomical data. This project discusses an implementation of source detection in Python, as well as processes for performing photometry in Python. The application of these tools to searching for moving sources is also discussed.
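
    The abstract gives no code, but a generic source-detection step of this kind can be sketched with NumPy and SciPy alone: threshold the background-subtracted image and label the connected bright regions. The threshold and noise model below are illustrative assumptions, not the author's implementation.

      # Generic threshold-and-label source detection sketch.
      import numpy as np
      from scipy import ndimage

      def detect_sources(image, nsigma=5.0):
          background = np.median(image)
          noise = np.std(image)
          mask = image > background + nsigma * noise
          labels, nsources = ndimage.label(mask)
          centroids = ndimage.center_of_mass(image - background, labels,
                                             index=np.arange(1, nsources + 1))
          return labels, centroids

      image = np.random.normal(100.0, 5.0, (256, 256))
      image[120:123, 80:83] += 200.0            # inject a fake source
      labels, centroids = detect_sources(image)
      print(len(centroids), "sources found near", centroids[:1])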

  10. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  11. Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications

    NASA Technical Reports Server (NTRS)

    Redmann, G. H.

    1976-01-01

    Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The Proceedings contains 37 papers and abstracts, including many illustrations (some in color) and provides a single reference source for the user community regarding the ordering and obtaining of NASA-developed image-processing software and science data.

  12. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    Image fusion consolidates data and information from multiple images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to combine the source images into a single compact image that depicts the scene more accurately than any of the individual source images. In addition, the fused image retains good quality without distorted appearance or loss of data. The DCT is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into its R, G, and B channels for each source image. (2) The DCT is applied to each channel (R, G, and B). (3) Variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Each block of R of the source images is compared with its counterparts based on the variance value, and the block with the maximum variance is selected as the block in the new image; this process is repeated for all channels of the source images. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts that reduce the quality of the fused image. The proposed approach is evaluated using three measures: the average of Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of this proposed technique show good performance compared with older techniques. © 2016 Wiley Periodicals, Inc.
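
    Steps (2)-(5) can be sketched for a single channel of two source images using SciPy's DCT routines. The sketch below reads step (3) as computing the variance of each block's DCT coefficients; border handling and the quality metrics are omitted, and it is not the authors' implementation.

      # Block-wise DCT fusion for one colour channel of two source images.
      import numpy as np
      from scipy.fft import dctn, idctn

      def fuse_channel(chan_a, chan_b, block=8):
          fused = np.zeros_like(chan_a, dtype=float)
          for i in range(0, chan_a.shape[0] - block + 1, block):
              for j in range(0, chan_a.shape[1] - block + 1, block):
                  block_a = dctn(chan_a[i:i+block, j:j+block], norm='ortho')
                  block_b = dctn(chan_b[i:i+block, j:j+block], norm='ortho')
                  # Step (4): keep the block whose coefficients have larger variance.
                  chosen = block_a if block_a.var() > block_b.var() else block_b
                  # Step (5): inverse DCT back to pixel values.
                  fused[i:i+block, j:j+block] = idctn(chosen, norm='ortho')
          return fused

      a = np.random.rand(64, 64)   # e.g. the R channel of source image A
      b = np.random.rand(64, 64)   # the R channel of source image B
      fused_r = fuse_channel(a, b)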

  13. Resiliency of the Multiscale Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.

    1998-01-01

    The multiscale retinex with color restoration (MSRCR) continues to prove itself in extensive testing to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. However, issues remain with regard to the resiliency of the MSRCR to different image sources and arbitrary image manipulations which may have been applied prior to retinex processing. In this paper we define these areas of concern, provide experimental results, and examine the effects of commonly occurring image manipulations on retinex performance. In virtually all cases the MSRCR is highly resilient to the effects of both image source variations and commonly encountered prior image processing. Significant artifacts are primarily observed for the case of selective color channel clipping in large dark zones in an image. These issues are of concern in the processing of digital image archives and other applications where there is neither control over the image acquisition process nor knowledge about any processing done on the data beforehand.
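
    For reference, the multiscale retinex core (without the colour-restoration step) is commonly written as an average over scales of the difference between the log image and the log of its Gaussian-blurred surround. The sketch below uses typical scale values and is not the authors' implementation.

      # Multiscale retinex core for a single channel (no colour restoration).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_retinex(channel, sigmas=(15, 80, 250)):
          channel = channel.astype(float) + 1.0          # avoid log(0)
          out = np.zeros_like(channel)
          for sigma in sigmas:
              surround = gaussian_filter(channel, sigma)
              out += np.log(channel) - np.log(surround)
          return out / len(sigmas)

      gray = np.random.randint(0, 256, (128, 128)).astype(float)
      enhanced = multiscale_retinex(gray)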

  14. ImagingSIMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-11-06

    ImagingSIMS is an open source application for loading, processing, manipulating and visualizing secondary ion mass spectrometry (SIMS) data. At PNNL, a separate branch has been further developed to incorporate application specific features for dynamic SIMS data sets. These include loading CAMECA IMS-1280, NanoSIMS and modified IMS-4f raw data, creating isotopic ratio images and stitching together images from adjacent interrogation regions. In addition to other modifications of the parent open source version, this version is equipped with a point-by-point image registration tool to assist with streamlining the image fusion process.

  15. Photo-reconnaissance applications of computer processing of images.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1971-01-01

    An imaging processing technique is developed for enhancement and calibration of imaging experiments. The technique is shown to be useful not only for the original application but also when applied to images from a wide variety of sources.

  16. Advantages and Disadvantages in Image Processing with Free Software in Radiology.

    PubMed

    Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan

    2018-01-15

    Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and the source code is freely available, and they can therefore be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, this last group of free applications has limitations in its use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These radiological devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost which always depends on the software provider and is always subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.

  17. Imagery encoding and false recognition errors: Examining the role of imagery process and imagery content on source misattributions.

    PubMed

    Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna

    2010-11-01

    Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).

  18. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

  19. APT: Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ

    2012-08-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
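
    The basic calculation behind a circular aperture with a sky annulus can be illustrated with a few lines of NumPy. This is a generic sketch, not APT's own code, and omits uncertainty propagation, outlier rejection and the source/sky models mentioned above.

      # Circular aperture sum minus median sky measured in an annulus.
      import numpy as np

      def aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
          yy, xx = np.indices(image.shape)
          r = np.hypot(xx - x0, yy - y0)
          aperture = r <= r_ap
          annulus = (r >= r_in) & (r <= r_out)
          sky_per_pixel = np.median(image[annulus])
          flux = image[aperture].sum() - sky_per_pixel * aperture.sum()
          sky_sigma = np.std(image[annulus])      # local sky variability
          return flux, sky_per_pixel, sky_sigma

      image = np.random.normal(50.0, 3.0, (64, 64))
      image[30:34, 30:34] += 500.0                # synthetic point source
      print(aperture_photometry(image, 31.5, 31.5))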

  20. Youpi: YOUr processing PIpeline

    NASA Astrophysics Data System (ADS)

    Monnerville, Mathias; Sémah, Gregory

    2012-03-01

    Youpi is a portable, easy to use web application providing high level functionalities to perform data reduction on scientific FITS images. Built on top of various open source reduction tools released to the community by TERAPIX (http://terapix.iap.fr), Youpi can help organize data, manage processing jobs on a computer cluster in real time (using Condor) and facilitate teamwork by allowing fine-grain sharing of results and data. Youpi is modular and comes with plugins which perform, from within a browser, various processing tasks such as evaluating the quality of incoming images (using the QualityFITS software package), computing astrometric and photometric solutions (using SCAMP), resampling and co-adding FITS images (using SWarp) and extracting sources and building source catalogues from astronomical images (using SExtractor). Youpi is useful for small to medium-sized data reduction projects; it is free and is published under the GNU General Public License.

  1. The Hubble Legacy Archive: Data Processing in the Era of AstroDrizzle

    NASA Astrophysics Data System (ADS)

    Strolger, Louis-Gregory; Hubble Legacy Archive Team, The Hubble Source Catalog Team

    2015-01-01

    The Hubble Legacy Archive (HLA) expands the utility of Hubble Space Telescope wide-field imaging data by providing high-level composite images and source lists, perusable and immediately available online. The latest HLA data release (DR8.0) marks a fundamental change in how these image combinations are produced, using DrizzlePac tools and Astrodrizzle to reduce geometric distortion and provide improved source catalogs for all publicly available data. We detail the HLA data processing and source list schemas, what products are newly updated and available for WFC3 and ACS, and how these data products are further utilized in the production of the Hubble Source Catalog. We also discuss plans for future development, including updates to WFPC2 products and field mosaics.

  2. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  3. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.

  4. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  5. Semi-automated Image Processing for Preclinical Bioluminescent Imaging.

    PubMed

    Slavine, Nikolai V; McColl, Roderick W

    Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media; having determined an initial-order approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of the volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
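
    The abstract mentions MLEM and an iterative deconvolution step. As a loose illustration only, a generic MLEM (Richardson-Lucy-style) iteration for a convolutional forward model is sketched below; it is not the authors' reconstruction pipeline, and the blur kernel is a made-up stand-in.

      # Generic MLEM / Richardson-Lucy iteration for a convolutional forward model.
      import numpy as np
      from scipy.signal import fftconvolve

      def mlem_deconvolve(measured, psf, n_iter=25, eps=1e-12):
          measured = np.clip(np.asarray(measured, float), 0.0, None)
          estimate = np.full_like(measured, measured.mean())
          psf_flipped = psf[::-1, ::-1]
          for _ in range(n_iter):
              forward = fftconvolve(estimate, psf, mode='same') + eps
              ratio = measured / forward
              estimate *= fftconvolve(ratio, psf_flipped, mode='same')
          return estimate

      psf = np.outer(*(np.hanning(15),) * 2); psf /= psf.sum()
      truth = np.zeros((128, 128)); truth[60:68, 60:68] = 1.0
      measured = fftconvolve(truth, psf, mode='same')
      recovered = mlem_deconvolve(measured, psf)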

  6. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The endeavor of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. Two main steps are involved in this process. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
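
    A single-level DWT fusion with the maximum-selection rule mentioned above can be sketched with PyWavelets; the HVS sub-band weighting itself is specific to the paper and is not reproduced here.

      # Single-level DWT fusion with a maximum-selection rule on detail coefficients.
      import numpy as np
      import pywt

      def dwt_fuse(img_a, img_b, wavelet='db2'):
          coeffs_a = pywt.dwt2(img_a, wavelet)   # (cA, (cH, cV, cD))
          coeffs_b = pywt.dwt2(img_b, wavelet)
          cA = 0.5 * (coeffs_a[0] + coeffs_b[0])            # average approximation band
          details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                          for da, db in zip(coeffs_a[1], coeffs_b[1]))
          return pywt.idwt2((cA, details), wavelet)

      a = np.random.rand(128, 128)
      b = np.random.rand(128, 128)
      fused = dwt_fuse(a, b)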

  7. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1982-05-03

    ...artifact noise. However, the deblurring spatial filter that we used was for a narrow spectral band centered at 5154 Å green light. To compensate for the scaling... Keywords: White-Light Processing, White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring. ...optical processing technique, we had shown that the incoherent source technique provides better image quality and very low coherent artifact noise.

  8. Raspberry Pi-powered imaging for plant phenotyping.

    PubMed

    Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A

    2018-03-01

    Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
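
    Extracting the simple traits mentioned here (area, height, average colour) from a segmented plant image can be illustrated with NumPy alone, as in the hedged sketch below; PlantCV provides richer, validated versions of these measurements.

      # Simple shoot traits from an RGB image and a boolean plant mask.
      import numpy as np

      def shoot_traits(rgb_image, mask):
          """rgb_image: (H, W, 3) array; mask: boolean (H, W) array of plant pixels."""
          area_px = int(mask.sum())
          rows = np.where(mask.any(axis=1))[0]
          height_px = int(rows.max() - rows.min() + 1) if rows.size else 0
          mean_rgb = rgb_image[mask].mean(axis=0) if area_px else np.zeros(3)
          return {"area_px": area_px, "height_px": height_px, "mean_rgb": mean_rgb}

      rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
      mask = np.zeros((480, 640), dtype=bool)
      mask[100:400, 300:340] = True            # stand-in for a segmented shoot
      print(shoot_traits(rgb, mask))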

  9. Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.

    PubMed

    Gutschwager, Berndt; Hollandt, Jörg

    2015-12-20

    We present a novel method of nonuniformity correction of imaging systems in a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution and only requires sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images in relation to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance distribution and a thermal imager of a predefined nonuniform focal plane array responsivity is presented.

  10. Sparse Representation for Color Image Restoration (PREPRINT)

    DTIC Science & Technology

    2006-10-01

    ...as a universal denoiser of images, which learns the posterior from the given image in a way inspired by the Lempel-Ziv universal compression... such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In... describe the data source. Such a model becomes paramount when developing algorithms for processing these signals. In this context, Markov-Random-Field...

  11. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.

  12. Image matrix processor for fast multi-dimensional computations

    DOEpatents

    Roberson, G.P.; Skeate, M.F.

    1996-10-15

    An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

  13. Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources

    NASA Astrophysics Data System (ADS)

    Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling

    2018-04-01

    The defocusing effect is inevitable and degrades the image quality in the conventional optical imaging process significantly due to the close confinement of the imaging lens. Based on classical optical coherent theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.

  14. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  15. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    PubMed

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
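
    The out-of-core idea can be illustrated generically: stream the hyperspectral cube from disk in slabs with numpy.memmap instead of loading it into RAM. The sketch below is not SIproc's API (which additionally streams data to the GPU), and the file layout in the usage comment is a hypothetical example.

      # Out-of-core band statistics over a large hyperspectral cube via memmap.
      import numpy as np

      def band_means(path, shape, dtype=np.float32, chunk_rows=256):
          """Mean spectrum of a (rows, cols, bands) cube stored as a raw binary file."""
          cube = np.memmap(path, dtype=dtype, mode='r', shape=shape)
          rows, cols, bands = shape
          total = np.zeros(bands, dtype=np.float64)
          for r0 in range(0, rows, chunk_rows):
              chunk = np.asarray(cube[r0:r0 + chunk_rows])   # only this slab in memory
              total += chunk.reshape(-1, bands).sum(axis=0)
          return total / (rows * cols)

      # Hypothetical layout: 2048 x 2048 pixels x 512 bands of float32 samples.
      # mean_spectrum = band_means("cube.raw", (2048, 2048, 512))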

  16. System and method for bullet tracking and shooter localization

    DOEpatents

    Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA

    2011-06-21

    A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
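
    The trajectory-estimation step can be illustrated with a minimal constant-velocity Kalman filter over 2D image coordinates. This is a generic textbook sketch, not the patented system, which also performs streak detection and shooter localization from the estimated trajectories.

      # Constant-velocity Kalman filter over 2D measurements (e.g. streak centroids).
      import numpy as np

      def track(measurements, dt=1.0, meas_var=4.0, accel_var=0.1):
          F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
          H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
          Q = accel_var * np.eye(4)
          R = meas_var * np.eye(2)
          x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
          P = np.eye(4) * 100.0
          states = []
          for z in measurements:
              x = F @ x                      # predict
              P = F @ P @ F.T + Q
              S = H @ P @ H.T + R            # update with the new measurement
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ (np.asarray(z, float) - H @ x)
              P = (np.eye(4) - K @ H) @ P
              states.append(x.copy())
          return np.array(states)

      print(track([(10, 10), (12, 11), (14, 12), (16, 13)])[-1])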

  17. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
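
    The coarse pre-registration stage (SIFT matching plus an affine model) can be sketched with OpenCV as below; the fine piecewise-linear TIN stage is omitted, and the ratio-test threshold is an illustrative assumption rather than a value from the paper.

      # Coarse registration: SIFT matches, ratio test, RANSAC affine, warp.
      import cv2
      import numpy as np

      def coarse_register(input_img, reference_img):
          sift = cv2.SIFT_create()
          kp_in, des_in = sift.detectAndCompute(input_img, None)
          kp_ref, des_ref = sift.detectAndCompute(reference_img, None)
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          matches = matcher.knnMatch(des_in, des_ref, k=2)
          good = [m[0] for m in matches
                  if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
          src = np.float32([kp_in[m.queryIdx].pt for m in good])
          dst = np.float32([kp_ref[m.trainIdx].pt for m in good])
          A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
          h, w = reference_img.shape[:2]
          return cv2.warpAffine(input_img, A, (w, h))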

  18. Center determination for trailed sources in astronomical observation images

    NASA Astrophysics Data System (ADS)

    Du, Jun Ju; Hu, Shao Ming; Chen, Xu; Guo, Di Fu

    2014-11-01

    Images with trailed sources can be obtained when observing near-Earth objects, such as small asteroids, space debris, major planets and their satellites, whether the telescope tracks at the sidereal rate or at the speed of the target. The low centering accuracy of these trailed sources is one of the most important sources of astrometric uncertainty, but how to determine the central positions of trailed sources accurately remains a significant challenge for image processing techniques, especially in the study of faint or fast-moving objects. According to the conditions of the one-meter telescope at Weihai Observatory of Shandong University, the moment method and point-spread-function (PSF) fitting were chosen to develop the image processing pipeline for space debris. The principles and implementations of both methods are introduced in this paper, and simulated images containing trailed sources are analyzed with each technique. The results show that the two methods are comparable in obtaining accurate central positions of trailed sources when the signal-to-noise ratio (SNR) is high, but the moment method tends to fail for objects with low SNR. Compared with the moment method, PSF fitting is more robust and versatile; however, it is quite time-consuming. Therefore, if there are enough bright stars in the field, or high astrometric accuracy is not necessary, the moment method is sufficient. Otherwise, a combination of the moment method and PSF fitting is recommended.
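
    The moment method mentioned here is essentially the intensity-weighted centroid of a background-subtracted cutout containing the trail; a minimal sketch follows (the PSF-fitting alternative is not sketched).

      # Intensity-weighted centroid ("moment" center) of a trailed-source cutout.
      import numpy as np

      def moment_center(cutout):
          data = cutout.astype(float) - np.median(cutout)   # crude background removal
          data = np.clip(data, 0.0, None)
          yy, xx = np.indices(data.shape)
          total = data.sum()
          return (xx * data).sum() / total, (yy * data).sum() / total

      cutout = np.random.normal(100.0, 3.0, (40, 40))
      cutout[18:22, 10:30] += 80.0             # a horizontal trail
      print(moment_center(cutout))             # approx (19.5, 19.5) in (x, y)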

  19. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.

  20. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  1. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis.

  2. Feature extraction from multiple data sources using genetic programming

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.

    2002-08-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  3. Method for localizing and isolating an errant process step

    DOEpatents

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.

    2003-01-01

    A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.

  4. Iplt--image processing library and toolkit for the electron microscopy community.

    PubMed

    Philippsen, Ansgar; Schenk, Andreas D; Stahlberg, Henning; Engel, Andreas

    2003-01-01

    We present the foundation for establishing a modular, collaborative, integrated, open-source architecture for image processing of electron microscopy images, named iplt. It is designed around object oriented paradigms and implemented using the programming languages C++ and Python. In many aspects it deviates from classical image processing approaches. This paper intends to motivate developers within the community to participate in this on-going project. The iplt homepage can be found at http://www.iplt.org.

  5. Optimizing the design of vertical seismic profiling (VSP) for imaging fracture zones over hardrock basement geothermal environments

    NASA Astrophysics Data System (ADS)

    Reiser, Fabienne; Schmelzbach, Cedric; Maurer, Hansruedi; Greenhalgh, Stewart; Hellwig, Olaf

    2017-04-01

    A primary focus of geothermal seismic imaging is to map dipping faults and fracture zones that control rock permeability and fluid flow. Vertical seismic profiling (VSP) is therefore a most valuable means to image the immediate surroundings of an existing borehole to guide, for example, the placing of new boreholes to optimize production from known faults and fractures. We simulated 2D and 3D acoustic synthetic seismic data and processed them through to pre-stack depth migration to optimize VSP survey layouts for mapping moderately to steeply dipping fracture zones within possible basement geothermal reservoirs. Our VSP survey optimization procedure for sequentially selecting source locations to define the area where source points are best located for optimal imaging makes use of a cross-correlation statistic, by which a subset of migrated shot gathers is compared with a target or reference image derived from a comprehensive set of source gathers. In geothermal exploration at established sites, it is reasonable to assume that sufficient a priori information is available to construct such a target image. We generally obtained good results with a relatively small number of optimally chosen source positions distributed over an ideal source location area for different fracture zone scenarios (different dips, azimuths, and distances from the surveying borehole). Adding further sources outside the optimal source area did not necessarily improve the results, but rather introduced image distortions. We found that fracture zones located at borehole-receiver depths and laterally offset from the borehole by 300 m can be imaged reliably for a range of dips, but more source positions and larger offsets between the sources and the borehole are required for imaging steeply dipping interfaces. When such features cross-cut the borehole, they are particularly difficult to image. For fracture zones with different azimuths, 3D effects are observed: far-offset source positions contribute less to the image quality as the fracture zone azimuth increases. Our optimization methodology is best suited for designing future field surveys with a favorable benefit-cost ratio in areas with significant a priori knowledge. Moreover, our optimization workflow is valuable for selecting useful subsets of acquired data for optimal target-oriented processing.
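
    The sequential selection idea can be sketched as a greedy loop that adds the source whose migrated image, stacked with the sources already chosen, best matches the target image under a normalized cross-correlation statistic. The inputs (a dictionary of per-source migrated images and a reference image) are hypothetical stand-ins for the paper's synthetic data, not the authors' code.

        import numpy as np

        def ncc(a, b):
            """Zero-mean normalized cross-correlation of two images."""
            a = a - a.mean(); b = b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def select_sources(migrated, target, n_select):
            """Greedily pick source positions whose stacked migration best matches the target.

            migrated: dict mapping source id -> 2D migrated image (same shape as target).
            """
            chosen, stack = [], np.zeros_like(target, dtype=float)
            for _ in range(n_select):
                best = max((s for s in migrated if s not in chosen),
                           key=lambda s: ncc(stack + migrated[s], target))
                chosen.append(best)
                stack += migrated[best]
            return chosen, stack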

  6. X-ray tomographic image magnification process, system and apparatus therefor

    DOEpatents

    Kinney, J.H.; Bonse, U.K.; Johnson, Q.C.; Nichols, M.C.; Saroyan, R.A.; Massey, W.N.; Nusshardt, R.

    1993-09-14

    A computerized three-dimensional x-ray tomographic microscopy system is disclosed, comprising: (a) source means for providing a source of parallel x-ray beams, (b) staging means for staging and sequentially rotating a sample to be positioned in the path of the (c) x-ray image magnifier means positioned in the path of the beams downstream from the sample, (d) detecting means for detecting the beams after being passed through and magnified by the image magnifier means, and (e) computing means for analyzing values received from the detecting means, and converting the values into three-dimensional representations. Also disclosed is a process for magnifying an x-ray image, and apparatus therefor. 25 figures.

  7. X-ray tomographic image magnification process, system and apparatus therefor

    DOEpatents

    Kinney, John H.; Bonse, Ulrich K.; Johnson, Quintin C.; Nichols, Monte C.; Saroyan, Ralph A.; Massey, Warren N.; Nusshardt, Rudolph

    1993-01-01

    A computerized three-dimensional x-ray tomographic microscopy system is disclosed, comprising: a) source means for providing a source of parallel x-ray beams, b) staging means for staging and sequentially rotating a sample to be positioned in the path of the c) x-ray image magnifier means positioned in the path of the beams downstream from the sample, d) detecting means for detecting the beams after being passed through and magnified by the image magnifier means, and e) computing means for analyzing values received from the detecting means, and converting the values into three-dimensional representations. Also disclosed is a process for magnifying an x-ray image, and apparatus therefor.

  8. A technique for automatically extracting useful field of view and central field of view images.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by applying it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of the practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
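
    A rough Python sketch of the extraction and uniformity calculation described above (the original work used MATLAB). The UFOV is approximated here by a simple threshold bounding box and the CFOV as the central 75% of the UFOV, and the NEMA 9-point smoothing step is omitted; all of these simplifications are assumptions for illustration only.

        import numpy as np

        def extract_ufov_cfov(flood, threshold_frac=0.1, cfov_frac=0.75):
            """Crop the useful FOV from a flood image and take the central FOV from it."""
            mask = flood > threshold_frac * flood.max()
            rows, cols = np.where(mask)
            ufov = flood[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
            h, w = ufov.shape
            dh, dw = int(h * (1 - cfov_frac) / 2), int(w * (1 - cfov_frac) / 2)
            cfov = ufov[dh:h - dh, dw:w - dw]
            return ufov, cfov

        def integral_uniformity(img):
            """Integral uniformity in percent: 100 * (max - min) / (max + min).
            NEMA also prescribes smoothing with a 9-point kernel beforehand,
            which is omitted in this sketch."""
            lo, hi = float(img.min()), float(img.max())
            return 100.0 * (hi - lo) / (hi + lo)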

  9. Blind source separation of ex-vivo aorta tissue multispectral images

    PubMed Central

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-01-01

    Blind source separation (BSS) methods aim to decompose a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is one example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: independent component analysis and non-negative matrix factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also yield the spectral absorbance of the main tissue components. These spectral signatures were compared against theoretical ones using correlation coefficients, which report values close to 0.9, a good indicator of the method's performance. The correlation coefficients also allow the concentration maps to be identified according to the evaluated chromophore. The results suggest that multi/hyperspectral systems, together with image processing techniques, are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
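
    A hedged sketch of the two unmixing steps using scikit-learn's FastICA and NMF on a multispectral cube reshaped to (pixels, bands); this is a generic illustration of the approach, not the authors' code. The recovered spectral signatures could then be compared against theoretical absorbance curves with correlation coefficients, as described in the abstract.

        import numpy as np
        from sklearn.decomposition import FastICA, NMF

        def unmix(cube, n_components=2):
            """Separate a multispectral cube (H, W, bands) into chromophore maps.

            Returns (maps, spectra) pairs for ICA and NMF: concentration maps of
            shape (n_components, H, W) and spectral signatures of shape
            (n_components, bands).
            """
            h, w, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)

            ica = FastICA(n_components=n_components, random_state=0)
            maps_ica = ica.fit_transform(X).T.reshape(n_components, h, w)
            spectra_ica = ica.mixing_.T            # (n_components, bands)

            nmf = NMF(n_components=n_components, init='nndsvd', max_iter=500)
            maps_nmf = nmf.fit_transform(np.clip(X, 0, None)).T.reshape(n_components, h, w)
            spectra_nmf = nmf.components_          # (n_components, bands)
            return (maps_ica, spectra_ica), (maps_nmf, spectra_nmf)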

  10. Photo-reconnaissance applications of computer processing of images.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1972-01-01

    Discussion of image processing techniques for enhancement and calibration of Jet Propulsion Laboratory imaging experiment pictures returned from NASA space vehicles such as Ranger, Mariner and Surveyor. Particular attention is given to data transmission, resolution vs. recognition, and color aspects of digital data processing. The effectiveness of these techniques in applications to images from a wide variety of sources is noted. It is anticipated that the use of computer processing for enhancement of imagery will increase as these techniques improve and become less costly in the future.

  11. LED-based endoscopic light source for spectral imaging

    NASA Astrophysics Data System (ADS)

    Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.

    2016-03-01

    Colorectal cancer has the third highest cancer death rate in the United States.1 The current screening for colorectal cancer is an endoscopic procedure using white light endoscopy (WLE). Multiple new methods are being tested to replace WLE, for example narrow band imaging and autofluorescence imaging.2 However, these methods do not meet the need for higher specificity or sensitivity. The goal of this project is to modify the presently used endoscope light source to house 16 narrow-wavelength LEDs for spectral imaging in real time while increasing sensitivity and specificity. The approach was to take an Olympus CLK-4 light source and replace the lamp and electronics with 16 LEDs and new circuitry, allowing control of the power and intensity of the LEDs. This required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source, and National Instruments hardware/software for computer control. The result was a successfully designed retrofit with all the new features. LED testing demonstrated the ability to control each wavelength's intensity, and the intensity measured over the voltage range provides the information needed to couple the camera for imaging. Overall the project was successful; the modifications to the light source added the controllable LEDs. This brings the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future goals are to connect the camera and test the imaging process.

  12. Microseismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-07-01

    At the heart of microseismic event measurements is the task of estimating the location of the microseismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces considerable nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
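
    The convolution-based, source-independent misfit that this class of methods relies on (the same idea underlies the companion record later in this list) can be sketched as follows: each modelled trace is convolved with an observed reference trace, each observed trace with the modelled reference trace, and the two are compared so the unknown source wavelet cancels. This is a conceptual illustration, not the authors' implementation.

        import numpy as np

        def source_independent_misfit(d_obs, d_syn, ref=0):
            """Convolution-based misfit that cancels the unknown source time function.

            d_obs, d_syn: arrays of shape (n_traces, n_samples).
            ref: index of the reference trace. For each trace i the residual is
                d_syn[i] * d_obs[ref] - d_obs[i] * d_syn[ref]   (* = convolution),
            which is insensitive to a common (unknown) source wavelet.
            """
            misfit = 0.0
            for i in range(d_obs.shape[0]):
                a = np.convolve(d_syn[i], d_obs[ref])
                b = np.convolve(d_obs[i], d_syn[ref])
                misfit += 0.5 * np.sum((a - b) ** 2)
            return misfit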

  13. Medical Imaging System

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The MD Image System, a true-color image processing system that serves as a diagnostic aid and tool for storage and distribution of images, was developed by Medical Image Management Systems, Huntsville, AL, as a "spinoff from a spinoff." The original spinoff, Geostar 8800, developed by Crystal Image Technologies, Huntsville, incorporates advanced UNIX versions of ELAS (developed by NASA's Earth Resources Laboratory for analysis of Landsat images) for general purpose image processing. The MD Image System is an application of this technology to a medical system that aids in the diagnosis of cancer, and can accept, store and analyze images from other sources such as Magnetic Resonance Imaging.

  14. Infrared and visible image fusion with the target marked based on multi-resolution visual attention mechanisms

    NASA Astrophysics Data System (ADS)

    Huang, Yadong; Gao, Kun; Gong, Chen; Han, Lu; Guo, Yue

    2016-03-01

    During traditional multi-resolution infrared and visible image fusion, a low-contrast target may be weakened and become inconspicuous because of opposite DN values in the source images. A novel target pseudo-color enhanced image fusion algorithm based on a modified attention model and the fast discrete curvelet transform is therefore proposed. The interesting target regions are extracted from the source images by introducing motion features obtained from the modified attention model, and the source images are fused in grayscale in the curvelet domain via rules based on the physical characteristics of the sensors. The final fused image is obtained by mapping the extracted targets into the grayscale result with an appropriate pseudo-color. The experiments show that the algorithm can highlight dim targets effectively and improve the SNR of the fused image.

  15. High power THz sources for nonlinear imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tekavec, Patrick F.; Kozlov, Vladimir G.

    2014-02-18

    Many biological and chemical compounds have unique absorption features in the THz (0.1 - 10 THz) region, making the use of THz waves attractive for imaging in defense, security, biomedical imaging, and monitoring of industrial processes. Unlike optical radiation, THz frequencies can pass through many substances such as paper, clothing, ceramic, etc. with little attenuation. The use of currently available THz systems is limited by the lack of high-power sources as well as sensitive detectors and detector arrays operating at room temperature. Here we present a novel, high power THz source based on intracavity downconversion of optical pulses. The source delivers 6 ps pulses at 1.5 THz, with an average power of >300 μW and peak powers >450 mW. We propose an imaging method based on frequency upconversion that is ideally suited to the narrow bandwidth and high peak powers produced by the source. By upconverting the THz image to the infrared, commercially available detectors can be used for real time imaging.

  16. High power THz sources for nonlinear imaging

    NASA Astrophysics Data System (ADS)

    Tekavec, Patrick F.; Kozlov, Vladimir G.

    2014-02-01

    Many biological and chemical compounds have unique absorption features in the THz (0.1 - 10 THz) region, making the use of THz waves attractive for imaging in defense, security, biomedical imaging, and monitoring of industrial processes. Unlike optical radiation, THz frequencies can pass through many substances such as paper, clothing, ceramic, etc. with little attenuation. The use of currently available THz systems is limited by the lack of high-power sources as well as sensitive detectors and detector arrays operating at room temperature. Here we present a novel, high power THz source based on intracavity downconversion of optical pulses. The source delivers 6 ps pulses at 1.5 THz, with an average power of >300 μW and peak powers >450 mW. We propose an imaging method based on frequency upconversion that is ideally suited to the narrow bandwidth and high peak powers produced by the source. By upconverting the THz image to the infrared, commercially available detectors can be used for real time imaging.

  17. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
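
    A conceptual numpy sketch of the PCA-based PSF subtraction underlying ADI post-processing: each frame of the cube is projected onto its leading principal components and the low-rank PSF model is subtracted, leaving residuals in which faint companions may appear. This is not VIP's actual API, and the frame derotation and stacking steps of a full ADI reduction are omitted.

        import numpy as np

        def pca_psf_subtract(cube, n_modes=5):
            """Subtract a low-rank PSF model from an ADI cube of shape (n_frames, ny, nx)."""
            n, ny, nx = cube.shape
            X = cube.reshape(n, -1).astype(float)
            X -= X.mean(axis=0)                     # remove the mean frame
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            model = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
            return (X - model).reshape(n, ny, nx)   # residual frames (before derotation)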

  18. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed that provide source code in Matlab which can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Rapid development of medical imaging tools with open-source libraries.

    PubMed

    Caban, Jesus J; Joshi, Alark; Nagy, Paul

    2007-11-01

    Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. What they offer are now considered standards in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.

  20. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results, as the variety of factors involved, some psychological, are in part unpredictable. The tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, may be interpreted to gain insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  1. High speed imaging of dynamic processes with a switched source x-ray CT system

    NASA Astrophysics Data System (ADS)

    Thompson, William M.; Lionheart, William R. B.; Morton, Edward J.; Cunningham, Mike; Luggar, Russell D.

    2015-05-01

    Conventional x-ray computed tomography (CT) scanners are limited in their scanning speed by the mechanical constraints of their rotating gantries and as such do not provide the necessary temporal resolution for imaging of fast-moving dynamic processes, such as moving fluid flows. The Real Time Tomography (RTT) system is a family of fast cone beam CT scanners which instead use multiple fixed discrete sources and complete rings of detectors in an offset geometry. We demonstrate the potential of this system for use in the imaging of such high speed dynamic processes and give results using simulated and real experimental data. The unusual scanning geometry results in some challenges in image reconstruction, which are overcome using algebraic iterative reconstruction techniques and explicit regularisation. Through the use of a simple temporal regularisation term and by optimising the source firing pattern, we show that temporal resolution of the system may be increased at the expense of spatial resolution, which may be advantageous in some situations. Results are given showing temporal resolution of approximately 500 µs with simulated data and 3 ms with real experimental data.
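
    The regularised algebraic reconstruction described above can be illustrated with a toy gradient-descent loop that reconstructs a sequence of frames, penalising the difference between consecutive frames to trade spatial for temporal resolution. The system matrix A here is a generic hypothetical projection operator, not the RTT offset geometry, and the solver is a crude stand-in for the iterative methods used in the paper.

        import numpy as np

        def reconstruct_sequence(A, sinograms, beta=0.1, n_iter=50, step=1e-3):
            """Reconstruct frames x_t by minimising
                0.5 * ||A x_t - b_t||^2 + 0.5 * beta * ||x_t - x_{t-1}||^2.

            A: (n_rays, n_pixels) system matrix; sinograms: (n_frames, n_rays).
            """
            n_frames = sinograms.shape[0]
            n_pix = A.shape[1]
            frames = np.zeros((n_frames, n_pix))
            for t in range(n_frames):
                x = frames[t - 1].copy() if t > 0 else np.zeros(n_pix)
                prev = frames[t - 1] if t > 0 else None
                for _ in range(n_iter):
                    grad = A.T @ (A @ x - sinograms[t])     # data misfit gradient
                    if prev is not None:
                        grad += beta * (x - prev)           # temporal regularisation
                    x -= step * grad
                frames[t] = x
            return frames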

  2. Implementation of cost-effective diffuse light source mechanism to reduce specular reflection and halo effects for resistor-image processing

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Sheng; Wang, Jeng-Yau

    2015-09-01

    The light source plays a significant role in acquiring a well-conditioned image of an object for subsequent image processing and pattern recognition. For objects with specular surfaces, reflections and halos in the acquired image increase the difficulty of information processing. Such a situation may be improved with a suitable diffuse light source. Consider reading resistors via computer vision: because of the resistor's specular reflective surface, the image suffers severely non-uniform luminous intensity, yielding a higher recognition error rate without a well-controlled light source. A measurement system is presented in this paper, consisting mainly of a digital microscope embedded in a replaceable diffuse cover, a ring-type LED embedded in a small pad carrying the resistor under evaluation, and Arduino microcontrollers connected to a PC. Several replaceable, cost-effective diffuse covers made from a paper bowl, cup and box lined inside with white paper are presented for reducing specular reflection and halo effects and are compared with a commercial diffuse dome. The ring-type LED can be flexibly configured for full or partial lighting depending on the application. For each self-made diffuse cover, a set of resistors with 4 or 5 color bands is captured via the digital microscope for the experiments. The signal-to-noise ratio from the segmented resistor image is used for performance evaluation, and the detected principal axis of the resistor body is used for the partial LED configuration to further improve the lighting condition. Experimental results confirm that the proposed mechanism can not only evaluate cost-effective diffuse light sources but can also be extended into an automatic recognition system for resistor reading.

  3. Automatic recognition of light source from color negative films using sorting classification techniques

    NASA Astrophysics Data System (ADS)

    Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi

    1995-08-01

    This paper proposes a simple and automatic method for recognizing the light source used with various color negative film brands by means of digital image processing. First, we stretch the image obtained from a negative based on standardized scaling factors, then extract the dominant color component among the red, green, and blue components of the stretched image. The dominant color component becomes the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives of individual film brands and from all brands with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for the automation of color quality control in color reproduction from color negative film in mass processing and printing machines.
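
    A loose illustration of the stretch-then-dominant-channel idea. The brand-standardised scaling factors used in the paper are not public, so a simple percentile stretch stands in for them here purely for illustration.

        import numpy as np

        def dominant_channel(negative_rgb, p_lo=1, p_hi=99):
            """Stretch each channel of a scanned negative and report the dominant one.

            negative_rgb: array of shape (H, W, 3). The percentile stretch below is
            an assumed stand-in for the paper's standardized scaling factors.
            """
            stretched = np.empty_like(negative_rgb, dtype=float)
            for c in range(3):
                lo, hi = np.percentile(negative_rgb[..., c], [p_lo, p_hi])
                stretched[..., c] = np.clip((negative_rgb[..., c] - lo) / (hi - lo + 1e-12), 0, 1)
            means = stretched.reshape(-1, 3).mean(axis=0)
            return ['red', 'green', 'blue'][int(np.argmax(means))], means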

  4. Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com

    2014-10-06

    Due to advancements in low cost, easily available, yet powerful hardware, and the revolution in open source software, the urge to build newer, more interactive machines and electronic systems has increased manifold among engineers. To make systems more interactive, designers need easy to use sensor systems. Giving the boon of vision to machines has never been easy; though no longer impossible, it is still neither easy nor cheap. This work presents a low cost, moderate performance, programmable image processing engine. The engine is able to capture real time images, store them in permanent storage, and perform preprogrammed image processing operations on the captured images.

  5. Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ R.; Gorjian, Varoujan; Rebull, Luisa M.; Masci, Frank J.; Fowler, John W.; Helou, George; Kulkarni, Shrinivas R.; Law, Nicholas M.

    2012-07-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It is a graphical user interface (GUI) designed to allow the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. The finely tuned layout of the GUI, along with judicious use of color-coding and alerting, is intended to give maximal user utility and convenience. Simply mouse-clicking on a source in the displayed image will instantly draw a circular or elliptical aperture and sky annulus around the source and will compute the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs with just the push of a button, including image histogram, x and y aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has many functions for customizing the calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, which is accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
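
    The core calculation APT performs interactively (sum the counts inside a circular aperture and subtract a local sky level estimated from an annulus) can be written in a few lines of numpy. This is a minimal sketch of the general technique, not APT's internals; the radii and the simple sky-noise uncertainty estimate are illustrative choices.

        import numpy as np

        def aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
            """Circular-aperture photometry with a median-sky annulus."""
            yy, xx = np.indices(image.shape)
            r = np.hypot(xx - x0, yy - y0)
            ap = r <= r_ap                          # aperture pixels
            sky = (r >= r_in) & (r <= r_out)        # sky annulus pixels
            sky_level = np.median(image[sky])
            sky_sigma = np.std(image[sky])
            flux = image[ap].sum() - sky_level * ap.sum()
            err = sky_sigma * np.sqrt(ap.sum())     # sky noise scaled by aperture area
            return flux, err, sky_level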

  6. Patterning optimization for 55nm design rule DRAM/flash memory using production-ready customized illuminations

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith

    2005-11-01

    Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components of a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by current imaging tools when standard illumination sources are used. Illumination optimization is a process in which the source shape is varied, in both profile and intensity distribution, to enhance the final image contrast compared with non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form" in nature; they can be too complex to be directly applicable for production and lack the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all the necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include the minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades are used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by NILS-based metrics, we show that the optimal, production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other recent imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.

  7. Zero source insertion technique to account for undersampling in GPR imaging

    DOEpatents

    Chambers, David H; Mast, Jeffrey E; Paglieroni, David W

    2014-02-25

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  8. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded aperture processing is, for the first time, independent of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system is overcome. Based on this theoretical study, a simulation of penumbral imaging and image reconstruction was carried out and provided fairly good results. In a visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  9. ImgLib2--generic image processing in Java.

    PubMed

    Pietzsch, Tobias; Preibisch, Stephan; Tomancák, Pavel; Saalfeld, Stephan

    2012-11-15

    ImgLib2 is an open-source Java library for n-dimensional data representation and manipulation with focus on image processing. It aims at minimizing code duplication by cleanly separating pixel-algebra, data access and data representation in memory. Algorithms can be implemented for classes of pixel types and generic access patterns by which they become independent of the specific dimensionality, pixel type and data representation. ImgLib2 illustrates that an elegant high-level programming interface can be achieved without sacrificing performance. It provides efficient implementations of common data types, storage layouts and algorithms. It is the data model underlying ImageJ2, the KNIME Image Processing toolbox and an increasing number of Fiji-Plugins. ImgLib2 is licensed under BSD. Documentation and source code are available at http://imglib2.net and in a public repository at https://github.com/imagej/imglib. Supplementary data are available at Bioinformatics Online. saalfeld@mpi-cbg.de

  10. Apparatus for generating partially coherent radiation

    DOEpatents

    Naulleau, Patrick P.

    2004-09-28

    The effective coherence of an undulator beamline can be tailored to projection lithography requirements by using a simple single moving element and a simple stationary low-cost spherical mirror. The invention is particularly suited for use in an illuminator device for an optical image processing system requiring partially coherent illumination. The illuminator includes: (i) source of coherent or partially coherent radiation which has an intrinsic coherence that is higher than the desired coherence; (ii) a reflective surface that receives incident radiation from said source; (iii) means for moving the reflective surface through a desired range of angles in two dimensions wherein the rate of the motion is fast relative to integration time of said image processing system; and (iv) a condenser optic that re-images the moving reflective surface to the entrance plane of said image processing system, thereby, making the illumination spot in said entrance plane essentially stationary.

  11. Holographic illuminator for synchrotron-based projection lithography systems

    DOEpatents

    Naulleau, Patrick P.

    2005-08-09

    The effective coherence of a synchrotron beam line can be tailored to projection lithography requirements by employing a moving holographic diffuser and a stationary low-cost spherical mirror. The invention is particularly suited for use in an illuminator device for an optical image processing system requiring partially coherent illumination. The illuminator includes: (1) a synchrotron source of coherent or partially coherent radiation which has an intrinsic coherence that is higher than the desired coherence, (2) a holographic diffuser having a surface that receives incident radiation from said source, (3) means for translating the surface of the holographic diffuser in two dimensions along a plane that is parallel to the surface of the holographic diffuser wherein the rate of the motion is fast relative to integration time of said image processing system; and (4) a condenser optic that re-images the surface of the holographic diffuser to the entrance plane of said image processing system.

  12. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1984-06-22

    Keywords: white-light optical information processing, image deblurring, source encoding, signal sampling, coherence measurement, noise performance, pseudocolor encoding. Topics covered include broad spectral band color image deblurring, noise performance, and pseudocolor encoding with three primary colors; the source-encoding technique described is particularly suitable for deblurring linearly smeared color images.

  13. Micro-seismic imaging using a source function independent full waveform inversion method

    NASA Astrophysics Data System (ADS)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurements is the task of estimating the location of the micro-seismic sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces considerable nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.

  14. Hyperspectral Fluorescence and Reflectance Imaging Instrument

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey

    2008-01-01

    The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal plane detector array. The spectral range of the current imaging spectrometer is between 400 to 1,000 nm, and the wavelength resolution is approximately 3 nm. The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.

  15. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
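
    A short pipeline of the kind the paper describes, using scikit-image's restoration, segmentation and measurement modules. Synthetic binary blobs stand in for a real microtomography slice; the parameter values are illustrative, not recommendations from the paper.

        import numpy as np
        from skimage import data, filters, measure, restoration

        # Synthetic stand-in for a tomographic slice: binary blobs plus Gaussian noise.
        rng = np.random.default_rng(0)
        slice_ = data.binary_blobs(length=256, blob_size_fraction=0.1,
                                   volume_fraction=0.3).astype(float)
        slice_ += rng.normal(scale=0.3, size=slice_.shape)

        denoised = restoration.denoise_tv_chambolle(slice_, weight=0.2)   # restoration
        binary = denoised > filters.threshold_otsu(denoised)              # segmentation
        labels = measure.label(binary)                                    # connected components
        props = measure.regionprops(labels)                               # measurements
        print(len(props), 'regions; mean area =', np.mean([p.area for p in props]))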

  16. Development of Advanced Signal Processing and Source Imaging Methods for Superparamagnetic Relaxometry

    PubMed Central

    Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.

    2017-01-01

    Superparamagnetic relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early-stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION signal using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and the post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources, without initial guesses from the users. A regularization process was implemented to solve the ambiguity issue related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and the multi-start source imaging approach have been successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions for poor-SNR conditions corresponding to an SPMR detection sensitivity on the order of 1000 cells. We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
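
    A hedged sketch of step 4 above, fitting a decay model to a single sensor trace with scipy and reporting a reduced chi-square goodness of fit. The exponential-plus-offset model and the unit-noise-variance assumption are illustrative choices, not necessarily the models used by the authors.

        import numpy as np
        from scipy.optimize import curve_fit

        def exp_decay(t, amplitude, tau, offset):
            """Single exponential decay with a constant offset (assumed model)."""
            return amplitude * np.exp(-t / tau) + offset

        def fit_decay(t, signal):
            """Fit one decay model to a sensor trace and report reduced chi-square
            (computed here assuming unit noise variance)."""
            p0 = (signal[0] - signal[-1], (t[-1] - t[0]) / 3.0, signal[-1])
            popt, pcov = curve_fit(exp_decay, t, signal, p0=p0)
            resid = signal - exp_decay(t, *popt)
            red_chi2 = np.sum(resid ** 2) / (len(t) - len(popt))
            return popt, red_chi2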

  17. The Next Generation of HLA Image Products

    NASA Astrophysics Data System (ADS)

    Gaffney, N. I.; Casertano, S.; Ferguson, B.

    2012-09-01

    We present a re-engineered pipeline based on existing and improved algorithms, with the aim of improving processing quality, cross-instrument portability, data flow management, and software maintenance. The Hubble Legacy Archive (HLA) is a project to add value to the Hubble Space Telescope data archive by producing and delivering science-ready drizzled data products and source lists derived from these products. Initially, ACS, NICMOS, and WFPC2 data were combined using instrument-specific pipelines based on scripts developed to process the ACS GOODS data, together with a separate set of scripts to generate source extractor and DAOPhot source lists. The new pipeline, initially designed for WFC3 data, isolates instrument-specific processing and is easily extendable to other instruments and to generating wide-area mosaics. Significant improvements have been made in image combination using improved alignment, source detection, and background equalization routines. The pipeline integrates improved alignment procedures, a better noise model, and source list generation within a single code base. Wherever practical, PyRAF-based routines have been replaced with non-IRAF python libraries (e.g. NumPy and PyFITS). The data formats have been modified to handle better and more consistent propagation of information from individual exposures to the combined products. A new exposure layer stores the effective exposure time for each pixel on the sky, which is key to properly interpreting combined images built from diverse data that were not initially planned to be mosaicked. We have also worked to improve the validity of the metadata within the FITS headers of these products relative to standard IRAF/PyRAF processing. Keywords that pertain to individual exposures have been removed from the primary and extension headers and placed in a table extension for more direct and efficient perusal. This mechanism also allows more detailed information on the processing of individual images to be stored and propagated, providing a more hierarchical metadata storage system than key-value pair FITS headers provide. In this poster we discuss the changes to the pipeline processing and source list generation and the lessons learned, which may be applicable to other archive projects, as well as our new metadata curation and preservation process.

  18. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  19. Anisotropic microseismic focal mechanism inversion by waveform imaging matching

    NASA Astrophysics Data System (ADS)

    Wang, L.; Chang, X.; Wang, Y.; Xue, Z.

    2016-12-01

    The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a double couple (DC) source and an isotropic medium, which is usually not the case for induced seismic focal mechanism inversion. For induced seismic events, an inappropriate source or medium model in the inversion processing introduces ambiguity or strong simulation errors and seriously reduces the inversion effectiveness. First, the focal mechanism contains significant non-DC source components. In general, the source contains three components, DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the medium affects travel times and waveforms, generating inversion bias. Focal mechanism inversion is commonly formulated as moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method is applied to achieve moment tensor imaging; this method can construct imaging of the MT elements in 3D space without picking the first arrival, but the retrieved MT values are influenced by the imaging resolution. Alternatively, full waveform inversion is employed to retrieve the MT; in this method the source position and MT can be reconstructed simultaneously, but it requires extensive numerical computation, and the source position and MT also influence each other in the inversion process. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transversely isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also use a microseismic data set recorded in a surface acquisition to test our method.
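
    For background on the DC/ISO/CLVD decomposition mentioned above, a small sketch using one common convention (epsilon = -dev_min/|dev_max| over the deviatoric eigenvalues, with epsilon = 0 for a pure DC and |epsilon| = 0.5 for a pure CLVD). This is a standard textbook-style decomposition for interpreting an inverted moment tensor, not the WIM inversion itself, and other conventions exist.

        import numpy as np

        def decompose_moment_tensor(M):
            """Split a 3x3 symmetric moment tensor into ISO, DC and CLVD measures."""
            eigvals = np.linalg.eigvalsh(M)
            iso = eigvals.sum() / 3.0                       # isotropic part
            dev = eigvals - iso                             # deviatoric eigenvalues
            dev_sorted = dev[np.argsort(np.abs(dev))]       # sorted by magnitude
            eps = -dev_sorted[0] / abs(dev_sorted[-1])      # CLVD parameter
            clvd_pct = 200.0 * abs(eps)                     # percent of deviatoric part
            dc_pct = 100.0 - clvd_pct
            return {'iso': iso, 'epsilon': eps, 'dc_pct': dc_pct, 'clvd_pct': clvd_pct}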

  20. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, the arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide the development of CTP imaging technology for better quantification accuracy and lower radiation dose.
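
    A compact sketch of the truncated-SVD deconvolution commonly used in such CTP analyses: a convolution matrix is built from the arterial input function, and the singular-value cutoff plays the role of the regularization strength discussed above. This illustrates the general technique, not the authors' cascaded-systems analysis code, and the threshold value is an arbitrary example.

        import numpy as np

        def svd_deconvolve(tissue_curve, aif, dt, threshold=0.2):
            """Recover a scaled residue function R(t) from a tissue time-attenuation
            curve and the arterial input function (AIF) by truncated-SVD deconvolution.
            `threshold` is the fraction of the largest singular value kept;
            CBF is proportional to max(R)."""
            n = len(aif)
            # Lower-triangular convolution matrix built from the AIF.
            A = np.zeros((n, n))
            for i in range(n):
                A[i, :i + 1] = aif[i::-1] * dt
            U, s, Vt = np.linalg.svd(A)
            s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)
            R = Vt.T @ (s_inv * (U.T @ tissue_curve))
            return R, R.max()   # residue function and a CBF-proportional value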

  1. The Impact of a New Speckle Holography Analysis on the Galactic Center Orbits Initiative

    NASA Astrophysics Data System (ADS)

    Mangian, John; Ghez, Andrea; Gautam, Abhimat; Gallego, Laly; Schödel, Rainer; Lu, Jessica; Chen, Zhuo; UCLA Galactic Center Group; W.M. Keck Observatory Staff

    2018-01-01

    The Galactic Center Orbits Initiative has used two decades of high angular resolution imaging data from the W. M. Keck Observatory to make astrometric measurements of stellar motion around our Galaxy's central supermassive black hole. We present an analysis of a new approach to ten years of speckle imaging data (1995 - 2005) that has been processed with a new holography analysis. This analysis has (1) improved the image quality near the edge of the combined speckle frame and (2) increased the depth of the images and therefore the number of sources detected throughout the entire image. By directly comparing the two holography analyses, we find a 41% increase in total detected sources and an 81% increase in sources farther than 3" from the central black hole (Sgr A*). Further, we find a 49% increase in sources with K-band magnitude fainter than the old holography limiting magnitude, due to the reduction of the light halos surrounding bright sources.

  2. Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Liao, H. M.

    2015-08-01

    A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that measurements can be made directly on the images. In order to properly calculate positioning, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate images, coordinates and camera positions; however, it is very expensive, and users cannot use the results immediately because the position information is not embedded in the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can then calculate positioning with the open source software OpenCV. Finally, we use the open source panorama browser Panini and integrate all of these into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.

  3. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  4. Free and open source software for the manipulation of digital images.

    PubMed

    Solomon, Robert W

    2009-06-01

    Free and open source software is a type of software that is nearly as powerful as commercial software but is freely downloadable. This software can do almost everything that the expensive programs can. GIMP (GNU Image Manipulation Program) is a free program comparable to Photoshop, and versions are available for Windows, Macintosh, and Linux platforms. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality software to achieve the goals of image processing and document creation, because free and open source software is available for the user to download at will.

  5. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
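
    As a point of reference for the two pixel-level baselines mentioned above (averaging and PCA), a minimal sketch follows, assuming the visible and infrared images are already co-registered and of equal size; it is not the MSSF algorithm.

        import numpy as np

        def average_fusion(vis, ir):
            """Pixel-level baseline: simple mean of the two co-registered bands."""
            return 0.5 * (vis.astype(float) + ir.astype(float))

        def pca_fusion(vis, ir):
            """Pixel-level PCA baseline: weight each band by its loading on the
            first principal component of the two-band pixel distribution."""
            x = np.stack([vis.ravel(), ir.ravel()]).astype(float)
            cov = np.cov(x)
            eigvals, eigvecs = np.linalg.eigh(cov)
            w = np.abs(eigvecs[:, np.argmax(eigvals)])
            w = w / w.sum()
            return w[0] * vis.astype(float) + w[1] * ir.astype(float)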

  6. Open source bioimage informatics for cell biology.

    PubMed

    Swedlow, Jason R; Eliceiri, Kevin W

    2009-11-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery.

  7. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods: Averaging and Principal Components Analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.

  8. An imaging system for PLIF/Mie measurements for a combusting flow

    NASA Technical Reports Server (NTRS)

    Wey, C. C.; Ghorashi, B.; Marek, C. J.; Wey, C.

    1990-01-01

    The equipment required to establish an imaging system can be divided into four parts: (1) the light source and beam shaping optics; (2) camera and recording; (3) image acquisition and processing; and (4) computer and output systems. A pulsed, Nd:YAG-pumped, frequency-doubled dye laser, which can freeze motion in the flowfield, is used as the illumination source. A set of lenses forms the laser beam into a sheet. The induced fluorescence is collected by a UV-enhanced lens and passes through a UV-enhanced microchannel plate intensifier which is optically coupled to a gated solid-state CCD camera. The output of the camera is simultaneously displayed on a monitor and recorded on either a laser videodisc set or a Super VHS VCR. The videodisc set is controlled by a minicomputer via a connection to its RS-232C interface terminals. The imaging system is connected to the host computer by a bus repeater and can be multiplexed between four video input sources. Sample images from a planar shear layer experiment are presented to show the processing capability of the imaging system with the host computer.

  9. In-vivo multi-nonlinear optical imaging of a living cell using a supercontinuum light source generated from a photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Kano, Hideaki; Hamaguchi, Hiro-O.

    2006-04-01

    A supercontinuum light source generated with a femtosecond Ti:Sapphire oscillator has been used to obtain both vibrational and two-photon excitation fluorescence (TPEF) images of a living cell simultaneously at different wavelengths. Owing to the ultrabroadband spectral profile of the supercontinuum, multiple vibrational resonances have been detected through the coherent anti-Stokes Raman scattering (CARS) process. In addition to the multiplex CARS process, multiple electronic states can be excited due to broadband electronic two-photon excitation using the supercontinuum, giving rise to a TPEF signal. Using a living yeast cell whose nucleus is labeled with green fluorescent protein (GFP), we have succeeded in visualizing organelles such as mitochondria, the septum, and the nucleus through the CARS and TPEF processes. The supercontinuum enables us to perform unique multi-nonlinear optical imaging through two different nonlinear optical processes.

  10. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    PubMed

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. RapidMiner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied to the detection of salient objects in Obstructive Nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  11. Data processing from lobster eye type optics

    NASA Astrophysics Data System (ADS)

    Nentvich, Ondrej; Stehlikova, Veronika; Urban, Martin; Hudec, Rene; Sieger, Ladislav

    2017-05-01

    Wolter I optics are commonly used for imaging in the X-ray spectrum. This system uses two reflections; at higher energies it is not very efficient, but it has very good optical resolution. Another type of optics, the Lobster Eye, also uses two reflections to focus rays in Schmidt's or Angel's arrangement. It is also possible to use Lobster Eye optics as two independent one-dimensional optics. This paper describes the advantages of one-dimensional and two-dimensional Lobster Eye optics in Schmidt's arrangement and the associated data processing, namely finding the number of sources in a wide field of view. Two-dimensional (2D) optics are suitable for detecting the number of point X-ray sources and their magnitudes, but long exposures are necessary because a 2D system has much lower transmissivity, due to the double reflection, compared to one-dimensional (1D) optics. Not only for this reason, two 1D optics are better suited to fainter sources. In this case, additional image processing is necessary to achieve a 2D image. This article describes an approach to image reconstruction and the advantages of two 1D optics without significant losses of transmissivity.
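
    One simple way to turn two orthogonal 1D profiles into a 2D source map, sketched below under the assumption of background-subtracted, thresholded profiles, is an outer product of the two module read-outs; this is an illustrative reconstruction, not the processing chain of the paper, and with several sources it produces ghost intersections that need further disambiguation.

        import numpy as np

        def combine_1d_profiles(profile_x, profile_y, threshold=0.5):
            """Combine two orthogonal 1D intensity profiles into a crude 2D source map.

            Each profile is background-subtracted and thresholded; the outer product
            marks every (x, y) cell where both 1D modules see signal. With several
            sources this produces ghost intersections that need further
            disambiguation (e.g. by time variability or flux matching).
            """
            px = np.clip(profile_x - np.median(profile_x), 0, None)
            py = np.clip(profile_y - np.median(profile_y), 0, None)
            px = np.where(px > threshold * px.max(), px, 0.0)
            py = np.where(py > threshold * py.max(), py, 0.0)
            return np.outer(py, px)   # rows = y, columns = x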

  12. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.
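
    For context, the conventional back-projection baseline that the proposed subarray method improves upon can be sketched as shift-and-stack beamforming. The code below is a generic illustration (the travel-time predictor, station list and waveform preprocessing are assumed inputs), not the sparse subevent inversion proposed here.

        import numpy as np

        def back_project(waveforms, travel_times, dt, grid_points, stations, window):
            """Shift-and-stack back-projection of array waveforms onto candidate
            source grid points.

            waveforms    : (n_sta, n_samp) array of filtered, normalised traces
            travel_times : callable(grid_point, station) -> predicted travel time in s
            dt           : sample interval in s
            grid_points  : list of candidate source locations
            stations     : list of station descriptors accepted by travel_times
            window       : (first_sample, last_sample) range of trial origin times
            Returns an (n_grid, n_origin) array of beam power.
            """
            n_sta, n_samp = waveforms.shape
            origins = np.arange(window[0], window[1])
            power = np.zeros((len(grid_points), len(origins)))
            for i, gp in enumerate(grid_points):
                shifts = np.array([int(round(travel_times(gp, st) / dt)) for st in stations])
                for j, t0 in enumerate(origins):
                    idx = t0 + shifts
                    valid = idx < n_samp
                    stack = waveforms[np.arange(n_sta)[valid], idx[valid]].sum()
                    power[i, j] = stack ** 2
            return power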

  13. Noise properties and task-based evaluation of diffraction-enhanced imaging

    PubMed Central

    Brankov, Jovan G.; Saiz-Herranz, Alejandro; Wernick, Miles N.

    2014-01-01

    Abstract. Diffraction-enhanced imaging (DEI) is an emerging x-ray imaging method that simultaneously yields x-ray attenuation and refraction images and holds great promise for soft-tissue imaging. The DEI has been mainly studied using synchrotron sources, but efforts have been made to transition the technology to more practical implementations using conventional x-ray sources. The main technical challenge of this transition lies in the relatively lower x-ray flux obtained from conventional sources, leading to photon-limited data contaminated by Poisson noise. Several issues that must be understood in order to design and optimize DEI imaging systems with respect to noise performance are addressed. Specifically, we: (a) develop equations describing the noise properties of DEI images, (b) derive the conditions under which the DEI algorithm is statistically optimal, (c) characterize the imaging performance that can be obtained as measured by task-based metrics, and (d) consider image-processing steps that may be employed to mitigate noise effects. PMID:26158056

  14. TRIIG - Time-lapse reproduction of images through interactive graphics. [digital processing of quality hard copy

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.

  15. ARTIP: Automated Radio Telescope Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  16. ELAS: A powerful, general purpose image processing package

    NASA Technical Reports Server (NTRS)

    Walters, David; Rickman, Douglas

    1991-01-01

    ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

  17. A functional magnetic resonance imaging investigation of short-term source and item memory for negative pictures.

    PubMed

    Mitchell, Karen J; Mather, Mara; Johnson, Marcia K; Raye, Carol L; Greene, Erich J

    2006-10-02

    We investigated the hypothesis that arousal recruits attention to item information, thereby disrupting working memory processes that help bind items to context. Using functional magnetic resonance imaging, we compared brain activity when participants remembered negative or neutral picture-location conjunctions (source memory) versus pictures only. Behaviorally, negative trials showed disruption of short-term source, but not picture, memory; long-term picture recognition memory was better for negative than for neutral pictures. Activity in areas involved in working memory and feature integration (precentral gyrus and its intersect with superior temporal gyrus) was attenuated on negative compared with neutral source trials relative to picture-only trials. Visual processing areas (middle occipital and lingual gyri) showed greater activity for negative than for neutral trials, especially on picture-only trials.

  18. Spatial Statistical Data Fusion for Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, Hai

    2010-01-01

    Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).

  19. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
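
    A toy illustration of coaddition phrased in the map/reduce style is given below; plain Python functions stand in for the Hadoop map and reduce steps, the images are assumed to be already registered to a common pixel grid, and the single-tile key is a simplification of the sky tiling an actual pipeline would use.

        import numpy as np

        def map_image(image_id, image, weight=1.0):
            """Map step: emit (tile key, weighted pixels) for one registered image.
            In Hadoop the key would route each sky tile to a reducer; here the
            whole frame is one tile for brevity."""
            yield "tile-0", (weight * image.astype(float),
                             weight * np.ones_like(image, dtype=float))

        def reduce_tile(key, values):
            """Reduce step: accumulate weighted sums and produce the coadded tile."""
            num, den = None, None
            for pix, w in values:
                num = pix if num is None else num + pix
                den = w if den is None else den + w
            return key, num / np.maximum(den, 1e-12)

        # Toy usage with three fake 4x4 "exposures" of the same field.
        frames = [np.random.poisson(100, (4, 4)) for _ in range(3)]
        emitted = [kv for i, f in enumerate(frames) for kv in map_image(i, f)]
        key = emitted[0][0]
        _, coadd = reduce_tile(key, [v for _, v in emitted])
        print(coadd)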

  20. Teaching Image Formation by Extended Light Sources: The Use of a Model Derived from the History of Science

    ERIC Educational Resources Information Center

    Dedes, Christos; Ravanis, Konstantinos

    2009-01-01

    This research, carried out in Greece on pupils aged 12-16, focuses on the transformation of their representations concerning light emission and image formation by extended light sources. The instructive process was carried out in two stages, each one having a different, distinct target set. During the first stage, the appropriate conflict…

  1. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  2. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
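
    The core of such an end-to-end calibration can be sketched as fitting the measured integrated frame signal against the known brightness of the artificial star and then inverting that fit for the science frames. The snippet below is a schematic illustration with assumed array inputs, not the custom software described above.

        import numpy as np

        def fit_response_curve(known_brightness, measured_signal, deg=5):
            """Fit measured integrated signal vs. known source brightness (both 1D
            arrays from the artificial-star calibration sequence) with a polynomial
            in log-brightness. Returns a function mapping measured signal -> brightness."""
            logb = np.log10(known_brightness)
            coeffs = np.polyfit(measured_signal, logb, deg)   # signal -> log brightness
            return lambda s: 10.0 ** np.polyval(coeffs, s)

        # Usage (with hypothetical data): calibrate, then photometer a target frame by frame.
        # brightness_cal, signal_cal come from the variable artificial star;
        # signal_obj is the integrated signal of the target in each science frame.
        # to_brightness = fit_response_curve(brightness_cal, signal_cal)
        # est_brightness = to_brightness(signal_obj)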

  3. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  4. Astronomy In The Cloud: Using Mapreduce For Image Coaddition

    NASA Astrophysics Data System (ADS)

    Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-01-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these datastreams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.

  5. Astronomy in the Cloud: Using MapReduce for Image Co-Addition

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-03-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.

  6. Electrophysiological evidence for separation between human face and non-face object processing only in the right hemisphere.

    PubMed

    Niina, Megumi; Okamura, Jun-ya; Wang, Gang

    2015-10-01

    Scalp event-related potential (ERP) studies have demonstrated larger N170 amplitudes when subjects view faces compared to items from object categories. Extensive attempts have been made to clarify face selectivity and hemispheric dominance for face processing. The purpose of this study was to investigate hemispheric differences in N170s activated by human faces and non-face objects, as well as the extent of overlap of their sources. ERP was recorded from 20 subjects while they viewed human face and non-face images. N170s obtained during the presentation of human faces appeared earlier and with larger amplitude than for other category images. Further source analysis with a two-dipole model revealed that the locations of face and object processing largely overlapped in the left hemisphere. Conversely, the source for face processing in the right hemisphere located more anterior than the source for object processing. The results suggest that the neuronal circuits for face and object processing are largely shared in the left hemisphere, with more distinct circuits in the right hemisphere. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, that manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.

  18. Assessment on the performance of electrode arrays using image processing technique

    NASA Astrophysics Data System (ADS)

    Usman, N.; Khiruddin, A.; Nawawi, Mohd

    2017-08-01

    Interpreting an inverted resistivity section is time consuming and tedious, and requires other sources of information to be geologically relevant. An image processing technique was used to perform post-inversion processing, which makes geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing. The data sets were clipped and merged together in order to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth than the Wenner-Schlumberger and pole-dipole arrays. Image processing serves as a good post-inversion tool in geophysical data processing.

  9. "But I Like My Body": Positive body image characteristics and a holistic model for young-adult women.

    PubMed

    Wood-Barcalow, Nichole L; Tylka, Tracy L; Augustus-Horvath, Casey L

    2010-03-01

    Extant body image research has provided a rich understanding of negative body image but a rather underdeveloped depiction of positive body image. Thus, this study used Grounded Theory to analyze interviews from 15 college women classified as having positive body image and five body image experts. Many characteristics of positive body image emerged, including appreciating the unique beauty and functionality of their body, filtering information (e.g., appearance commentary, media ideals) in a body-protective manner, defining beauty broadly, and highlighting their body's assets while minimizing perceived imperfections. A holistic model emerged: when women processed mostly positive and rejected negative source information, their body investment decreased and body evaluation became more positive, illustrating the fluidity of body image. Women reciprocally influenced these sources (e.g., mentoring others to love their bodies, surrounding themselves with others who promote body acceptance, taking care of their health), which, in turn, promoted increased positive source information. Copyright 2010. Published by Elsevier Ltd.

  10. Image fusion

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.

  11. Open source bioimage informatics for cell biology

    PubMed Central

    Swedlow, Jason R.; Eliceiri, Kevin W.

    2009-01-01

    Significant technical advances in imaging, molecular biology and genomics have fueled a revolution in cell biology, in that the molecular and structural processes of the cell are now visualized and measured routinely. Driving much of this recent development has been the advent of computational tools for the acquisition, visualization, analysis and dissemination of these datasets. These tools collectively make up a new subfield of computational biology called bioimage informatics, which is facilitated by open source approaches. We discuss why open source tools for image informatics in cell biology are needed, some of the key general attributes of what make an open source imaging application successful, and point to opportunities for further operability that should greatly accelerate future cell biology discovery. PMID:19833518

  12. A new hue capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies

    NASA Technical Reports Server (NTRS)

    Camci, C.; Kim, K.; Hippensteele, S. A.

    1992-01-01

    A new image processing based color capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies is presented. This method is highly applicable to the surfaces exposed to convective heating in gas turbine engines. It is shown that, in the single-crystal mode, many of the colors appearing on the heat transfer surface correlate strongly with the local temperature. A very accurate quantitative approach using an experimentally determined linear hue vs temperature relation is found to be possible. The new hue-capturing process is discussed in terms of the strength of the light source illuminating the heat transfer surface, the effect of the orientation of the illuminating source with respect to the surface, crystal layer uniformity, and the repeatability of the process. The present method is more advantageous than the multiple filter method because of its ability to generate many isotherms simultaneously from a single-crystal image at a high resolution in a very time-efficient manner.
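
    The hue-to-temperature step can be illustrated with a few lines of Python: convert each pixel to hue and apply the experimentally determined linear relation T = a + b*hue within its valid hue range. The coefficients and range below are placeholders for values that would come from a thermocouple calibration; this is a schematic sketch, not the authors' software.

        import colorsys
        import numpy as np

        def hue_to_temperature(rgb_image, a, b, hue_min, hue_max):
            """Map a liquid-crystal RGB image to temperature via a linear
            hue-temperature calibration T = a + b * hue.

            a, b              : coefficients from a calibration against thermocouple data
            hue_min / hue_max : hue range over which the linear fit is valid; pixels
                                outside it are returned as NaN.
            """
            rgb = rgb_image.astype(float) / 255.0
            h = np.empty(rgb.shape[:2])
            for i in range(rgb.shape[0]):
                for j in range(rgb.shape[1]):
                    h[i, j], _, _ = colorsys.rgb_to_hsv(*rgb[i, j])
            T = a + b * h
            T[(h < hue_min) | (h > hue_max)] = np.nan
            return T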

  13. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information.

    PubMed

    Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L

    2013-02-12

    Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.

  14. Unified Digital Image Display And Processing System

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.

    1981-11-01

    Our institution like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g. digital radiography, computerized tomography (CT) , nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.

  15. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a wealthy laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys, that is accessible by the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques of indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross matching algorithms. We monitored all query run times in long-term pipeline runs where we processed a subset of IPHAS data that have image source density peaks over 170,000 per field of view (500,000 deg^-2). Our analysis demonstrates that horizontal table partitions of declination widths of one-degree control the query run times. Usage of an index strategy where the partitions are densely sorted according to source declination yields another improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that for this logical database partitioning schema the limiting cadence the pipeline achieved with processing IPHAS data is 25 s.
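
    The declination-partitioning idea can be illustrated outside the database with a small sketch: bucket catalogued sources into one-degree declination zones so that each new detection is compared only against neighbouring zones. This is a schematic Python version of the concept, not the column-store implementation used by the pipeline.

        import math
        from collections import defaultdict

        def build_zones(catalogue, zone_height_deg=1.0):
            """Index catalogued sources (src_id, ra, dec in degrees) by declination zone."""
            zones = defaultdict(list)
            for src_id, ra, dec in catalogue:
                zones[int(math.floor(dec / zone_height_deg))].append((src_id, ra, dec))
            return zones

        def cross_match(ra, dec, zones, radius_deg=1.0 / 3600, zone_height_deg=1.0):
            """Return catalogue sources within radius_deg of (ra, dec), searching only
            the declination zones that can contain a match."""
            z = int(math.floor(dec / zone_height_deg))
            span = int(math.ceil(radius_deg / zone_height_deg)) + 1
            matches = []
            for zi in range(z - span, z + span + 1):
                for src_id, cra, cdec in zones.get(zi, []):
                    # small-angle separation with cos(dec) correction on RA
                    dra = (ra - cra) * math.cos(math.radians(dec))
                    ddec = dec - cdec
                    if math.hypot(dra, ddec) <= radius_deg:
                        matches.append(src_id)
            return matches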

  16. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    PubMed

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
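
    A generic re-implementation of the local-contrast idea is sketched below (threshold the local coefficient of variation of the phase contrast image); it is not the published PHANTAST code, the window size and threshold are illustrative, and the halo-correction step is omitted.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_contrast_threshold(image, window=25, contrast_thresh=0.05):
            """Segment cellular regions in a phase contrast image by thresholding
            the local coefficient of variation (std / mean) in a sliding window.

            Returns a boolean foreground mask; parameters would need tuning for
            a given microscope and magnification.
            """
            img = image.astype(float)
            mean = uniform_filter(img, window)
            mean_sq = uniform_filter(img * img, window)
            std = np.sqrt(np.clip(mean_sq - mean * mean, 0, None))
            contrast = std / np.maximum(mean, 1e-9)
            return contrast > contrast_thresh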

  17. Automated Method for the Rapid and Precise Estimation of Adherent Cell Culture Characteristics from Phase Contrast Microscopy Images

    PubMed Central

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-01-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521

  18. Lobster eye X-ray optics: Data processing from two 1D modules

    NASA Astrophysics Data System (ADS)

    Nentvich, O.; Urban, M.; Stehlikova, V.; Sieger, L.; Hudec, R.

    2017-07-01

    X-ray imaging is usually done with Wolter I telescopes. They are suitable for imaging a small part of the sky, not for all-sky monitoring. Such monitoring could be done with Lobster Eye optics, which can theoretically have a field of view of up to 360 deg. An all-sky monitoring system enables quick identification of a source and its direction. This paper describes the possibility of using two independent one-dimensional Lobster Eye modules for this purpose instead of Wolter I optics, and their post-processing into a 2D image. This arrangement allows scanning with less energy loss compared to Wolter I or two-dimensional Lobster Eye optics. It is especially suitable for very weak sources.

  19. A new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu

    2018-06-01

    One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array with each unit laser having independent control circuits. This system uses a point-to-point imaging process, which is realized using the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and the mathematical model of this system containing four equations is established. A simulated contrast experiment and two real contrast experiments which use the simplified setup without a laser array are performed. The final results demonstrate that unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of this system is analyzed under defocusing situations, and analytical results show that the system has good defocusing robustness and can be easily adjusted in real applications.

  20. A robust hidden Markov Gauss mixture vector quantizer for a noisy source.

    PubMed

    Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M

    2009-07-01

    Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.

  1. THE GINI COEFFICIENT AS A MORPHOLOGICAL MEASUREMENT OF STRONGLY LENSED GALAXIES IN THE IMAGE PLANE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florian, Michael K.; Li, Nan; Gladders, Michael D.

    2016-12-01

    Characterization of the morphology of strongly lensed galaxies is challenging because images of such galaxies are typically highly distorted. Lens modeling and source plane reconstruction is one approach that can provide reasonably undistorted images from which morphological measurements can be made, though at the expense of a highly spatially variable telescope point-spread function (PSF) when mapped back to the source plane. Unfortunately, modeling the lensing mass is a time- and resource-intensive process, and in many cases there are too few constraints to precisely model the lensing mass. If, however, useful morphological measurements could be made in the image plane rather than the source plane, it would bypass this issue and obviate the need for a source reconstruction process for some applications. We examine the use of the Gini coefficient as one such measurement. Because it depends on the cumulative distribution of the light of a galaxy, but not the relative spatial positions, the fact that surface brightness is conserved by lensing means that the Gini coefficient may be well preserved by strong gravitational lensing. Through simulations, we test the extent to which the Gini coefficient is conserved, including by effects due to PSF convolution and pixelization, to determine whether it is invariant enough under lensing to be used as a measurement of galaxy morphology that can be made in the image plane.
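
    For reference, the Gini coefficient of a galaxy's light distribution is typically computed from the sorted pixel fluxes, as in the commonly used Lotz et al. style formulation sketched below; the segmentation that selects which pixels belong to the galaxy is assumed to be given, and this is an illustration rather than the authors' analysis code.

        import numpy as np

        def gini_coefficient(pixel_fluxes):
            """Gini coefficient of a galaxy's pixel flux distribution.

            pixel_fluxes : 1D array of the pixel values assigned to the galaxy
                           (e.g. within a segmentation map). G = 0 for perfectly
                           uniform light, G -> 1 when the light is concentrated
                           in a few pixels.
            """
            x = np.sort(np.abs(np.asarray(pixel_fluxes, dtype=float)))
            n = x.size
            if n < 2 or x.mean() == 0:
                return 0.0
            i = np.arange(1, n + 1)
            return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))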

  2. The GINI coefficient as a morphological measurement of strongly lensed galaxies in the image plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florian, Michael K.; Li, Nan; Gladders, Michael D.

    2016-11-30

    Characterization of the morphology of strongly lensed galaxies is challenging because images of such galaxies are typically highly distorted. Lens modeling and source plane reconstruction is one approach that can provide reasonably undistorted images from which morphological measurements can be made, though at the expense of a highly spatially variable telescope point-spread function (PSF) when mapped back to the source plane. Unfortunately, modeling the lensing mass is a time- and resource-intensive process, and in many cases there are too few constraints to precisely model the lensing mass. If, however, useful morphological measurements could be made in the image plane rather than the source plane, it would bypass this issue and obviate the need for a source reconstruction process for some applications. We examine the use of the Gini coefficient as one such measurement. Because it depends on the cumulative distribution of the light of a galaxy, but not the relative spatial positions, the fact that surface brightness is conserved by lensing means that the Gini coefficient may be well preserved by strong gravitational lensing. Through simulations, we test the extent to which the Gini coefficient is conserved, including by effects due to PSF convolution and pixelization, to determine whether it is invariant enough under lensing to be used as a measurement of galaxy morphology that can be made in the image plane.

  3. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  4. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  5. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong

    2016-01-01

    During complex-path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, the existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to obtain high signal-to-noise ratio images against the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. The two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; the position and pose between the torch and the groove can then be obtained nearly at the same time. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates that our method can be applied to real-time narrow butt detection during high-speed welding processes. PMID:27649173

  6. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong

    2016-09-13

    During complex-path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, the existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to obtain high signal-to-noise ratio images against the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. The two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; the position and pose between the torch and the groove can then be obtained nearly at the same time. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates that our method can be applied to real-time narrow butt detection during high-speed welding processes.

  7. CellAnimation: an open source MATLAB framework for microscopy assays.

    PubMed

    Georgescu, Walter; Wikswo, John P; Quaranta, Vito

    2012-01-01

    Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays, such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that is best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip. Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. Contact: walter.georgescu@vanderbilt.edu. Supplementary data are available at Bioinformatics online.

  8. Multiscale image processing and antiscatter grids in digital radiography.

    PubMed

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  9. VizieR Online Data Catalog: Hubble Source Catalog (V1 and V2) (Whitmore+, 2016)

    NASA Astrophysics Data System (ADS)

    Whitmore, B. C.; Allam, S. S.; Budavari, T.; Casertano, S.; Downes, R. A.; Donaldson, T.; Fall, S. M.; Lubow, S. H.; Quick, L.; Strolger, L.-G.; Wallace, G.; White, R. L.

    2016-10-01

    The HSC v1 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR8 (data release 8). The crossmatching process involves adjusting the relative astrometry of overlapping images so as to minimize positional offsets between closely aligned sources in different images. After correction, the astrometric residuals of crossmatched sources are significantly reduced, to typically less than 10mas. The relative astrometry is supported by using Pan-STARRS, SDSS, and 2MASS as the astrometric backbone for initial corrections. In addition, the catalog includes source nondetections. The crossmatching algorithms and the properties of the initial (Beta 0.1) catalog are described in Budavari & Lubow (2012ApJ...761..188B). The HSC v2 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR9.1 (data release 9.1). The crossmatching process involves adjusting the relative astrometry of overlapping images so as to minimize positional offsets between closely aligned sources in different images. After correction, the astrometric residuals of crossmatched sources are significantly reduced, to typically less than 10mas. The relative astrometry is supported by using Pan-STARRS, SDSS, and 2MASS as the astrometric backbone for initial corrections. In addition, the catalog includes source nondetections. The crossmatching algorithms and the properties of the initial (Beta 0.1) catalog are described in Budavari & Lubow (2012ApJ...761..188B). Hubble Source Catalog Acknowledgement: Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESAC/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). (2 data files).

  10. The Stellar Imager (SI) Project: Resolving Stellar Surfaces, Interiors, and Magnetic Activity

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.; Schrijver, K.; Karovska, M.

    2007-01-01

    The Stellar Imager (SI) is a UV/Optical, space-based interferometer designed to enable 0.1 milli-arcsec (mas) spectral imaging of stellar surfaces and, via asteroseismology, of stellar interiors, as well as of the Universe in general. The ultra-sharp images of SI will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. The science of SI focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. Its prime goal is to enable long-term forecasting of solar activity and the space weather that it drives. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. In this paper we discuss the science goals, technology needs, and baseline design of the SI mission.

  11. SI: The Stellar Imager

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.; Schrijver, Carolus J.; Karovska, Margarita

    2006-01-01

    The ultra-sharp images of the Stellar Imager (SI) will revolutionize our view of many dynamic astrophysical processes: the 0.1 milliarcsec resolution of this deep-space telescope will transform point sources into extended sources, and simple snapshots into spellbinding evolving views. SI's science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. SI's prime goal is to enable long-term forecasting of solar activity and the space weather that it drives in support of the Living With a Star program in the Exploration Era, by imaging a sample of magnetically active stars with enough resolution to map their evolving dynamo patterns and their internal flows. By exploring the Universe at ultra-high resolution, SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magnetohydrodynamically controlled structures and processes in the Universe.

  12. Trends in optical coherence tomography applied to medical imaging

    NASA Astrophysics Data System (ADS)

    Podoleanu, Adrian G.

    2014-01-01

    The number of publications on optical coherence tomography (OCT) continues to double every three years. Traditionally applied to imaging the eye, OCT is now being extended to fields outside ophthalmology and optometry. Its widening applicability, progress in the core engine of the technology, and its impact on the development of novel optical sources make OCT a very active and rapidly evolving field. Trends in the development of specific devices and methods, such as optical sources, optical configurations and signal processing, will be presented. Encompassing studies on both configurations and signal processing themes, current research in Kent looks at combining spectral domain with time domain imaging for long axial range and simultaneous imaging at several depths. Results of the collaborative work of the Applied Optics Group in Kent with organisers of this conference will be presented, with reference to 3D monitoring of abfraction.

  13. Transmission ultrasonography. [time delay spectrometry for soft tissue transmission imaging

    NASA Technical Reports Server (NTRS)

    Heyser, R. C.; Le Croissette, D. H.

    1973-01-01

    Review of the results of the application of an advanced signal-processing technique, called time delay spectrometry, in obtaining soft tissue transmission images by transmission ultrasonography, both in vivo and in vitro. The presented results include amplitude ultrasound pictures and phase ultrasound pictures obtained by this technique. While amplitude ultrasonographs of tissue are closely analogous to X-ray pictures in that differential absorption is imaged, phase ultrasonographs represent an entirely new source of information, based on differential time of propagation, that is thus made available for detailed analysis.

  14. Processing digital images and calculation of beam emittance (pepper-pot method for the Krion source)

    NASA Astrophysics Data System (ADS)

    Alexandrov, V. S.; Donets, E. E.; Nyukhalova, E. V.; Kaminsky, A. K.; Sedykh, S. N.; Tuzikov, A. V.; Philippov, A. V.

    2016-12-01

    Programs, based on Wolfram Mathematica and Origin software, for the pre-processing of photographs of beam images on the mask are described. The angles of rotation around the axis and in the vertical plane are taken into account in the generation of the file with image coordinates. Results of the emittance calculation by the Pep_emit program, written in Visual Basic, using the generated file in test mode are presented.

  15. Open source software in a practical approach for post processing of radiologic images.

    PubMed

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The selected programs all meet basic requirements such as free availability, stand-alone operation, presence of a graphical user interface, ease of installation, and advanced features beyond simple image display. The capabilities of data import, data export, metadata handling, 2D viewing, 3D viewing, supported platforms, and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, and VR Render; OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  16. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to that of a clinical system and, backed by point-spread-function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
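
    To make the beamforming-to-B-mode terminology concrete, the sketch below shows the generic final steps of such a chain (envelope detection and log compression) in NumPy/SciPy. It is not SUPRA's implementation; the RF data layout and dynamic range are assumptions.

```python
# Generic last steps of a software ultrasound chain; not SUPRA's code.
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf, dynamic_range_db=60.0):
    """rf: beamformed radiofrequency data, shape (samples, scanlines)."""
    envelope = np.abs(hilbert(rf, axis=0))          # envelope detection
    envelope /= envelope.max() + 1e-12              # normalize
    bmode = 20.0 * np.log10(envelope + 1e-12)       # log compression (dB)
    return np.clip(bmode, -dynamic_range_db, 0.0)   # clip to display range
```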

  17. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
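
    The object-level parallelization described above can be pictured as one tumor per worker task. The sketch below uses Python's multiprocessing module as a stand-in; the feature function and the list of tumor volumes are hypothetical placeholders, not the QIFE MATLAB implementation.

```python
# Sketch of object-level parallelization (one tumor per task); placeholders only.
from multiprocessing import Pool

import numpy as np

def compute_features(tumor_volume):
    """Toy feature set for one segmented tumor volume (3-D NumPy array)."""
    inside = tumor_volume > 0
    return {"voxels": int(inside.sum()),
            "mean_intensity": float(tumor_volume[inside].mean())}

def run(tumor_volumes, n_cores=4):
    with Pool(processes=n_cores) as pool:
        return pool.map(compute_features, tumor_volumes)

if __name__ == "__main__":
    volumes = [np.random.rand(32, 32, 32) for _ in range(8)]   # placeholder data
    print(run(volumes))
```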

  18. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based imaging mass spectrometers and various other experiments in laser physics, physical chemistry, and surface science.

  19. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based imaging mass spectrometers and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  20. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based imaging mass spectrometers and various other experiments in laser physics, physical chemistry, and surface science.

  1. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Unmanned air vehicle remotely sensed imagery acquired at low altitude has the advantages of high resolution, easy shooting, and real-time access, and it has been widely used in mapping, target identification, and other fields in recent years. However, under these acquisition conditions the video images are unstable, the targets move fast, and the shooting background is complex, which makes the video difficult to process. In other fields, especially computer vision, research on video images is much more extensive and is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, including research purposes, data sources, and the pros and cons of the techniques. The paper then identifies the methods most suitable for low-altitude remote sensing video image processing.

  2. Development and Application of a Low Frequency Near-Field Interferometric-TOA 3D Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    Lyu, F.; Cummer, S. A.; Weinert, J. L.; McTague, L. E.; Solanki, R.; Barrett, J.

    2014-12-01

    Lightning processes radiate extremely wideband electromagnetic signals. Lightning images mapped by VHF interferometry and VHF time-of-arrival lightning mapping arrays enable us to follow the detailed in-cloud development of lightning throughout a flash, which cannot always be captured by cameras because of the shielding cloud. Because lightning radiates over such a wide bandwidth, multispectral lightning radio imaging is possible. Low frequency (LF) signals are often used for lightning detection, but usually only for ground point location or thunderstorm tracking. Some recent results have demonstrated LF 3D mapping of discrete lightning pulses, but imaging of continuous LF emissions has not been shown. In this work, we report a GPS-synchronized LF near-field interferometric-TOA 3D lightning mapping array applied to image the development of lightning flashes on the second time scale. Cross-correlation, as used in broadband interferometry, is applied in our system to find windowed arrival time differences with sub-microsecond time resolution. However, because the sources are in the near field of the array, time-of-arrival processing is used to find the source locations with a typical precision of 100 meters. We show that this system images the complete lightning flash structure with thousands of LF sources for extensive flashes. Importantly, this system is able to map both continuous emissions, such as dart leaders, and bursty or discrete emissions. Lightning stepped leader and dart leader propagation speeds are estimated at 0.56-2.5×10^5 m/s and 0.8-2.0×10^6 m/s, respectively, consistent with previous reports. In many aspects our LF images are remarkably similar to VHF lightning mapping array images, despite the factor of 1000 difference in frequency, which may suggest links between the LF and VHF emission during lightning processes.
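
    The windowed cross-correlation step can be sketched as follows for a pair of stations; the parabolic sub-sample refinement is a common choice and an assumption here, not necessarily the exact scheme used by the authors.

```python
# Arrival-time-difference estimate between two station waveforms (sketch).
import numpy as np

def arrival_time_difference(sig_a, sig_b, fs):
    """Return the delay of sig_b relative to sig_a in seconds (fs in Hz)."""
    n = len(sig_a) + len(sig_b) - 1
    xcorr = np.correlate(sig_b, sig_a, mode="full")
    k = np.argmax(xcorr)
    # Parabolic interpolation around the peak for sub-sample precision.
    if 0 < k < n - 1:
        y0, y1, y2 = xcorr[k - 1], xcorr[k], xcorr[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - (len(sig_a) - 1)) / fs
```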

  3. Comparison of Open Source Compression Algorithms on Vhr Remote Sensing Images for Efficient Storage Hierarchy

    NASA Astrophysics Data System (ADS)

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.

    2016-06-01

    The high resolution of modern satellite imagery brings with it a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps applied once the image is acquired, the file sizes increase even further, making the data harder to store and more time-consuming to transmit from one site to another; hence, compressing the raw data and the various levels of processed data is a necessity for archiving stations that need to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open source programs supporting the algorithms Lempel-Ziv-Welch (LZW), Lempel-Ziv-Markov chain Algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate 64, Prediction by Partial Matching (PPMd or PPM2), and Burrows-Wheeler Transform (BWT) were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites, with 1.5 m GSD, acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS), in order to observe the lossless compression performance of these algorithms over the sample datasets.
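
    A rough way to reproduce this kind of comparison on raw image bytes is sketched below, using Python standard-library codecs as stand-ins for some of the listed algorithms (Deflate via zlib, LZMA via lzma, and a BWT-based codec via bz2); the file path is a placeholder, and this is not the benchmark setup used in the paper.

```python
# Compare lossless compression ratios on raw image bytes (stand-in codecs).
import bz2
import lzma
import zlib
from pathlib import Path

def compression_ratios(path):
    raw = Path(path).read_bytes()
    compressed = {
        "deflate (zlib)": zlib.compress(raw, level=9),
        "lzma": lzma.compress(raw, preset=9),
        "bwt-based (bz2)": bz2.compress(raw, compresslevel=9),
    }
    return {name: len(raw) / len(blob) for name, blob in compressed.items()}

# print(compression_ratios("scene.tif"))  # hypothetical GeoTIFF path
```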

  4. Glow discharge sources for atomic and molecular analyses

    NASA Astrophysics Data System (ADS)

    Storey, Andrew Patrick

    Two types of glow discharges were used and characterized for chemical analysis. The flowing atmospheric pressure afterglow (FAPA) source, based on a helium glow discharge (GD), was utilized to analyze samples with molecular mass spectrometry. A second GD, operated at reduced pressure in argon, was employed to map the elemental composition of a solid surface with novel optical detection systems, enabling new applications and perspectives for GD emission spectrometry. Like many plasma-based ambient desorption-ionization sources being used around the world, the FAPA requires a supply of helium to operate effectively. With increased pressures on global helium supply and pricing, the use of an interrupted stream of helium for analysis was explored for vapor and solid samples. In addition to the mass spectra generated by the FAPA source, schlieren imaging and infrared thermography were employed to map the behavior of the source and its surroundings under the altered conditions. Additionally, a new annular microplasma variation of the FAPA source was developed and characterized. A spectroscopic imaging system that utilized an adjustable-tilt interference filter was used to map the elemental composition of a sample surface by glow discharge emission spectroscopy. This apparatus was compared to other GD imaging techniques for mapping elemental surface composition. The wide bandpass filter resulted in significant spectral interferences that could be partially overcome with chemometric data processing. Because time-resolved GD emission spectroscopy can provide fine depth-profiling measurements, a natural extension of GD imaging would be its application to three-dimensional characterization of samples. However, the simultaneous cathodic sputtering that occurs across the sample results in a sampling process that is not completely predictable. These issues are frequently encountered when laterally varied samples are explored with glow discharge imaging techniques. These insights are described with respect to their consequences for both imaging and conventional GD spectroscopic techniques.

  5. Application of Laser Plasma Sources of Soft X-rays and Extreme Ultraviolet (EUV) in Imaging, Processing Materials and Photoionization Studies

    NASA Astrophysics Data System (ADS)

    Fiedorowicz, H.; Bartnik, A.; Wachulak, P. W.; Jarocki, R.; Kostecki, J.; Szczurek, M.; Ahad, I. U.; Fok, T.; Szczurek, A.; Węgrzyński, Ł.

    In the paper we present new applications of laser plasma sources of soft X-rays and extreme ultraviolet (EUV) in various areas of plasma physics, nanotechnology and biomedical engineering. The sources are based on a gas puff target irradiated with nanosecond laser pulses from commercial Nd:YAG lasers, generating pulses with time duration from 1 to 10 ns and energies from 0.5 to 10 J at a 10 Hz repetition rate. The targets are produced with the use of a double valve system equipped with a special nozzle to form a double-stream gas puff target which allows for high conversion efficiency of laser energy into soft X-rays and EUV without degradation of the nozzle. The sources are equipped with various optical systems to collect soft X-ray and EUV radiation and form the radiation beam. New applications of these sources in imaging, including EUV tomography and soft X-ray microscopy, processing of materials and photoionization studies are presented.

  6. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.

  7. Reference software implementation for GIFTS ground data processing

    NASA Astrophysics Data System (ADS)

    Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.

    2006-08-01

    Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.

  8. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

    A growing number of tools now allow live recordings of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory to interpret quantitative imaging. To fulfill this need, we have developed an open source toolset for Fiji, BRET-Analyzer, allowing a systematic analysis from image processing to ratio quantification. We share this open source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset provides (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method for the image used as the denominator of the ratio, to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the ratio mean intensity and standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared with standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, resolving both small and large ensembles over time. For example, we followed and quantified, live, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
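
    The core ratio-image steps listed above (background subtraction, thresholding the denominator image, pixel-by-pixel division, and summary statistics) can be sketched in NumPy as below. This is an illustration, not the Fiji/ImageJ BRET-Analyzer code itself, and the simple threshold stands in for the toolset's composite thresholding method.

```python
# Ratio-image sketch: background subtraction, denominator threshold, division.
import numpy as np

def bret_ratio(acceptor, donor, background, threshold):
    """acceptor, donor: 2-D intensity images; ratio computed where donor is bright."""
    num = acceptor.astype(float) - background
    den = donor.astype(float) - background
    mask = den > threshold                      # approximate limits of the specimen
    ratio = np.full(den.shape, np.nan)
    ratio[mask] = num[mask] / den[mask]         # pixel-by-pixel division
    return ratio, np.nanmean(ratio), np.nanstd(ratio)
```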

  9. Environmental Characterization for Target Acquisition. Report 2. Analysis of Thermal and Visible Imagery

    DTIC Science & Technology

    1993-11-01

    [Abstract not available in the source record; the extracted text contains only OCR fragments of the report's table of contents (e.g., "Image Metrics", "Analysis Procedures") and of Appendix A image processing software source code listings.]

  10. Deep-towed high resolution seismic imaging II: Determination of P-wave velocity distribution

    NASA Astrophysics Data System (ADS)

    Marsset, B.; Ker, S.; Thomas, Y.; Colin, F.

    2018-02-01

    The acquisition of high resolution seismic data in deep waters requires the development of deep-towed seismic sources and receivers able to deal with the high hydrostatic pressure environment. The low frequency piezoelectric transducer of the SYSIF (SYstème Sismique Fond) deep-towed seismic device complies with the former requirement, taking advantage of the coupling of a mechanical resonance (Janus driver) and a fluid resonance (Helmholtz cavity) to produce a large frequency bandwidth acoustic signal (220-1050 Hz). The ability to perform deep-towed multichannel seismic imaging with SYSIF was demonstrated in 2014; however, the ability to determine the P-wave velocity distribution had not yet been achieved. P-wave velocity analysis relies on the ratio between the source-receiver offset range and the depth of the seismic reflectors, so towing the seismic source and receivers closer to the sea bed provides a better geometry for P-wave velocity determination. Yet technical issues related to the acoustic source directivity arise for this approach in the particular framework of piezoelectric sources, and a signal processing sequence is therefore added to the initial processing flow. Data acquisition took place during the GHASS (Gas Hydrates, fluid Activities and Sediment deformations in the western Black Sea) cruise in the Romanian waters of the Black Sea. The results of the imaging processing are presented for two seismic data sets acquired over gas hydrates and gas-bearing sediments. The improvement in the final seismic resolution demonstrates the validity of the velocity model.

  11. Time-stretch microscopy based on time-wavelength sequence reconstruction from wideband incoherent source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chi, E-mail: chizheung@gmail.com; Xu, Yiqing; Wei, Xiaoming

    2014-07-28

    Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband and coherent optical pulse source with high shot-to-shot stability has been mandated for time-wavelength mapping, the enabling process for ultrahigh speed wavelength-encoded image retrieval. From the practical point of view, exploring methods to relax the stringent requirements (e.g., temporal stability and coherence) on the source of time-stretch microscopy is thus of great value. In this paper, we demonstrate time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a scan rate in the MHz range and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to other high-speed optical real-time measurements, where wavelength acts as the information carrier.

  12. Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization

    NASA Astrophysics Data System (ADS)

    Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo

    2011-03-01

    We demonstrate real-time display of processed OCT images using multi-thread parallel computing on the quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data transfer rate between the CPU cores and the image data stored in RAM. A display rate of 29.9 frames/s for processed OCT data (FFT size of 4096 × 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for the OCT image and for a Doppler OCT image with a 4-time average are 23.8 ms and 91.4 ms, respectively.
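
    The per-A-line processing being parallelized here boils down to an FFT along each A-line; a vectorized NumPy sketch for one frame is shown below. The frame dimensions follow the figures quoted above, but the data are synthetic placeholders and this is not the multi-threaded implementation described in the paper.

```python
# Vectorized FFT of all A-lines in one OCT frame (synthetic placeholder data).
import numpy as np

fft_size, n_ascans = 4096, 500
frame = np.random.rand(n_ascans, fft_size)           # interferogram samples per A-line

frame -= frame.mean(axis=1, keepdims=True)           # remove the DC term
depth = np.abs(np.fft.fft(frame, n=fft_size, axis=1))[:, : fft_size // 2]
bscan_db = 20.0 * np.log10(depth + 1e-9)             # log-scaled B-scan for display
```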

  13. 3-D interactive visualisation tools for HI spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  14. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    PubMed

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
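
    The minimum-Shannon-entropy criterion mentioned above can be illustrated as follows: a well-focused migration image (one computed with an assumed occurrence time close to the true one) concentrates energy in few pixels and therefore has low entropy. Only the entropy measure is sketched below; the migrate function referenced in the comment is a hypothetical placeholder.

```python
# Shannon entropy of a (migrated) image, used as a focus criterion (sketch).
import numpy as np

def image_entropy(img):
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical usage: pick the candidate occurrence time giving the sharpest image.
# best_t0 = min(candidate_times, key=lambda t0: image_entropy(migrate(signals, t0)))
```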

  15. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    PubMed Central

    Yan, Gang; Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310

  16. Robust crop and weed segmentation under uncontrolled outdoor illumination.

    PubMed

    Jeon, Hong Y; Tian, Lei F; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection steps included normalized excessive green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an artificial neural network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot implementing machine vision captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm when processing 666 field images ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest to be weeds. However, the ANN identification rate for crop plants was improved to 95.1% by addressing the error sources in the algorithm. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination and to differentiate weeds from crop plants. Thus, the proposed machine vision and processing algorithm may be useful for outdoor applications, including plant-specific direct applications (PSDA).
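
    As an illustration of the colour-index step, the sketch below computes a common formulation of the excess green index from chromatic coordinates and derives a threshold with Otsu's method, which is used here as a generic stand-in for the paper's statistical threshold estimation; the RGB array is a placeholder field image.

```python
# Excess-green index and a generic Otsu threshold (stand-in, not the paper's code).
import numpy as np

def excess_green(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3). Returns the ExG index image."""
    total = rgb.sum(axis=2) + 1e-12
    r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coordinates
    return 2.0 * g - r - b

def otsu_threshold(img, bins=256):
    """Threshold maximizing between-class variance of the index histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best, thresh = 0.0, centers[0]
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best:
            best, thresh = var_between, centers[k]
    return thresh

# Hypothetical usage: exg = excess_green(field_rgb); mask = exg > otsu_threshold(exg)
```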

  17. Imaging photonic crystals using hemispherical digital condensers and phase-recovery techniques.

    PubMed

    Alotaibi, Maged; Skinner-Ramos, Sueli; Farooq, Hira; Alharbi, Nouf; Alghasham, Hawra; de Peralta, Luis Grave

    2018-05-10

    We describe experiments where Fourier ptychographic microscopy (FPM) and dual-space microscopy (DSM) are implemented for imaging photonic crystals using a hemispherical digital condenser (HDC). Phase-recovery imaging simulations show that both techniques should be able to image photonic crystals with a period below the Rayleigh resolution limit. However, after processing the experimental images using both phase-recovery algorithms, we found that DSM can, but FPM cannot, image periodic structures with a period below the diffraction limit. We studied the origin of this apparent contradiction between simulations and experiments, and we concluded that the occurrence of unwanted reflections in the HDC is the source of the apparent failure of FPM. We thereafter solved the problem of reflections by using a single-directional illumination source and showed that FPM can image photonic crystals with a period below the Rayleigh resolution limit.

  18. Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

    PubMed Central

    2013-01-01

    Background: Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results: We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions: Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499
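
    The tiling idea behind NDPI-Splitter (split a large image into smaller sections and skip nearly empty ones) can be sketched as below; this operates on an in-memory NumPy array rather than on NDPI whole-slide files, and the emptiness test is an assumption.

```python
# Split a large image array into tiles and skip nearly empty ones (sketch).
import numpy as np

def split_tiles(image, tile=2048, min_std=2.0):
    """Yield (row, col, tile_array) for tiles that are not essentially blank."""
    h, w = image.shape[:2]
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            patch = image[r:r + tile, c:c + tile]
            if patch.std() >= min_std:          # filter out empty (uniform) tiles
                yield r, c, patch
```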

  19. Nonuniformity correction of infrared cameras by reading radiance temperatures with a spatially nonhomogeneous radiation source

    NASA Astrophysics Data System (ADS)

    Gutschwager, Berndt; Hollandt, Jörg

    2017-01-01

    We present a novel method of nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPAs) over a wide optical spectral range by reading radiance temperatures and by applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and the calculation of radiance temperatures, it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and it only requires sufficient temporal stability of this distribution during the measurement process. In contrast, a previously presented method described the calculation of the NUC from readings of monitored radiance values. Both methods are based on the recording of several (at least three) images of a radiation source and a deliberate row and line shift of these subsequent images relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance temperature distribution and a thermal imager of a predefined nonuniform FPA responsivity is presented.

  20. Bright field segmentation tomography (BFST) for use as surface identification in stereomicroscopy

    NASA Astrophysics Data System (ADS)

    Thiesse, Jacqueline R.; Namati, Eman; de Ryk, Jessica; Hoffman, Eric A.; McLennan, Geoffrey

    2004-07-01

    Stereomicroscopy is an important image acquisition method because it provides a 3D image of an object when other microscopic techniques can only provide a 2D image. One challenge faced with this type of imaging is determining the top surface of a sample that has otherwise indistinguishable surface and planar characteristics. We have developed a system that creates oblique illumination and, in conjunction with image processing, allows the top surface to be viewed. The BFST consists of the Leica MZ12 stereomicroscope with a unique attached lighting source. The lighting source consists of eight light emitting diodes (LEDs) separated by 45-degree angles. Each LED in this system illuminates once per cycle with a 20-degree viewing angle, casting a shadow over the rest of the sample. Subsequently, eight segmented images are taken per cycle. After the images are captured, they are stacked through image addition to achieve the full field of view, and the surface is then easily identified. Image processing techniques, such as skeletonization, can be used for further enhancement and measurement. With the use of BFST, advances can be made in detecting surface features from metals to tissue samples, such as in the analytical assessment of pulmonary emphysema using the technique of mean linear intercept.
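
    The stacking step can be sketched as a simple image addition over the eight directionally lit frames of one cycle, followed by rescaling for display; the frame list and dtype handling below are assumptions, not the authors' acquisition code.

```python
# Combine the eight directionally lit frames of one cycle by image addition (sketch).
import numpy as np

def stack_cycle(frames):
    """frames: list of 8 grayscale images (one per LED), same shape."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for f in frames:
        acc += f                        # image addition across the illumination cycle
    acc -= acc.min()
    return (255.0 * acc / (acc.max() + 1e-12)).astype(np.uint8)   # rescale for display
```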

  1. Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.

    NASA Astrophysics Data System (ADS)

    Dodd, Stirling Scott

    1995-01-01

    Previously, a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s_0 and a_0 Lamb waves is vividly apparent in the images.

  2. Automated motion artifact removal for intravital microscopy, without a priori information.

    PubMed

    Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph

    2014-03-28

    Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas and dorsal window chamber.

  3. Automated motion artifact removal for intravital microscopy, without a priori information

    PubMed Central

    Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph

    2014-01-01

    Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas and dorsal window chamber. PMID:24676021

  4. A survey of GPU-based medical image computing techniques

    PubMed Central

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming

    2012-01-01

    Medical imaging currently plays a crucial role throughout clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  5. Real-time calibration-free C-scan images of the eye fundus using Master Slave swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.

    2015-03-01

    Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting signals from a chosen depth only while scanning the laser beam across the eye. Time domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. This tremendous advantage of parallel provision of data from numerous depths cannot be fully exploited with commodity multicore processors alone; the data processing required to generate images at multiple depths simultaneously exceeds their capacity. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time Ta = 1.6 s for a frame of 200×200 pixels). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Other, faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-sectional OCT images.

  6. Comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Sullivan, Malcolm N.; Chan, Kam Wai Clifford; Boyd, Robert W.

    2010-11-15

    We present a theoretical comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging. We first calculate the signal-to-noise ratio of each process in terms of its controllable experimental conditions. We show that a key distinction is that a thermal ghost image always resides on top of a large background; the fluctuations in this background constitute an intrinsic noise source for thermal ghost imaging. In contrast, there is a negligible intrinsic background to a quantum ghost image. However, for practical reasons involving achievable illumination levels, acquisition times for thermal ghost images are often much shorter than those for quantum ghost images. We provide quantitative predictions for the conditions under which each process provides superior performance. Our conclusion is that each process can provide useful functionality, although under complementary conditions.

  7. PI2GIS: processing image to geographical information systems, a learning tool for QGIS

    NASA Astrophysics Data System (ADS)

    Correia, R.; Teodoro, A.; Duarte, L.

    2017-10-01

    To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become usual to use image processing plugins to add new capabilities/functionalities to Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated into a GIS package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that a set of operations that are very useful in teaching remote sensing and image processing classes was lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.
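
    For reference, the two vegetation indices mentioned above are simple band arithmetic; a minimal NumPy sketch using the standard formulas, where the band variables are assumed to be co-registered reflectance arrays (e.g. Landsat 8 OLI blue, red and near-infrared bands):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-12)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the usual coefficient values."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L + 1e-12)
```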

  8. Social Cognitive Perspectives on the Development of Body Image in Adult Women

    ERIC Educational Resources Information Center

    Petersen, Kristin Lee

    2017-01-01

    The purpose of this study was to explore women's perceptions of the sources of information that led to the development of their body image. The problem this study addressed was that women receive messages contributing to negative body image throughout their lifetimes. When these messages are processed passively, especially when cultivated over time,…

  9. Morphological-transformation-based technique of edge detection and skeletonization of an image using a single spatial light modulator

    NASA Astrophysics Data System (ADS)

    Munshi, Soumika; Datta, A. K.

    2003-03-01

    A technique for optically detecting the edge and skeleton of an image by defining shift operations for morphological transformation is described. A (2 × 2) source array, which acts as the structuring element of the morphological operations, casts four angularly shifted optical projections of the input image. The resulting dilated image, when superimposed with the complementary input image, produces the edge image. For skeletonization, the source array casts four partially overlapping output images of the inverted (negated) input image, and the resultant image is recorded with a CCD camera. This overlapped eroded image is again eroded and then dilated, producing an opened image. The difference between the eroded and opened images is then computed, resulting in a thinner image. This procedure of obtaining a thinned image is iterated until the difference image becomes zero, while maintaining the connectivity conditions. The technique has been optically implemented using a single spatial light modulator and has the advantage of single-instruction parallel processing of the image. It has been tested for both binary and grey-level images.
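
    A digital analogue of the optical procedure can be sketched with binary morphology; this is an illustrative re-implementation with SciPy/scikit-image, not the optical setup itself, and the 3×3 structuring element stands in for the 2×2 source array used in the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.morphology import skeletonize

def morphological_edge(binary_img, struct=np.ones((3, 3), bool)):
    """External morphological gradient: dilate, then keep only the new pixels."""
    return binary_dilation(binary_img, structure=struct) & ~binary_img

def thin(binary_img):
    """Connectivity-preserving skeleton (iterative thinning handled by skimage)."""
    return skeletonize(binary_img)
```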

  10. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    To address the issue that fusion rules cannot be adjusted self-adaptively by the available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes this function with GSDA so as to obtain a higher-resolution RS image. The main points of the work are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•The article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.•The text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  11. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from the background through their different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the usual weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the ratio of the regional signal intensities of the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion result is closely related to the threshold set in this module. Instead of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: the endpoints and midpoint of the threshold search interval are set as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and the best threshold is obtained by comparing these minima. A series of image quality evaluations shows that the method improves the fusion result; moreover, it is effective not only for individual images but also for large numbers of images.
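
    A minimal digital sketch of the fusion rule described above (weighted average of the approximation sub-band, regional-energy selection for the detail sub-bands) using PyWavelets; the wavelet name, decomposition level and 3×3 window are assumptions for illustration, and the quadratic-interpolation threshold search is omitted.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                    # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):                 # detail sub-bands per level
        bands = []
        for a, b in zip(da, db):
            ea = uniform_filter(a * a, size=3)         # 3x3 regional signal energy
            eb = uniform_filter(b * b, size=3)
            bands.append(np.where(ea >= eb, a, b))     # keep the stronger region
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```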

  12. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated; the source contains a thermoelectric cooler (TEC), can be synchronized with the camera focusing, and has automatic light intensity adjustment, which ensures the image quality. The composition principle of the system is described in detail; on this basis, the beam collimation, the LD driving and temperature control of the near-infrared laser source, and the four-channel image processing and display are discussed. The system can be used for driver assistance, car BLIS, parking assistance and car alarm systems, both day and night.

  13. A back-illuminated megapixel CMOS image sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce

    2005-01-01

    In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three-transistor-per-pixel switched source-follower readout [1]. The imager also contains integrated timing, control and bias-generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager can be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation and was fabricated in a commercially available 0.5 µm triple-metal CMOS-imager-compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.

  14. Edge enhancement of color images using a digital micromirror device.

    PubMed

    Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A

    2012-06-01

    A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the image used as input. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment. The proposed method could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
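
    The underlying operation, combining the image with a slightly displaced negative replica so that uniform regions cancel and edges survive, can be sketched digitally in a few lines; the one-pixel displacement and the NumPy/SciPy implementation are illustrative assumptions, not the optical DMD setup.

```python
import numpy as np
from scipy.ndimage import shift

def edge_enhance(img, dx=1.0, dy=0.0):
    """Add a slightly displaced negative replica to the image; uniform regions
    cancel while edges perpendicular to the displacement are enhanced."""
    negative = img.max() - img
    return img + shift(negative, (dy, dx), order=1, mode="nearest")
```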

  15. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Although originally designed for the Spitzer Space Telescope mission, many of its functionalities are generally useful and can be applied to existing astronomical data and to new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  16. Stellar Imager

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth

    2007-01-01

    The Stellar Imager (SI) is one of NASA's "Vision Missions" - concepts for future, space-based, strategic missions that could enormously increase our capabilities for observing the Cosmos. SI is designed as a UV/optical interferometer which will enable 0.1 milli-arcsecond (mas) spectral imaging of stellar surfaces and, via asteroseismology, of stellar interiors and of the Universe in general. The ultra-sharp images of the Stellar Imager will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. SI, with a characteristic angular resolution of 0.1 milli-arcseconds at 2000 Angstroms, represents an advance in image detail of several hundred times over that provided by the Hubble Space Telescope. The Stellar Imager will zoom in on what today - with few exceptions - we know only as point sources, revealing processes never before seen, thus providing a tool as fundamental to astrophysics as the microscope is to the study of life on Earth. SI's science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. Its prime goal is to enable long-term forecasting of solar activity and the space weather that it drives, in support of the Living With a Star program in the Exploration Era. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. Stellar Imager is included as a "Flagship and Landmark Discovery Mission" in the 2005 Sun Solar System Connection (SSSC) Roadmap and as a candidate for a "Pathways to Life Observatory" in the Exploration of the Universe Division (EUD) Roadmap (May 2005), and as such is a candidate mission for the 2025-2030 timeframe. An artist's drawing of the current "baseline" concept for SI is presented.

  17. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time, which constitutes a first step towards automating mapping processes such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated on separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
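
    With OpenCV, the five SIFT parameters usually tuned are exposed directly on the detector constructor; the values below are placeholders rather than the optimized values reported in the paper, and the ratio-test threshold is likewise an assumption.

```python
import cv2

# Favour fewer, larger, more robust features (cf. the paper's emphasis on large features).
sift = cv2.SIFT_create(nfeatures=2000,         # cap on keypoints kept
                       nOctaveLayers=3,        # scales per octave
                       contrastThreshold=0.06, # discard weak, low-contrast features
                       edgeThreshold=8,        # discard edge-like responses
                       sigma=1.8)              # base Gaussian blur

def match_tie_points(img_a, img_b, ratio=0.7):
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    return [m for m, n in matches if m.distance < ratio * n.distance]
```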

  18. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    PubMed

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N × N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N × N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve performance in contrast and signal-to-noise ratio (SNR) similar to that of the SR image generated from a complete set of low-resolution images (CSR), using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 × 2) or 16 (4 × 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and SNR differences between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another, and that they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
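
    The basic idea of combining shifted low-resolution reconstructions can be illustrated by interleaving them onto a finer grid; the sketch below assumes a 2 × 2 set of images with known half-pixel shifts and is a simplified stand-in, not the authors' PET-specific SR algorithm.

```python
import numpy as np

def interleave_2x2(lr_images):
    """lr_images: dict mapping (row_shift, col_shift) in {0, 1} (half-pixel units)
    to low-resolution arrays of identical shape. Returns a 2x-denser image."""
    h, w = lr_images[(0, 0)].shape
    hr = np.zeros((2 * h, 2 * w), dtype=float)
    for (dr, dc), img in lr_images.items():
        hr[dr::2, dc::2] = img      # each shifted grid fills its own sub-lattice
    return hr
```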

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gwyn, Stephen D. J., E-mail: Stephen.Gwyn@nrc-cnrc.gc.ca

    This paper describes the image stacks and catalogs of the Canada-France-Hawaii Telescope Legacy Survey produced using the MegaPipe data pipeline at the Canadian Astronomy Data Centre. The Legacy Survey is divided into two parts. The Deep Survey consists of four fields each of 1 deg², with magnitude limits (50% completeness for point sources) of u = 27.5, g = 27.9, r = 27.7, i = 27.4, and z = 26.2. It contains 1.6 × 10⁶ sources. The Wide Survey consists of 150 deg² split over four fields, with magnitude limits of u = 26.0, g = 26.5, r = 25.9, i = 25.7, and z = 24.6. It contains 3 × 10⁷ sources. This paper describes the calibration, image stacking, and catalog generation process. The images and catalogs are available on the web through several interfaces: normal image and text file catalog downloads, a 'Google Sky' interface, an image cutout service, and a catalog database query service.

  20. Increasing the dynamic range of CMOS photodiode imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce R. (Inventor)

    2007-01-01

    A multiple-step reset process and circuit for resetting a voltage stored on a photodiode of an imaging device. A first stage of the reset occurs while the source and drain of a pixel source-follower transistor are held at ground potential and the photodiode and the gate of the pixel source-follower transistor are charged to an initial reset voltage whose potential is less than that of the supply voltage. A second stage of the reset occurs after the initial reset voltage is stored on the photodiode and the gate of the pixel source-follower transistor: the source and drain voltages of the pixel source-follower transistor are released from ground potential, allowing them to assume ordinary values above ground potential and resulting in a capacitive feed-through effect that increases the voltage on the photodiode to a value greater than the initial reset voltage.

  1. Development and validation of an open source quantification tool for DSC-MRI studies.

    PubMed

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need of paying for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold-standard was obtained (R² > 0.8 and values are within 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Influential sources affecting Bangkok adolescent body image perceptions.

    PubMed

    Thianthai, Chulanee

    2006-01-01

    The study of body image-related problems in non-Western countries is still very limited. Thus, this study aims to identify the main influential sources and show how they affect the body image perceptions of Bangkok adolescents. The researcher recruited 400 Thai male and female adolescents in Bangkok, from high school to freshman level and ranging from 16-19 years of age, to participate in this study. Survey questionnaires were distributed to every student and follow-up interviews were conducted with 40 students. The findings showed that there are eight main influential sources, ranked from most to least influential: magazines, television, peer group, family, fashion trends, the opposite gender, self-realization and health knowledge. As in studies conducted in Western countries, more than half of the total influence came from mass media and peer groups. Bangkok adolescents also internalized Western ideals of beauty through these mass media channels. Similar to studies conducted in the West, the processes by which these influential sources affect Bangkok adolescents' body image perceptions were alike, with the exception of the familial source. In conclusion, identifying the main influential sources and understanding how they affect adolescent body image perceptions can help prevent adolescents from having unhealthy views and taking risky measures toward their bodies. More studies conducted in non-Western countries are needed in order to build culturally sensitive programs catering to the body image problems occurring in adolescents within each particular society.

  3. A midas plugin to enable construction of reproducible web-based image processing pipelines

    PubMed Central

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A.; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline. PMID:24416016

  4. A midas plugin to enable construction of reproducible web-based image processing pipelines.

    PubMed

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  5. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aimed at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on the rapid estimate of P-wave magnitude, which generally contains large uncertainties and suffers from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent magnitude updates even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that we would theoretically have been able to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help to reduce false and missed warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  6. syris: a flexible and efficient framework for X-ray imaging experiments simulation.

    PubMed

    Faragó, Tomáš; Mikulík, Petr; Ershov, Alexey; Vogelgesang, Matthias; Hänschke, Daniel; Baumbach, Tilo

    2017-11-01

    An open-source framework for conducting a broad range of virtual X-ray imaging experiments, syris, is presented. The simulated wavefield created by a source propagates through an arbitrary number of objects until it reaches a detector. The objects in the light path and the source are time-dependent, which enables simulations of dynamic experiments, e.g. four-dimensional time-resolved tomography and laminography. The high-level interface of syris is written in Python and its modularity makes the framework very flexible. The computationally demanding parts behind this interface are implemented in OpenCL, which enables fast calculations on modern graphics processing units. The combination of flexibility and speed opens new possibilities for studying novel imaging methods and systematic search of optimal combinations of measurement conditions and data processing parameters. This can help to increase the success rates and efficiency of valuable synchrotron beam time. To demonstrate the capabilities of the framework, various experiments have been simulated and compared with real data. To show the use case of measurement and data processing parameter optimization based on simulation, a virtual counterpart of a high-speed radiography experiment was created and the simulated data were used to select a suitable motion estimation algorithm; one of its parameters was optimized in order to achieve the best motion estimation accuracy when applied on the real data. syris was also used to simulate tomographic data sets under various imaging conditions which impact the tomographic reconstruction accuracy, and it is shown how the accuracy may guide the selection of imaging conditions for particular use cases.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almeida, G. L.; Silvani, M. I.; Lopes, R. T.

    Two main parameters rule the performance of an image acquisition system, namely spatial resolution and contrast. For radiographic systems using cone beam arrangements, the farther the source, the better the resolution, but the contrast diminishes due to the lower statistics. A closer source would yield a higher contrast, but it would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal, large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object is comprised of a hull - or encased in one - possessing a shape that can be described in analytical geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source until its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam after its travel through the hull wall, as well as its reduction due to the square of the distance to the source and the angle at which it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, is recovered, allowing one to see details otherwise concealed by the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuation, have been generated, recorded for further comparison and afterwards processed to improve their contrast. The developed algorithm, which generates, processes and plots the images, has been written in Fortran 90. As the resulting images exhibit the expected improvements, it seems worthwhile to carry out further tests with actual experimental radiographs.
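
    The per-pixel correction described above multiplies each recorded intensity by factors undoing the wall attenuation, the inverse-square falloff and the oblique incidence; a schematic NumPy version, assuming the wall path length, source-to-pixel distance and incidence angle per pixel have already been computed from the analytical hull geometry (the function and argument names are illustrative):

```python
import numpy as np

def correction_map(path_len, distance, incidence_angle, mu, ref_distance):
    """path_len        : beam path length inside the hull wall per detector pixel [cm]
       distance        : source-to-pixel distance per pixel [cm]
       incidence_angle : angle between beam and detector normal per pixel [rad]
       mu              : linear attenuation coefficient of the hull material [1/cm]
       ref_distance    : reference distance used to normalise the 1/r^2 term [cm]"""
    return (np.exp(mu * path_len)                 # undo attenuation in the hull wall
            * (distance / ref_distance) ** 2      # undo inverse-square falloff
            / np.cos(incidence_angle))            # undo oblique incidence on the detector

# corrected_image = raw_image * correction_map(path_len, distance, angle, mu, ref_distance)
```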

  8. Sodium 3D COncentration MApping (COMA 3D) using 23Na and proton MRI

    NASA Astrophysics Data System (ADS)

    Truong, Milton L.; Harrington, Michael G.; Schepkin, Victor D.; Chekmenev, Eduard Y.

    2014-10-01

    Functional changes in sodium 3D MRI signals were converted into millimolar concentration changes using an open-source, fully automated MATLAB toolbox. These concentration changes are visualized via 3D sodium concentration maps, which are overlaid on conventional 3D proton images to provide high-resolution co-registration for easy correlation of functional changes with anatomical regions. Nearly 5000 concentration maps per hour were generated on a personal computer (ca. 2012) using 21.1 T 3D sodium MRI brain images of live rats with a spatial resolution of 0.8 × 0.8 × 0.8 mm³ and imaging matrices of 60 × 60 × 60. The produced concentration maps allowed non-invasive quantitative measurement of in vivo sodium concentration in the normal rat brain as a functional response to migraine-like conditions. The presented work can also be applied to sodium-associated changes in migraine, cancer, and other metabolic abnormalities that can be sensed by molecular imaging. The MATLAB toolbox allows automated image analysis of 3D images acquired on the Bruker platform and can be extended to other imaging platforms. The resulting images are presented as a series of 2D slices in all three dimensions in native MATLAB and PDF formats. The following is provided: (a) MATLAB source code for image processing, (b) the detailed processing procedures, (c) a description of the code and all sub-routines, (d) example data sets of initial and processed data. The toolbox can be downloaded at: http://www.vuiis.vanderbilt.edu/ truongm/COMA3D/.
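
    The overlay itself is straightforward once the sodium concentration map and the proton image share a grid; a matplotlib sketch of one slice, written in Python purely for illustration rather than taken from the MATLAB toolbox:

```python
import matplotlib.pyplot as plt

def overlay_slice(proton_slice, sodium_conc_slice, alpha=0.5):
    """Overlay a sodium concentration map (mM) on the anatomical proton image."""
    fig, ax = plt.subplots()
    ax.imshow(proton_slice, cmap="gray")                    # anatomical background
    im = ax.imshow(sodium_conc_slice, cmap="jet", alpha=alpha)
    fig.colorbar(im, ax=ax, label="sodium concentration (mM)")
    ax.set_axis_off()
    return fig
```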

  9. Digital techniques for processing Landsat imagery

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the basic techniques used to process Landsat images with a digital computer and of the VICAR image processing software developed at JPL, which is available to users through the NASA-sponsored COSMIC computer program distribution center. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. The examples are illustrated by Landsat scenes of the Andes mountains and the Altyn-Tagh fault zone in China before and after contrast enhancement, and by classification of land use in Portland, Oregon. The VICAR image processing software system is also described; it consists of a language translator that simplifies execution of image processing programs and provides a general-purpose format so that imagery from a variety of sources can be processed by the same basic set of general application programs.
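
    Two of the subjective enhancements mentioned, linear contrast stretching and band ratioing, reduce to a few lines of array arithmetic; a minimal sketch with illustrative percentile limits, not the VICAR routines themselves:

```python
import numpy as np

def contrast_stretch(band, low_pct=2, high_pct=98):
    """Linearly stretch a band between two percentiles to the full 0-255 range."""
    lo, hi = np.percentile(band, (low_pct, high_pct))
    return np.clip((band - lo) / (hi - lo), 0, 1) * 255

def band_ratio(band_a, band_b):
    """Simple band ratio, often used to suppress illumination differences."""
    return band_a / (band_b + 1e-12)
```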

  10. Visual perception enhancement for detection of cancerous oral tissue by multi-spectral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping

    2013-05-01

    Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.

  11. En face projection imaging of the human choroidal layers with tracking SLO and swept source OCT angiography methods

    NASA Astrophysics Data System (ADS)

    Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.

    2015-07-01

    We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods (phase variance, amplitude decorrelation and speckle variance, with application of the split spectrum technique) to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.

  12. Software for MR image overlay guided needle insertions: the clinical translation process

    NASA Astrophysics Data System (ADS)

    Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor

    2013-03-01

    PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that software requirements were successfully solved after a limited number of operating room tests.

  13. Population-based imaging biobanks as source of big data.

    PubMed

    Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian

    2017-06-01

    Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.

  14. Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.

    2015-09-01

    Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue, and data are processed similarly to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing that the FFT, as a general spectral estimation algorithm, takes into account only the samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using both MATLAB simulations and analysis of experimental results generated from a home-built OCT system used to emulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher quality (i.e., higher resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced imaging time.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsef, G

    Purpose: To build a test device for HDR afterloaders capable of checking source positions and dwell times at those positions, and of estimating the activity of the source. Methods: A catheter is taped onto a plastic scintillation sheet. When a source travels through the catheter, the scintillator sheet lights up around the source. The sheet is monitored with a video camera, which records the movement of the light spot. The center of the spot in each image of the video provides the source location, and the time stamps of the images provide the dwell time the source spends at each location. Finally, the brightness of the light spot is related to the activity of the source. A code was developed to remove noise, calibrate the scale of the image to centimeters, eliminate the distortion caused by the oblique viewing angle, identify the boundaries of the light spot, convert the image to binary, and detect and calculate the source motion, positions and times. The images are much less noisy if the camera is shielded, which requires that the light spot be monitored via a mirror rather than directly. The whole assembly is covered from external light and has a size of approximately 17×35×25 cm (H×L×W). Results: A cheap camera in black-and-white mode proved to be sufficient with a plastic scintillator sheet. The best images were obtained with a 3 mm thick sheet with a ZnS:Ag surface coating. Shielding the camera decreased the noise but could not eliminate it. A test run, even in noisy conditions, resulted in approximately 1 mm and 1 s differences from the planned positions and dwell times. Activity tests are in progress. Conclusion: The proposed method is feasible. It might simplify the monthly QA process of HDR brachytherapy units.
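
    The core of the described video analysis, thresholding each frame and reducing the light spot to a centroid plus a time stamp, can be sketched with OpenCV; the threshold value and pixel-to-centimetre scale are placeholders, and the correction for the oblique mirror view is omitted.

```python
import cv2

def track_source(video_path, thresh=60, cm_per_px=0.05):
    """Return a list of (time_s, x_cm, y_cm) for the scintillation light spot."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    positions, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        m = cv2.moments(binary)
        if m["m00"] > 0:                       # a light spot is present in this frame
            x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
            positions.append((frame_idx / fps, x * cm_per_px, y * cm_per_px))
        frame_idx += 1
    cap.release()
    return positions
```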

  16. Multiphoton imaging with a nanosecond supercontinuum source

    NASA Astrophysics Data System (ADS)

    Lefort, Claire; O'Connor, Rodney P.; Blanquet, Véronique; Baraige, Fabienne; Tombelaine, Vincent; Lévêque, Philippe; Couderc, Vincent; Leproux, Philippe

    2016-03-01

    Multiphoton microscopy is a well-established technique for biological imaging of several kinds of targets. It is classically based on multiphoton processes providing two means of contrast simultaneously: two-photon fluorescence (TPF) and second harmonic generation (SHG). Today, the laser technology used almost exclusively for this purpose is the femtosecond titanium-sapphire (Ti:Sa) laser. We experimentally demonstrate that a nanosecond supercontinuum laser source (STM-250-VIS-IR-custom, Leukos, France; 1 ns, 600-2400 nm, 250 kHz, 1 W) provides the same kind of image quality for both TPF and SHG, provided it is properly filtered. The first set of images concerns the muscle of a mouse. It highlights the simultaneous detection of TPF and SHG. TPF is obtained thanks to the labelling of alpha-actinin with Alexa Fluor® 546 by immunochemistry. SHG is created by the non-centrosymmetric organization of myosin. As expected, discs of actin and myosin alternate. The resulting images are compared with those obtained from a standard femtosecond Ti:Sa source. The physical parameters of the supercontinuum are discussed. Finally, the full benefit of using an ultra-broadband source is demonstrated with images obtained in vivo in the brain of a mouse grafted with tumor cells labeled with eGFP. Texas Red®-conjugated dextran is injected into the blood vessel network. Thus, two fluorophores with absorption wavelengths separated by 80 nm are imaged simultaneously with a single laser source.

  17. Optical coherence tomography imaging based on non-harmonic analysis

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2009-11-01

    A new processing technique called Non-Harmonic Analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT, whose result depends on the window function and frame length. The axial resolution is inversely proportional to the FFT frame length, which is limited by the swept range of the source in SS-OCT or by the pixel count of the CCD in SD-OCT, and is therefore degraded in FD-OCT. The NHA process, however, is intrinsically free from this trade-off: NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging and compared with OCT images based on the FFT. In order to validate the benefit of NHA in OCT, we carried out NHA-based OCT imaging on three different samples: onion skin, human skin and pig eye. The results show that the NHA process can achieve a practical image resolution equivalent to that of a 100 nm swept range while using less than half of that wavelength range.

  18. The Dark Energy Survey Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Morganson, E.; Gruendl, R. A.; Menanteau, F.; Carrasco Kind, M.; Chen, Y.-C.; Daues, G.; Drlica-Wagner, A.; Friedel, D. N.; Gower, M.; Johnson, M. W. G.; Johnson, M. D.; Kessler, R.; Paz-Chinchón, F.; Petravick, D.; Pond, C.; Yanny, B.; Allam, S.; Armstrong, R.; Barkhouse, W.; Bechtol, K.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Buckley-Geer, E.; Covarrubias, R.; Desai, S.; Diehl, H. T.; Goldstein, D. A.; Gruen, D.; Li, T. S.; Lin, H.; Marriner, J.; Mohr, J. J.; Neilsen, E.; Ngeow, C.-C.; Paech, K.; Rykoff, E. S.; Sako, M.; Sevilla-Noarbe, I.; Sheldon, E.; Sobreira, F.; Tucker, D. L.; Wester, W.; DES Collaboration

    2018-07-01

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a ∼5000 deg2 survey of the southern sky in five optical bands (g, r, i, z, Y) to a depth of ∼24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g, r, i, z) over ∼27 deg2. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine whether they need to be retaken. Difference imaging and transient source detection are also performed nightly for the time-domain component. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  19. Client-side Medical Image Colorization in a Collaborative Environment.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2015-01-01

    The paper presents an application related to collaborative medicine using a browser-based medical visualization system, with focus on the medical image colorization process and the underlying open-source web development technologies involved. Browser-based systems allow physicians to share medical data with their remotely located counterparts or medical students, assisting them during patient diagnosis, treatment monitoring, surgery planning or for educational purposes. This approach brings forth the advantage of ubiquity: the system can be accessed from any device to process the images, ensuring independence from any specific proprietary operating system. The current work starts with the processing of DICOM (Digital Imaging and Communications in Medicine) files and ends with the rendering of the resulting bitmap images on an HTML5 (fifth revision of the HyperText Markup Language) canvas element. The application improves image visualization by emphasizing different tissue densities.
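
    As a rough illustration of the colorization step described above (window/level the DICOM pixel data, then map intensities through a colour lookup table), the following offline Python sketch performs the same kind of transformation that the paper's client-side canvas code would; the file name, window settings, and the simple blue-green-red LUT are assumptions, not the application's actual implementation.

```python
# Hedged sketch of DICOM window/level followed by pseudo-colour mapping.
# File name, window settings, and the LUT anchors below are assumptions.
import numpy as np
import pydicom
from PIL import Image

def colorize_dicom(path, center=40.0, width=400.0):
    """Window/level a DICOM slice and map it through a simple pseudo-colour LUT."""
    ds = pydicom.dcmread(path)                        # parse the DICOM file
    pixels = ds.pixel_array.astype(np.float32)
    lo, hi = center - width / 2.0, center + width / 2.0
    x = np.clip((pixels - lo) / (hi - lo), 0.0, 1.0)  # window/level to [0, 1]
    # simple blue -> green -> red lookup table, applied channel by channel
    anchors = np.array([0.0, 0.5, 1.0])
    r = np.interp(x, anchors, [0.0, 0.0, 1.0])
    g = np.interp(x, anchors, [0.0, 1.0, 0.0])
    b = np.interp(x, anchors, [1.0, 0.0, 0.0])
    rgb = (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
    return Image.fromarray(rgb)

# colorize_dicom("slice001.dcm").save("slice001_color.png")   # hypothetical file
```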

  20. An efficient approach to integrated MeV ion imaging.

    PubMed

    Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M

    2018-03-01

    An ionoluminescence (IL) spectral imaging system, in addition to the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, has been implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ listmode data format. The proper performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. CMP reflection imaging via interferometry of distributed subsurface sources

    NASA Astrophysics Data System (ADS)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. In practice, however, such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers, and hence CMP gathers, that can be effectively stacked with traditional normal-moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks of the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.
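
    The core interferometric step described above can be sketched as follows: the trace recorded at a chosen "virtual source" receiver is cross-correlated with the traces at all other receivers, and the correlations are stacked over the available subsurface sources (here, aftershocks) to approximate a virtual shot gather. This is a bare-bones illustration under simplifying assumptions (no whitening, muting, or moveout processing), not the workflow used in the study.

```python
# Hedged sketch of building a virtual shot gather by cross-correlation and
# stacking over subsurface sources. Array shapes and the absence of any
# pre-processing are simplifying assumptions.
import numpy as np
from scipy.signal import fftconvolve

def virtual_shot_gather(records, virtual_src_idx):
    """records: array (n_sources, n_receivers, n_samples) of recordings of
    subsurface sources (e.g. aftershocks). Returns (n_receivers, 2*n_samples-1)
    causal+acausal correlations, stacked over sources."""
    n_src, n_rec, n_t = records.shape
    gather = np.zeros((n_rec, 2 * n_t - 1))
    for s in range(n_src):
        ref = records[s, virtual_src_idx]
        for r in range(n_rec):
            # correlation = convolution with the time-reversed reference trace
            gather[r] += fftconvolve(records[s, r], ref[::-1], mode="full")
    return gather / n_src
```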

  2. A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi

    2014-11-01

    We report on the development of an easily deployable LF near-field interferometric-time of arrival (TOA) 3-D Lightning Mapping Array applied to imaging of entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond time resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources. This is due mainly to the improved mapping of continuous lightning processes by this type of hybrid interferometry/TOA processing. We show with five-station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
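
    The windowed two-sensor time-difference computation mentioned above can be illustrated with a short sketch: cross-correlate a pair of windowed waveforms and refine the correlation peak with a parabolic fit to reach sub-sample (and hence sub-microsecond) resolution. The window handling and sampling rate are illustrative assumptions rather than details of the deployed system.

```python
# Hedged sketch of a windowed two-sensor time-difference estimate via
# cross-correlation with sub-sample parabolic peak interpolation.
import numpy as np

def windowed_time_difference(x1, x2, fs):
    """Return the delay of x2 relative to x1 (in seconds) for one window."""
    x1 = x1 - x1.mean()
    x2 = x2 - x2.mean()
    corr = np.correlate(x2, x1, mode="full")
    lags = np.arange(-len(x1) + 1, len(x2))        # lag axis in samples
    k = int(np.argmax(corr))
    # parabolic interpolation around the peak for sub-sample resolution
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    else:
        delta = 0.0
    return (lags[k] + delta) / fs
```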

  3. Hard X-Ray Flare Source Sizes Measured with the Ramaty High Energy Solar Spectroscopic Imager

    NASA Technical Reports Server (NTRS)

    Dennis, Brian R.; Pernak, Rick L.

    2009-01-01

    Ramaty High Energy Solar Spectroscopic Imager (RHESSI) observations of 18 double hard X-ray sources seen at energies above 25 keV are analyzed to determine the spatial extent of the most compact structures evident in each case. The following four image reconstruction algorithms were used: Clean, Pixon, and two visibility-based routines, maximum entropy and forward fit (VFF). All have been adapted for this study to optimize their ability to provide reliable estimates of the sizes of the more compact sources. The source fluxes, sizes, and morphologies obtained with each method are cross-correlated, and the similarities and disagreements are discussed. The full widths at half-maximum (FWHM) of the major axes of the sources, with assumed elliptical Gaussian shapes, are generally well correlated between the four image reconstruction routines and vary from the RHESSI resolution limit of approximately 2" up to approximately 20", with most below 10". The FWHM of the minor axes are generally at or just above the RHESSI limit and hence should be considered unresolved in most cases. The orientation angles of the elliptical sources are also well correlated. These results suggest that the elongated sources are generally aligned along a flare ribbon with the minor axis perpendicular to the ribbon. This is verified for the one flare in our list with coincident Transition Region and Coronal Explorer (TRACE) images. There is evidence for significant extra flux in many of the flares in addition to the two identified compact sources, thus rendering the VFF assumption of just two Gaussians inadequate. A more realistic approximation in many cases would be two line sources with unresolved widths. Recommendations are given for optimizing the RHESSI imaging reconstruction process to ensure that the finest possible details of the source morphology become evident and that reliable estimates can be made of the source dimensions.

  4. Image and Processing Models for Satellite Detection in Images Acquired by Space-based Surveillance-of-Space Sensors

    DTIC Science & Technology

    2010-09-01

    external sources ‘L1’ like zodiacal light (or diffuse nebulae) or stray light ‘L2’, and these components change with the telescope pointing. Bk(T,t): ... Astronomical scene background (zodiacal light, diffuse nebulae, etc.). L2(PA(tk), t): image background component caused by stray light. MS

  5. Aging, culture, and memory for categorically processed information.

    PubMed

    Yang, Lixia; Chen, Wenfeng; Ng, Andy H; Fu, Xiaolan

    2013-11-01

    The literature on cross-cultural differences in cognition suggests that categorization, as an information processing and organization strategy, is more often used by Westerners than by East Asians, particularly among older adults. This study examines East-West cultural differences in memory for categorically processed items and sources in young and older Canadians and native Chinese, with a conceptual source memory task (Experiment 1) and a reality monitoring task (Experiment 2). In Experiment 1, participants encoded photographic faces of their own ethnicity that were artificially categorized into GOOD or EVIL characters and then completed a source memory task in which they identified faces as old-GOOD, old-EVIL, or new. In Experiment 2, participants viewed a series of words, each followed either by a corresponding image (i.e., SEEN) or by a blank square within which they imagined an image for the word (i.e., IMAGINED). At test, they decided whether the test words were old-SEEN, old-IMAGINED, or new. In general, Canadians outperformed Chinese in memory for categorically processed information, an effect more pronounced for older than for young adults. Extensive exercise of a culturally preferred categorization strategy differentially benefits Canadians and reduces their age group differences in memory for categorically processed information.

  6. An automated multi-scale network-based scheme for detection and location of seismic sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.

    2017-12-01

    We present a recently developed method - BackTrackBB (Poiata et al. 2016) - that allows imaging of energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale, frequency-selective coherence of the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary signal, i.e. the time series, by means of higher-order statistics or energy-envelope characteristic functions. This signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
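
    The imaging function described above - a stack of station-pair time-delay likelihoods evaluated at the theoretical delays of candidate source positions - can be sketched roughly as follows. Here a normalized cross-correlation of characteristic functions stands in for the time-delay likelihood, and the travel-time tables are assumed precomputed; this is an illustrative backprojection sketch, not the BackTrackBB implementation.

```python
# Hedged backprojection sketch: sum station-pair correlation values at the
# predicted inter-station delay of every candidate source node.
import numpy as np

def imaging_function(cf, pairs, travel_times, fs):
    """cf: (n_stations, n_samples) characteristic functions.
    pairs: list of (i, j) station index pairs.
    travel_times: (n_stations, n_nodes) theoretical travel times in seconds.
    Returns an (n_nodes,) stack interpreted as a source-location likelihood."""
    n_nodes = travel_times.shape[1]
    stack = np.zeros(n_nodes)
    for i, j in pairs:
        corr = np.correlate(cf[i] - cf[i].mean(), cf[j] - cf[j].mean(), "full")
        corr /= np.abs(corr).max() + 1e-12               # crude normalization
        lags = np.arange(-cf.shape[1] + 1, cf.shape[1]) / fs
        # predicted delay for each node, mapped onto the correlation lag axis
        dt = travel_times[i] - travel_times[j]
        idx = np.clip(np.searchsorted(lags, dt), 0, len(lags) - 1)
        stack += corr[idx]
    return stack
```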

  7. Preparation of a porous Sn@C nanocomposite as a high-performance anode material for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Jiang, Li; Wang, Chunru

    2015-07-01

    A porous Sn@C nanocomposite was prepared via a facile hydrothermal method combined with a simple post-calcination process, using stannous octoate as the Sn source and glucose as the C source. The as-prepared Sn@C nanocomposite exhibited excellent electrochemical behavior with a high reversible capacity, long cycle life and good rate capability when used as an anode material for lithium ion batteries. Electronic supplementary information (ESI) available: Detailed experimental procedure and additional characterization, including a Raman spectrum, TGA curve, N2 adsorption-desorption isotherm, TEM images and SEM images. See DOI: 10.1039/c5nr03093e

  8. Toward Imaging of Small Objects with XUV Radiation

    NASA Astrophysics Data System (ADS)

    Sayrac, Muhammed; Kolomenski, Alexandre A.; Boran, Yakup; Schuessler, Hans

    The coherent diffraction imaging (CDI) technique has the potential to capture high-resolution images of nano- or micron-sized structures when using XUV radiation obtained by the high harmonic generation (HHG) process. When a small object is exposed to XUV radiation, a diffraction pattern of the object is created. Advances in coherent HHG make it possible to obtain a photon flux sufficient for XUV imaging. Diffractive imaging with coherent tabletop XUV beams has made nanometer-scale resolution imaging possible by replacing the imaging optics with a computer reconstruction algorithm. In this study, we present our initial work on diffractive imaging using a tabletop XUV source. An initial investigation of imaging of a micron-sized mesh with an optimized HHG source is demonstrated. This work was supported in part by the Robert A. Welch Foundation Grant No. A1546 and the Qatar Foundation under the grant NPRP 8-735-1-154. M. Sayrac acknowledges support from the Ministry of National Education of the Republic of Turkey.

  9. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR result visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  10. Attributing Methane and Carbon Dioxide Emissions from Anthropogenic and Natural Sources Using AVIRIS-NG

    NASA Astrophysics Data System (ADS)

    Thorpe, A. K.; Frankenberg, C.; Thompson, D. R.; Duren, R. M.; Aubrey, A. D.; Bue, B. D.; Green, R. O.; Gerilowski, K.; Krings, T.; Borchardt, J.; Kort, E. A.; Sweeney, C.; Conley, S. A.; Roberts, D. A.; Dennison, P. E.; Ayasse, A.

    2016-12-01

    Imaging spectrometers like the next-generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG) can map large regions with the high spatial resolution necessary to resolve methane (CH4) and carbon dioxide (CO2) emissions. This capability is aided by real-time detection and geolocation of gas plumes, permitting unambiguous identification of individual emission source locations and communication to ground teams for rapid follow-up. We present results from AVIRIS-NG flight campaigns in the Four Corners region (Colorado and New Mexico) and the San Joaquin Valley (California). Over three hundred plumes were observed, reflecting emissions from anthropogenic and natural sources. Examples of plumes will be shown for a number of sources, including CH4 from well completions, gas processing plants, tanks, pipeline leaks, and natural seeps, and CO2 from power plants. Despite these promising results, an imaging spectrometer built exclusively for quantitative mapping of gas plumes would have improved sensitivity compared to AVIRIS-NG. For example, an instrument providing 1 nm spectral sampling (2,000-2,400 nm) would permit mapping CH4, CO2, H2O, CO, and N2O from more diffuse sources using both airborne and orbital platforms. The ability to identify emission sources offers the potential to constrain regional greenhouse gas budgets and improve partitioning between anthropogenic and natural emission sources. Because the CH4 lifetime is only about 9 years and CH4 has a Global Warming Potential 86 times that of CO2 over a 20-year time interval, mitigating these emissions is a particularly cost-effective approach to reduce overall atmospheric radiative forcing. Fig. 1: True color image subset with superimposed gas plumes showing concentrations in ppmm. Left: AVIRIS-NG observed CH4 plumes from a natural gas processing plant extending over 500 m downwind of multiple emission sources. Right: Multiple CO2 plumes observed from a coal-fired power plant.

  11. Robust Crop and Weed Segmentation under Uncontrolled Outdoor Illumination

    PubMed Central

    Jeon, Hong Y.; Tian, Lei F.; Zhu, Heping

    2011-01-01

    An image processing algorithm for detecting individual weeds was developed and evaluated. The weed detection process included normalized excess-green conversion, statistical threshold value estimation, adaptive image segmentation, median filtering, morphological feature calculation and an Artificial Neural Network (ANN). The developed algorithm was validated for its ability to identify and detect weeds and crop plants under uncontrolled outdoor illumination. A field robot implementing machine vision captured field images under outdoor illumination, and the image processing algorithm processed them automatically without manual adjustment. The errors of the algorithm, when processing 666 field images, ranged from 2.1 to 2.9%. The ANN correctly detected 72.6% of crop plants among the identified plants and considered the rest to be weeds. However, the ANN identification rates for crop plants improved up to 95.1% when the error sources in the algorithm were addressed. The developed weed detection and image processing algorithm provides a novel method to identify plants against a soil background under uncontrolled outdoor illumination, and to differentiate weeds from crop plants. Thus, the proposed new machine vision and processing algorithm may be useful for outdoor applications including plant-specific direct applications (PSDA). PMID:22163954
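
    A rough sketch of the first two steps named above - normalized excess-green conversion followed by an automatic threshold - is given below. Otsu's method is used here as a stand-in for the paper's statistical threshold estimation, and the input image format is an assumption.

```python
# Hedged sketch of excess-green conversion plus automatic thresholding.
import numpy as np
from skimage.filters import threshold_otsu

def excess_green_mask(rgb):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns a boolean plant mask."""
    total = rgb.sum(axis=2) + 1e-6
    r, g, b = (rgb[..., c] / total for c in range(3))   # chromatic coordinates
    exg = 2.0 * g - r - b                               # normalized excess green
    threshold = threshold_otsu(exg)                     # automatic threshold
    return exg > threshold
```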

  12. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software package was designed for the display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphics capabilities of the OpenGL graphics standard, widely used for computer games and optimized to take advantage of any available hardware graphics accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  13. Image sensor for testing refractive error of eyes

    NASA Astrophysics Data System (ADS)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of the eyes does not need the cooperation of children and can be used to carry out general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in digital form and can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula accounting for the effect of an extended light source and the size of the lens aperture has been deduced, which is more reliable in practice. A computer simulation of the image sensing was carried out to verify the soundness of the results.

  14. How to COAAD Images. I. Optimal Source Detection and Photometry of Point Sources Using Ensembles of Images

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-02-01

    Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
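
    The prescription above - cross-correlate each image with its own PSF and only then sum with appropriate weights - can be sketched in a few lines. The specific weighting used here (transparency divided by background variance) is a plausible S/N-motivated choice consistent with the description, not a quotation of the paper's exact formula, and the inputs are assumed to be registered and background-subtracted.

```python
# Hedged sketch of matched-filter coaddition: filter each image with its own
# PSF, then sum with per-image weights.
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_coadd(images, psfs, transparencies, bkg_variances):
    """images, psfs: lists of 2-D arrays; transparencies, bkg_variances: floats."""
    coadd = np.zeros_like(images[0], dtype=float)
    for img, psf, f, var in zip(images, psfs, transparencies, bkg_variances):
        matched = fftconvolve(img, psf[::-1, ::-1], mode="same")  # correlate with PSF
        coadd += (f / var) * matched                              # S/N-motivated weight
    return coadd
```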

  15. Quantum Theory of Superresolution for Incoherent Optical Imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Amanda M.; Daly, Don S.; Willse, Alan R.

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  17. Improvement of density resolution in short-pulse hard x-ray radiographic imaging using detector stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borm, B.; Gärtner, F.; Khaghani, D.

    2016-09-15

    We demonstrate that stacking several imaging plates (IPs) constitutes an easy method to increase hard x-ray detection efficiency. Used to record x-ray radiographic images produced by an intense-laser driven hard x-ray backlighter source, the IP stacks resulted in a significant improvement of the radiograph density resolution. We attribute this to the higher quantum efficiency of the combined detectors, leading to a reduced photon noise. Electron-photon transport simulations of the interaction processes in the detector reproduce the observed contrast improvement. Increasing the detection efficiency to enhance radiographic imaging capabilities is equally effective as increasing the x-ray source yield, e.g., by a larger drive laser energy.

  18. Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments

    PubMed Central

    Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina

    2016-01-01

    Background: Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results: We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈ 90 GB) are analyzed in ≈ 30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine AnaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion: We present molyso, a ready-to-use open-source software package (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. The molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996

  19. Neutron imaging data processing using the Mantid framework

    NASA Astrophysics Data System (ADS)

    Pouzols, Federico M.; Draper, Nicholas; Nagella, Sri; Yang, Erica; Sajid, Ahmed; Ross, Derek; Ritchie, Brian; Hill, John; Burca, Genoveva; Minniti, Triestino; Moreton-Smith, Christopher; Kockelmann, Winfried

    2016-09-01

    Several imaging instruments are currently being constructed at neutron sources around the world. The Mantid software project provides an extensible framework that supports high-performance computing for data manipulation, analysis and visualisation of scientific data. At ISIS, IMAT (Imaging and Materials Science & Engineering) will offer unique time-of-flight neutron imaging techniques which impose several software requirements to control the data reduction and analysis. Here we outline the extensions currently being added to Mantid to provide specific support for neutron imaging requirements.

  20. From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper

    NASA Astrophysics Data System (ADS)

    Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal

    2016-02-01

    X-ray phase and dark-field imaging techniques provide complementary information that is inaccessible to conventional X-ray absorption or visible-light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging using a simple experimental arrangement. However, it still suffers from either poor resolution or the time-consuming process of collecting a large number of images. To overcome these limitations, in this report we demonstrate that absorption, dark-field, phase-contrast, and two orthogonal differential phase-contrast images can be generated simultaneously by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to a lab-based microfocus X-ray source and flat-panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.

  1. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; participants therefore do not need to install any application, and the assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as templates. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  2. An update on carbon nanotube-enabled X-ray sources for biomedical imaging.

    PubMed

    Puett, Connor; Inscoe, Christina; Hartman, Allison; Calliste, Jabari; Franceschi, Dora K; Lu, Jianping; Zhou, Otto; Lee, Yueh Z

    2018-01-01

    A new imaging technology has emerged that uses carbon nanotubes (CNT) as the electron emitter (cathode) for the X-ray tube. Since the performance of the CNT cathode is controlled by simple voltage manipulation, CNT-enabled X-ray sources are ideal for the repetitive imaging steps needed to capture three-dimensional information. As such, they have allowed the development of a gated micro-computed tomography (CT) scanner for small animal research as well as stationary tomosynthesis, an experimental technology for large field-of-view human imaging. The small animal CT can acquire images at specific points in the respiratory and cardiac cycles. Longitudinal imaging therefore becomes possible and has been applied to many research questions, ranging from tumor response to the noninvasive assessment of cardiac output. Digital tomosynthesis (DT) is a low-dose and low-cost human imaging tool that captures some depth information. Known as three-dimensional mammography, DT is now used clinically for breast imaging. However, the resolution of currently-approved DT is limited by the need to swing the X-ray source through space to collect a series of projection views. An array of fixed and distributed CNT-enabled sources provides the solution and has been used to construct stationary DT devices for breast, lung, and dental imaging. To date, over 100 patients have been imaged on Institutional Review Board-approved study protocols. Early experience is promising, showing an excellent conspicuity of soft-tissue features, while also highlighting technical and post-acquisition processing limitations that are guiding continued research and development. Additionally, CNT-enabled sources are being tested in miniature X-ray tubes that are capable of generating adequate photon energies and tube currents for clinical imaging. Although there are many potential applications for these small field-of-view devices, initial experience has been with an X-ray source that can be inserted into the mouth for dental imaging. Conceived less than 20 years ago, CNT-enabled X-ray sources are now being manufactured on a commercial scale and are powering both research tools and experimental human imaging devices. WIREs Nanomed Nanobiotechnol 2018, 10:e1475. doi: 10.1002/wnan.1475 This article is categorized under: Diagnostic Tools > Diagnostic Nanodevices Diagnostic Tools > In Vivo Nanodiagnostics and Imaging. © 2017 Wiley Periodicals, Inc.

  3. Three-dimensional digital projection in neurosurgical education: technical note.

    PubMed

    Martins, Carolina; Ribas, Eduardo Carvalhal; Rhoton, Albert L; Ribas, Guilherme Carvalhal

    2015-10-01

    Three-dimensional images have become an important tool in teaching surgical anatomy, and their didactic power is enhanced when combined with 3D surgical images and videos. This paper describes the method used by the last author (G.C.R.) since 2002 to project 3D anatomical and surgical images using a computer source. Projecting 3D images requires the superposition of 2 similar but slightly different images of the same object. The set of images, one mimicking the view of the left eye and the other mimicking the view of the right eye, constitutes the stereoscopic pair and can be processed using anaglyphic or horizontal-vertical polarization of light for individual use or presentation to larger audiences. Classically, 3D projection could be obtained by using a double set of slides, projected through 2 slide projectors, each of them equipped with complementary filters, shooting over a medium that keeps light polarized (a silver screen) and having the audience wear appropriate glasses. More recently, a digital method of 3D projection has been perfected. In this method, a personal computer is used as the source of the images, which are arranged in a Microsoft PowerPoint presentation. A beam splitter device is used to connect the computer source to 2 digital, portable projectors. Filters, a silver screen, and glasses are used, similar to the classic method. Among other advantages, this method brings flexibility to 3D presentations by allowing the combination of 3D anatomical and surgical still images and videos. It eliminates the need for using film and film developing, lowering the costs of the process. By using small, powerful digital projectors, this method substitutes for the previous technology without a loss of quality and enhances portability.

  4. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F. M.; Jorge, Jorge M.

    1998-01-01

    The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that the child perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist in delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina (in or out of focus), reflects off it, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is of the flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  5. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F.; Jorge, Jorge M.

    1997-12-01

    The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that the child perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool that is rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist in delivering a light beam into the eyes. It is refracted by the ocular media, strikes the retina (in or out of focus), reflects off it, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is of the flash type and is synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  6. "Proximal Sensing" capabilities for snow cover monitoring

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo

    2013-04-01

    The seasonal snow cover represents one of the most important land cover classes in relation to environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of tourist activities in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used as tools to observe snow-covered areas; those images, properly processed, can be considered a very important environmental data source. Images captured by digital cameras become a useful tool at the local scale, providing images even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having a good resolution (at least 800x600 with 16 million colours) and a very good sampling frequency (hourly images taken throughout the whole year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating the available water resources and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover from webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and in the Apennines at a pilot station properly equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and can be considered better than those obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of image could represent a useful element to support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.

  7. Integration of Landsat, Seasat, and other geo-data sources

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Blackwell, R. J.; Stromberg, W. D.

    1979-01-01

    The paper discusses integration of Landsat, Seasat, and other geographic information sources. Mosaicking of radar data and registration of radar to Landsat digital imagery are described, and six types of geophysical data, including gravity and magnetic measurements, are integrated and analyzed using image processing techniques.

  8. [Applicability of non-invasive imaging methods in forensic medicine and forensic anthropology in particular].

    PubMed

    Marcinková, Mária; Straka, Ľubomír; Novomeský, František; Janík, Martin; Štuller, František; Krajčovič, Jozef

    2018-01-01

    Massive progress in developing ever more precise imaging modalities has influenced all medical branches, including forensic medicine. In forensic anthropology, an integral part of forensic medicine itself, the use of all imaging modalities becomes even more important. Besides providing more accurate information about the deceased, all of them can be used in the process of identification and/or age estimation. X-ray imaging is most commonly used for detecting foreign bodies or various pathological changes in the deceased. Computed tomography, on the other hand, can be very helpful in the process of identification, as the outcomes of this examination can be used for virtual reconstruction of living objects. Magnetic resonance imaging offers new opportunities in detecting cardiovascular pathological processes or developmental anomalies. Ultrasonography provides promising results in age estimation of living subjects without excessive doses of radiation. Drawing on the latest information sources available, the authors introduce application examples of X-ray imaging, computed tomography, magnetic resonance imaging and ultrasonography in the everyday forensic medicine routine, with a particular focus on forensic anthropology.

  9. Implementation and Testing of Low Cost Uav Platform for Orthophoto Imaging

    NASA Astrophysics Data System (ADS)

    Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.

    2013-08-01

    The implementation of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies that were expensive and available only for military use have recently spread to the civilian market. A vast number of low-cost open-source components and systems are available for implementation on UAVs. Using low-cost hobby and open-source components considerably decreases the UAV price, though in some cases it compromises reliability. At the Space Science and Technology Institute (SSTI), in collaboration with Vilnius Gediminas Technical University (VGTU), research has been performed in the field of construction and implementation of small UAVs composed of low-cost open-source components (and in-house developments). The most obvious and simple application of such UAVs is orthophoto imaging with data download and processing after the flight. The construction and implementation of the UAVs, flight experience, data processing and data use are covered further in the paper and presentation.

  10. Source detection in astronomical images by Bayesian model comparison

    NASA Astrophysics Data System (ADS)

    Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher

    2014-12-01

    The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
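
    The idea of scoring a region by how "suspiciously different" its discretised intensity distribution is from the background can be illustrated with a toy calculation: compare the region's intensity-bin counts under a fixed background multinomial against an almost unconstrained Dirichlet-multinomial "source" model and take the log Bayes factor. The uniform discretisation and flat Dirichlet prior below are simplifications of the paper's hierarchical prior, used only to show the shape of the computation.

```python
# Hedged toy illustration of a per-region log Bayes factor between a fixed
# background multinomial and an unconstrained Dirichlet-multinomial model.
import numpy as np
from scipy.special import gammaln

def log_bayes_factor(region_counts, background_probs, alpha=1.0):
    """region_counts: histogram of discretised intensities in one region.
    background_probs: global intensity histogram normalized to probabilities."""
    n = region_counts.sum()
    k = len(region_counts)
    # log marginal likelihood under the flat Dirichlet-multinomial "source" model
    log_src = (gammaln(k * alpha) - gammaln(n + k * alpha)
               + np.sum(gammaln(region_counts + alpha) - gammaln(alpha)))
    # log likelihood under the fixed background multinomial
    # (the multinomial coefficient cancels between the two models)
    log_bkg = np.sum(region_counts * np.log(background_probs + 1e-12))
    return log_src - log_bkg

# Example: a region whose intensity histogram departs from the background's
# yields a much larger log Bayes factor than a background-like region.
bkg = np.array([0.7, 0.2, 0.08, 0.02])
print(log_bayes_factor(np.array([70, 20, 8, 2]), bkg))    # background-like
print(log_bayes_factor(np.array([10, 10, 30, 50]), bkg))  # anomalous
```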

  11. Hyperspectral reflectance and fluorescence line-scan imaging system for online detection of fecal contamination on apples

    NASA Astrophysics Data System (ADS)

    Kim, Moon S.; Cho, Byoung-Kwan; Yang, Chun-Chieh; Chao, Kaunglin; Lefcourt, Alan M.; Chen, Yud-Ren

    2006-10-01

    We have developed nondestructive opto-electronic imaging techniques for rapid assessment of the safety and wholesomeness of foods. A recently developed fast hyperspectral line-scan imaging system integrated with a commercial apple-sorting machine was evaluated for rapid detection of animal fecal matter on apples. Apples obtained from a local orchard were artificially contaminated with cow feces. For the online trial, hyperspectral images with 60 spectral channels, covering reflectance in the visible to near-infrared regions and fluorescence emissions with UV-A excitation, were acquired from apples moving at a sorting-line speed of three apples per second. Reflectance and fluorescence imaging each required a passive light source, and each method used an independent continuous-wave (CW) light source. In this paper, the integration of the hyperspectral imaging system with the commercial apple-sorting machine and preliminary results for detection of fecal contamination on apples, mainly based on the fluorescence method, are presented.

  12. Evaluation of a novel laparoscopic camera for characterization of renal ischemia in a porcine model using digital light processing (DLP) hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.

    2012-03-01

    Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.

  13. Optical correlators for automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1991-01-01

    The paper begins with a description of optical correlation. In this process, the propagation physics of coherent light is used to process images and extract information. The processed image is operated on as an area, rather than as a collection of points. An essentially instantaneous convolution is performed on that image to provide the sensory data. In this process, an image is sensed and encoded onto a coherent wavefront, and the propagation is arranged to create a bright spot where the image matches a model of the desired object. The brightness of the spot provides an indication of the degree of resemblance of the viewed image to the model, and the location of the bright spot provides pointing information. The process can be utilized for AR&C to achieve the capability to identify objects among known reference types, estimate the object's location and orientation, and interact with the control system. System characteristics (speed, robustness, accuracy, small form factors) are adequate to meet most requirements. The correlator exploits the fact that bosons and fermions pass through each other. Since the image source is input as an electronic data set, conventional imagers can be used. In systems where the image is input directly, the correlating element must be at the sensing location.

  14. Image tools for UNIX

    NASA Technical Reports Server (NTRS)

    Banks, David C.

    1994-01-01

    This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X window system, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel. The 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell. The acronym 'pbm' stands for portable bit map. Like xv, the pbmplus tools can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the internet. This software is in the public domain.

  15. Identification and Classification of Infrared Excess Sources in the Spitzer Enhanced Imaging Products (SEIP) Catalog

    NASA Astrophysics Data System (ADS)

    Strasburger, David; Gorjian, Varoujan; Burke, Todd; Childs, Linda; Odden, Caroline; Tambara, Kevin; Abate, Antoinette; Akhtar, Nadir; Beach, Skyler; Bhojwani, Ishaan; Brown, Caden; Dear, AnnaMaria; Dumont, Theodore; Harden, Olivia; Joli-Coeur, Laurent; Nahirny, Rachel; Nakahira, Andie; Nix, Sabine; Orgul, Sarp; Parry, Johnny; Picken, John; Taylor, Isabel; Toner, Emre; Turner, Aspen; Xu, Jessica; Zhu, Emily

    2015-01-01

    The Spitzer Space Telescope's original cryogenic mission imaged roughly 42 million sources, most of which were incidental and never specifically targeted for research. These have now been compiled in the publicly accessible Spitzer Enhanced Imaging Products (SEIP) catalog. The SEIP stores millions of never-before-examined sources that happened to be in the same field of view as objects specifically selected for study. This project examined the catalog to isolate previously unknown infrared excess (IRXS) candidates. The culling process utilized four steps. First, we considered only those objects with signal-to-noise ratios of at least 10 to 1 in the following five wavelengths: 3.6, 4.5, 5.8, 8 and 24 microns, which narrowed the source list to about one million. Second, objects in highly studied regions, such as the galactic plane and previously conducted infrared surveys, were removed. This further reduced the population of sources to 283,758. Third, the remaining sources were plotted using a [3.6]-[4.5] vs. [8]-[24] color-color diagram to isolate IRXS candidates. Fourth, multiple images of sixty-three outlier points from the extrema of the color-color diagram were examined to verify that the sources had been cross-matched correctly and to exclude any candidate sources that may have been compromised by image artifacts or field crowding. The team will ultimately provide statistics for the prevalence of IRXS sources in the SEIP catalog and provide analysis of those extreme outliers from the main locus of points. This research was made possible through the NASA/IPAC Teacher Archive Research Program (NITARP) and was funded by the NASA Astrophysics Data Program.
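
    A rough sketch of the first and third culling steps described above (the signal-to-noise cut and the colour-colour selection) is given below; the catalog column names, the use of magnitudes for the colours, and the simple deviation-from-the-locus rule for flagging outliers are all illustrative assumptions rather than the team's actual criteria.

```python
# Hedged sketch of an S/N cut followed by colour-colour outlier flagging.
# Column names and the outlier rule are illustrative assumptions.
import pandas as pd

BANDS = ["m3_6", "m4_5", "m5_8", "m8_0", "m24"]        # magnitudes (assumed names)
SNRS  = ["snr3_6", "snr4_5", "snr5_8", "snr8_0", "snr24"]

def select_irxs_candidates(catalog: pd.DataFrame, sigma=3.0) -> pd.DataFrame:
    good = catalog[(catalog[SNRS] >= 10).all(axis=1)].copy()   # step 1: S/N >= 10 cut
    good["c1"] = good["m3_6"] - good["m4_5"]                   # [3.6]-[4.5]
    good["c2"] = good["m8_0"] - good["m24"]                    # [8]-[24]
    # steps 3/4 stand-in: flag points far from the locus centre in either colour
    for c in ("c1", "c2"):
        good[f"{c}_dev"] = (good[c] - good[c].median()).abs() / good[c].std()
    return good[(good["c1_dev"] > sigma) | (good["c2_dev"] > sigma)]
```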

  16. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  17. Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting

    PubMed Central

    Yun, S. H.; Tearney, G. J.; de Boer, J. F.; Bouma, B. E.

    2009-01-01

    A novel technique using an acousto-optic frequency shifter in optical frequency domain imaging (OFDI) is presented. The frequency shift eliminates the ambiguity between positive and negative differential delays, effectively doubling the interferometric ranging depth while avoiding image cross-talk. A signal processing algorithm is demonstrated to accommodate nonlinearity in the tuning slope of the wavelength-swept OFDI laser source. PMID:19484034
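
    In OFDI the fringe frequency is proportional to the differential delay, so a real-valued detector signal cannot distinguish positive from negative delays; adding a constant frequency shift moves zero delay away from DC so the two signs land on opposite sides of it. The toy NumPy simulation below illustrates this disambiguation; all frequencies, the shift value, and the sampling rate are made-up numbers, not the parameters of the actual system.

```python
import numpy as np

fs_acq = 100e6                       # acquisition rate [Hz] (arbitrary)
t = np.arange(4096) / fs_acq
f_delay = 5e6                        # fringe frequency of |delay| [Hz]
f_shift = 25e6                       # acousto-optic frequency shift [Hz]

def fringe(f_d, shift):
    # Real detector signal for a reflector at differential delay of sign(f_d).
    return np.cos(2 * np.pi * (shift + f_d) * t)

# Without the shift, +delay and -delay produce identical spectra (degenerate).
spec_pos = np.abs(np.fft.rfft(fringe(+f_delay, 0.0)))
spec_neg = np.abs(np.fft.rfft(fringe(-f_delay, 0.0)))
print("degenerate without shift:", np.allclose(spec_pos, spec_neg))

# With the shift, the peaks appear at f_shift + f_delay and f_shift - f_delay.
freqs = np.fft.rfftfreq(t.size, 1 / fs_acq)
for sign in (+1, -1):
    spec = np.abs(np.fft.rfft(fringe(sign * f_delay, f_shift)))
    print(f"delay sign {sign:+d}: peak at {freqs[np.argmax(spec)] / 1e6:.1f} MHz")
```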

  18. Determination of Hydrodynamic Parameters on Two-Phase Gas-Liquid Flow in Pipes with Different Inclination Angles Using Image Processing Algorithm

    NASA Astrophysics Data System (ADS)

    Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda

    2009-11-01

    In the present research, a digital image processing-based automated algorithm was developed to determine the phase heights, hold-up, and statistical distribution of the drop size in a two-phase water-air system using pipes with 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using equipment that consists of a system with three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m of length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in Matlab/Simulink, the captured images were processed to establish the parameters previously mentioned. The image processing algorithm is based on frequency-domain analysis of the source pictures, which allows the phase interface to be found as the edge between the water and air, through a Sobel filter that extracts the high-frequency components of the image. The drop size was found using the calculation of the Feret diameter. Three flow patterns were observed: Annular, ST, and ST&MI.
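
    The interface-detection step described above (a Sobel filter extracting high-frequency components to locate the water-air edge) can be sketched in a few lines with SciPy. The synthetic stratified "frame" and the column-wise maximum-gradient rule below are assumptions for illustration, not the authors' MATLAB/Simulink implementation.

```python
import numpy as np
from scipy import ndimage

# Synthetic grey-level frame of a stratified flow: bright gas above, dark liquid below.
rows, cols = 120, 200
frame = np.zeros((rows, cols))
interface_row = 70
frame[:interface_row, :] = 0.8      # gas phase
frame[interface_row:, :] = 0.2      # liquid phase
frame += np.random.default_rng(2).normal(0, 0.02, frame.shape)

# Sobel gradient magnitude emphasises the high-frequency content at the interface.
gx = ndimage.sobel(frame, axis=1)
gy = ndimage.sobel(frame, axis=0)
grad = np.hypot(gx, gy)

# For each column, take the row of maximum gradient as the phase interface,
# from which the phase heights and hold-up can be computed.
edge_rows = np.argmax(grad, axis=0)
liquid_height = rows - edge_rows.mean()
print("estimated interface row:", edge_rows.mean(),
      "liquid hold-up:", round(liquid_height / rows, 2))
```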

  19. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and pay-per-use software services.

  20. Imaging C. elegans embryos using an epifluorescent microscope and open source software.

    PubMed

    Verbrugghe, Koen J C; Chan, Raymond C

    2011-03-24

    Cellular processes, such as chromosome assembly, segregation and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples(1,2). Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage(3), thus providing an ideal experimental model for studying questions in cell biology(4,5) and development(6-9). C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis(10,11)) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis(12-15)). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters(16,17). These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo(18-21). In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process.

  1. MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions.

    PubMed

    Klemm, Martin; Kirchner, Thomas; Gröhl, Janek; Cheray, Dominique; Nolden, Marco; Seitel, Alexander; Hoppe, Harald; Maier-Hein, Lena; Franz, Alfred M

    2017-03-01

    Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. MITK-OpenIGTLink is presented as a network interface within MITK that allows easy to use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization. We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively. With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence ( http://mitk.org ).

  2. Improvements in Speed and Functionality of a 670-GHz Imaging Radar

    NASA Technical Reports Server (NTRS)

    Dengler, Robert J.; Cooper, Ken B.; Mehdi, Imran; Siegel, Peter H.; Tarsala, Jan A.; Bryllert, Thomas E.

    2011-01-01

    Significant improvements have been made in the instrument originally described in a prior NASA Tech Briefs article: Improved Speed and Functionality of a 580-GHz Imaging Radar (NPO-45156), Vol. 34, No. 7 (July 2010), p. 51. First, the wideband YIG oscillator has been replaced with a JPL-designed and built phase-locked, low-noise chirp source. Second, further refinements to the data acquisition and signal processing software have been performed by moving critical code sections to C code, and compiling those sections to Windows DLLs, which are then invoked from the main LabVIEW executive. This system is an active, single-pixel scanned imager operating at 670 GHz. The actual chirp signals for the RF and LO chains were generated by a pair of MITEQ 2.5-3.3 GHz chirp sources. Agilent benchtop synthesizers operating at fixed frequencies around 13 GHz were then used to up-convert the chirp sources to 15.5-16.3 GHz. The resulting signals were then multiplied 36 times by a combination of off-the-shelf millimeter-wave components, and JPL-built 200-GHz doublers and 300- and 600-GHz triplers. The power required to drive the submillimeter-wave multipliers was provided by JPL-built W-band amplifiers. The receive and transmit signal paths were combined using a thin, high-resistivity silicon wafer as a beam splitter. While the results at present are encouraging, the system still lacks sufficient speed to be usable for practical applications in contraband detection. Ideally, an image acquisition speed of ten seconds, or a factor of 30 improvement, is desired. However, the system improvements to date have resulted in a factor of five increase in signal acquisition speed, as well as enhanced signal processing algorithms, permitting clearer imaging of contraband objects hidden underneath clothing. In particular, advances in three distinct areas have enabled these performance enhancements: base source phase noise reduction, chirp rate, and signal processing. Additionally, a second pixel was added, automatically reducing the imaging time by a factor of two. Although adding a second pixel to the system doubles the amount of submillimeter components required, some savings in microwave hardware can be realized by using a common low-noise source.

  3. Bidirectional light-scattering image processing method for high-concentration jet sprays

    NASA Astrophysics Data System (ADS)

    Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.

    1985-01-01

    In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed, which combines the forward and backward light scattering method and an image processing method. A pulsed ruby laser is used as the light source. The Mie scattering theory is applied to the results obtained from image processing on the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated by diesel fuel sprays under various injecting conditions. The validity of the technique is verified by a good agreement in the injected fuel volume distributions obtained by the present method and by injection rate measurements.

  4. Development of an imaging system for single droplet characterization using a droplet generator.

    PubMed

    Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D

    2012-01-01

    The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of non-imaging techniques and the recent improvements in digital image acquisition and processing have increased the interest in using high speed imaging techniques in pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single droplet generator and a high speed imaging technique. Tests were done with different camera settings, lenses, diffusers and light sources. The experiments have shown the necessity of having a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity, which consisted of a high speed camera with a 6 μs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm, and a Xenon light source without diffuser used as a backlight. For measuring macro-spray characteristics such as the droplet trajectory, the spray angle and the spray shape, a Macro Video Zoom lens at a working distance of 14.3 cm with a bigger field of view of 7.5 cm x 9.5 cm, in combination with a halogen spotlight with a diffuser and the high speed camera, can be used.

  5. A Geographic Data Gathering System for Image Geolocalization Refining

    NASA Astrophysics Data System (ADS)

    Semaan, B.; Servières, M.; Moreau, G.; Chebaro, B.

    2017-09-01

    Image geolocalization has become an important research field during the last decade. This field is divided into two main sections. The first is image geolocalization, which is used to find out which country, region or city the image belongs to. The second is refining image localization for uses that require more accuracy, such as augmented reality and three dimensional environment reconstruction using images. In this paper we present a processing chain that gathers geographic data from several sources in order to deliver a better geolocalization of an image than its GPS tag, together with precise camera pose parameters. In order to do so, we use multiple types of data. Some of this information is visible in the image and is extracted using image processing; other types of data can be extracted from image file headers or from related information on online image sharing platforms. Extracted information elements will not be expressive enough if they remain disconnected. We show that grouping these information elements helps finding the best geolocalization of the image.

  6. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

    Objective: In the course of clinical treatment, several medical media are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors and a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is to establish the spatial association between two or more images and to recover the transformation describing their spatial relationship. The registration process is not absolute; its core purpose is finding the conversion relationship between different images. Result: The major steps of image registration include geometric transformation, image combination, image similarity measurement, iterative optimization and the interpolation process. Conclusion: The contribution of this review is to sort related image registration research methods and to provide a brief reference for researchers about image registration. PMID:28845149
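
    Viewed as an optimization problem, even the simplest registration follows the loop named in the review: transform, resample (interpolate), measure similarity, update. A minimal translation-only example with SciPy is sketched below; the sum-of-squared-differences metric, Powell optimizer, and synthetic images are assumptions chosen for brevity.

```python
import numpy as np
from scipy import ndimage, optimize

# Synthetic fixed image and a moving image shifted by a known translation.
rng = np.random.default_rng(3)
fixed = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 3)
true_shift = (4.0, -2.5)
moving = ndimage.shift(fixed, true_shift, order=3, mode="nearest")

def cost(params):
    """Similarity measure: sum of squared differences after resampling
    the moving image with the candidate translation (interpolation step)."""
    resampled = ndimage.shift(moving, -np.asarray(params), order=3, mode="nearest")
    return np.sum((fixed - resampled) ** 2)

# Iterative optimization of the transform parameters.
result = optimize.minimize(cost, x0=[0.0, 0.0], method="Powell")
print("recovered translation:", np.round(result.x, 2), "true:", true_shift)
```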

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
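
    GENIE itself is a substantial genetic-programming system, but the train-by-example idea (evolve a chain of image operators that best reproduces a user-painted mask) can be illustrated with a toy evolutionary search. The sketch below is not GENIE; the operator set, the intersection-over-union fitness, and the mutation scheme are all assumptions chosen to keep the example short.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)

# Training data: a synthetic single-band image and a "painted" feature mask.
image = ndimage.gaussian_filter(rng.normal(size=(96, 96)), 2)
truth = ndimage.gaussian_filter(image, 4) > 0.05      # stand-in for the painted feature

# A tiny operator set; a candidate "program" is a short sequence of these ops.
OPS = {
    "smooth": lambda a: ndimage.gaussian_filter(a, 2),
    "grad":   lambda a: ndimage.gaussian_gradient_magnitude(a, 2),
    "median": lambda a: ndimage.median_filter(a, 3),
    "invert": lambda a: -a,
}

def run(program, a):
    for name in program:
        a = OPS[name](a)
    return a > np.mean(a)                              # final threshold -> feature mask

def fitness(program):
    pred = run(program, image)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0             # intersection-over-union

def mutate(program):
    prog = list(program)
    prog[rng.integers(len(prog))] = rng.choice(list(OPS))
    return prog

# Simple evolutionary loop: random population, keep the best, mutate it.
population = [list(rng.choice(list(OPS), size=3)) for _ in range(20)]
for generation in range(15):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    population = [best] + [mutate(best) for _ in range(19)]
print("best pipeline:", best, "IoU:", round(fitness(best), 2))
```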

  8. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advance Molecular Imaging Tools.

    PubMed

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights on complex, living systems because images capture the overall dynamics as a "whole." Therefore, "extraction" of key, quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach in understanding living objects. Molecular imaging tools for system biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview on advances in the computational technology and different instrumentations focused on molecular image processing and analysis. Quantitative data analysis through various open source software and algorithmic protocols will provide a novel approach for modeling the experimental research program. Besides this, we also highlight the predictable future trends regarding methods for automatically analyzing biological data. Such tools will be very useful to understand the detailed biological and mathematical expressions under in-silico system biology processes with modeling properties.

  9. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    NASA Astrophysics Data System (ADS)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions, which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapse images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and determine variations in source depth and distribution in the conduit and larger geyser field over many eruption cycles.
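
    The back-projection step itself amounts to shifting each station's record by the travel time from a trial source location and stacking; the grid node with the strongest stack images the source. A minimal delay-and-sum sketch with NumPy is given below. Straight ray paths, a uniform velocity, and synthetic waveforms are assumptions; the actual processing uses correlated signals, a tomography-derived velocity model, and polarization filters.

```python
import numpy as np

rng = np.random.default_rng(5)
v = 1500.0                                   # assumed uniform velocity [m/s]
fs = 500.0                                   # sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)

# Sensor positions on a 50 m x 50 m patch and a "true" source location.
stations = rng.uniform(0, 50, size=(20, 2))
src = np.array([22.0, 31.0])

def waveform(t0):
    """Simple Ricker-like pulse arriving at time t0."""
    x = (t - t0) * 100.0
    return (1 - 2 * x**2) * np.exp(-x**2)

# Synthetic records: pulse delayed by the source-to-station travel time plus noise.
delays = np.linalg.norm(stations - src, axis=1) / v
records = np.array([waveform(d) + 0.1 * rng.normal(size=t.size) for d in delays])

# Back-project: for every grid node, undo the predicted delays and stack.
xs = np.arange(0, 50, 1.0)
image = np.zeros((xs.size, xs.size))
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        trial = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        shifts = np.round(trial * fs).astype(int)
        stack = sum(np.roll(rec, -s) for rec, s in zip(records, shifts))
        image[i, j] = np.max(np.abs(stack))

best = np.unravel_index(np.argmax(image), image.shape)
print("imaged source near", (xs[best[0]], xs[best[1]]), "true:", tuple(src))
```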

  10. Color sensitivity of the multi-exposure HDR imaging process

    NASA Astrophysics Data System (ADS)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During this conversion, white balance settings are applied and the images are stitched, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph in order to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
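
    Recovering the radiance map from multiple exposures is, at its core, a weighted average of exposure-normalized LDR values. A simplified NumPy sketch is shown below, assuming an already-linearized camera response and a triangular weighting function; a real pipeline must first estimate the response curves as the paper describes.

```python
import numpy as np

def hdr_merge(ldr_stack, exposure_times):
    """Merge linearized LDR images (values in [0, 1]) into a radiance map.

    Each pixel's irradiance estimate is a weighted average of value / exposure,
    with a triangular weight that down-weights under- and over-exposed pixels.
    """
    ldr_stack = np.asarray(ldr_stack, dtype=float)
    weights = 1.0 - np.abs(2.0 * ldr_stack - 1.0)          # peak weight at mid-grey
    weights = np.clip(weights, 1e-3, None)                  # avoid division by zero
    irradiance = ldr_stack / np.asarray(exposure_times)[:, None, None]
    return np.sum(weights * irradiance, axis=0) / np.sum(weights, axis=0)

# Synthetic test: a known radiance map "photographed" at three exposure times.
rng = np.random.default_rng(6)
radiance = rng.uniform(0.05, 5.0, (32, 32))
times = [1 / 60, 1 / 15, 1 / 4]
stack = [np.clip(radiance * te, 0, 1) for te in times]      # clipped linear sensor

recovered = hdr_merge(stack, times)
print("median relative error:",
      np.median(np.abs(recovered - radiance) / radiance).round(3))
```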

  11. Coherent optical processing using noncoherent light after source masking.

    PubMed

    Boopathi, V; Vasu, R M

    1992-01-10

    Coherent optical processing starting with spatially noncoherent illumination is described. Good spatial coherence is introduced in the far field by modulating a noncoherent source when masks with sharp autocorrelation are used. The far-field mutual coherence function of light is measured and it is seen that, for the masks and the source size used here, we get a fairly large area over which the mutual coherence function is high and flat. We demonstrate traditional coherent processing operations such as Fourier transformation and image deblurring when coherent light that is produced in the above fashion is used. A coherence-redundancy merit function is defined for this type of processing system. It is experimentally demonstrated that the processing system introduced here has superior blemish tolerance compared with a traditional processor that uses coherent illumination.

  12. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
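
    The three stages above (per-image weight maps from gradient and well-exposedness features, edge-aware refinement of those maps, weighted sum of the source images) can be sketched directly. The version below uses SciPy and a basic grey-scale guided filter as the refinement step; the window size, regularization, and weight definitions are assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy import ndimage

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Basic grey-scale guided filter (He et al.): smooths `src` while
    preserving edges present in `guide`."""
    box = lambda a: ndimage.uniform_filter(a, size=2 * radius + 1)
    mean_I, mean_p = box(guide), box(src)
    cov_Ip = box(guide * src) - mean_I * mean_p
    var_I = box(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)

def fuse(exposures):
    """Weighted-sum fusion of grey-scale multi-exposure images in [0, 1]."""
    exposures = [np.asarray(im, dtype=float) for im in exposures]
    weights = []
    for im in exposures:
        grad = ndimage.gaussian_gradient_magnitude(im, 1.0)       # gradient feature
        wellexp = np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2))     # well-exposedness
        w = grad * wellexp + 1e-6
        weights.append(guided_filter(im, w))                      # guide = source image
    weights = np.clip(np.stack(weights), 1e-6, None)
    weights /= weights.sum(axis=0)                                # normalize per pixel
    return np.sum(weights * np.stack(exposures), axis=0)

# Example with two synthetic exposures of the same scene.
rng = np.random.default_rng(7)
scene = ndimage.gaussian_filter(rng.uniform(0, 1, (64, 64)), 2)
under, over = np.clip(scene * 0.5, 0, 1), np.clip(scene * 1.8, 0, 1)
fused = fuse([under, over])
print("fused image range:", fused.min().round(2), "-", fused.max().round(2))
```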

  13. Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness

    NASA Astrophysics Data System (ADS)

    Singh, Preetpal

    Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern regions of Canada experience extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers tend to offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build and deem an ice road safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure lake ice thickness has not yet been developed. Machine vision and image processing techniques have successfully been used in manufacturing to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high resolution imagery for processing with the MATLAB Image Processing Toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.

  14. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
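
    CAESAR itself is a full C++ library, but the overall flow it describes (pre-filter the map to suppress compact sources and enhance diffuse emission, then cluster superpixels to segment extended sources) can be mocked up with SciPy and scikit-image. The sketch below is only a generic illustration of that flow under assumed filter sizes and thresholds, not the CAESAR algorithm.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic

rng = np.random.default_rng(8)

# Synthetic radio map: noise + a bright compact source + faint diffuse emission.
radio_map = rng.normal(0, 1.0, (128, 128))
radio_map[60, 60] += 50.0                                    # compact point source
yy, xx = np.mgrid[0:128, 0:128]
radio_map += 3.0 * np.exp(-((xx - 90) ** 2 + (yy - 40) ** 2) / (2 * 12.0 ** 2))

# Pre-filtering stage: a median filter suppresses the compact source while
# a Gaussian smooth enhances the diffuse emission against the noise.
residual = ndimage.median_filter(radio_map, size=7)
smoothed = ndimage.gaussian_filter(residual, 3)

# Superpixel clustering stage: group pixels, then keep superpixels whose mean
# brightness exceeds an assumed significance threshold over the background.
labels = slic(smoothed, n_segments=200, compactness=0.1, channel_axis=None)
bkg, noise = np.median(smoothed), smoothed.std()
extended = [lab for lab in np.unique(labels)
            if smoothed[labels == lab].mean() > bkg + 1.5 * noise]
print("superpixels flagged as extended emission:", len(extended))
```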

  15. The Use of Multiple Data Sources in the Process of Topographic Maps Updating

    NASA Astrophysics Data System (ADS)

    Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.

    2016-06-01

    The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agencies' portals. Images stored in the archives of satellite missions such as Sentinel, Landsat and others can be downloaded free of charge. The main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research of these images on the 1:50 000 scale map. Globally available DEMs could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and update. On the other hand, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning the thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of depicting the earth's surface on a map; nevertheless, correlation with other layers, such as hydrography, is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5, and at the same time, through generalization and additional data sources, of the Romanian 1:50 000 scale map. This paper also investigates the general perspective of automatically using DTM-derived products in the process of updating topographic maps.

  16. LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.

    PubMed

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2015-03-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
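
    The integration step in LINKS rests on a random forest fed with features from all available sources (T1, T2, FA, and iteratively refined probability maps). A heavily simplified voxel-wise sketch with scikit-learn is given below; the synthetic data, feature construction, and forest settings are assumptions, and the iterative refinement and spatial features of the actual method are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(9)
n_voxels = 5000

# Hypothetical per-voxel intensities from three modalities (T1, T2, FA)
# and ground-truth tissue labels (0 = CSF, 1 = GM, 2 = WM) for training.
labels = rng.integers(0, 3, n_voxels)
centers = np.array([[0.2, 0.9, 0.1],     # CSF signature in (T1, T2, FA)
                    [0.5, 0.5, 0.3],     # grey matter
                    [0.8, 0.3, 0.6]])    # white matter
features = centers[labels] + rng.normal(0, 0.08, (n_voxels, 3))

# The random forest integrates the multi-source features for tissue segmentation.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(features[:4000], labels[:4000])

# The class-probability output is what would be fed back as refined
# tissue probability maps in the iterative scheme described above.
probs = forest.predict_proba(features[4000:])
accuracy = (probs.argmax(axis=1) == labels[4000:]).mean()
print("held-out voxel accuracy:", round(accuracy, 3))
```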

  17. Multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography at 400 kHz

    NASA Astrophysics Data System (ADS)

    El-Haddad, Mohamed T.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Multimodal imaging systems that combine scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) have demonstrated the utility of concurrent en face and volumetric imaging for aiming, eye tracking, bulk motion compensation, mosaicking, and contrast enhancement. However, this additional functionality trades off with increased system complexity and cost because both SLO and OCT generally require dedicated light sources, galvanometer scanners, relay and imaging optics, detectors, and control and digitization electronics. We previously demonstrated multimodal ophthalmic imaging using swept-source spectrally encoded SLO and OCT (SS-SESLO-OCT). Here, we present system enhancements and a new optical design that increase our SS-SESLO-OCT data throughput by >7x and field-of-view (FOV) by >4x. A 200 kHz 1060 nm Axsun swept-source was optically buffered to 400 kHz sweep-rate, and SESLO and OCT were simultaneously digitized on dual input channels of a 4 GS/s digitizer at 1.2 GS/s per channel using a custom k-clock. We show in vivo human imaging of the anterior segment out to the limbus and retinal fundus over a >40° FOV. In addition, nine overlapping volumetric SS-SESLO-OCT volumes were acquired under video-rate SESLO preview and guidance. In post-processing, all nine SESLO images and en face projections of the corresponding OCT volumes were mosaicked to show widefield multimodal fundus imaging with a >80° FOV. Concurrent multimodal SS-SESLO-OCT may have applications in clinical diagnostic imaging by enabling aiming, image registration, and multi-field mosaicking and benefit intraoperative imaging by allowing for real-time surgical feedback, instrument tracking, and overlays of computationally extracted image-based surrogate biomarkers of disease.

  18. LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images

    PubMed Central

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188

  19. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication where the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, it is necessary to employ a preprocessing phase, which can moderate the execution time and computational resources. In this paper, we propose a new feature-based blind steganalysis method for detecting stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC). The ABC algorithm is inspired by honeybees' social behaviour in their search for perfect food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are evaluated using wrapper-based methods. The experiments are performed using two large data-sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to the other existing techniques.
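
    As a rough illustration of wrapper-based feature selection with an artificial bee colony, the sketch below evolves binary feature masks whose fitness is a classifier's cross-validated accuracy. It is a simplified ABC (one-bit neighbourhood moves, a small colony, synthetic data), not the improved variant proposed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(10)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(LinearSVC(dual=False), X[:, mask], y, cv=3).mean()

def neighbour(mask):
    new = mask.copy()
    new[rng.integers(mask.size)] ^= True           # flip one randomly chosen feature
    return new

n_sources, limit = 10, 5
sources = rng.random((n_sources, X.shape[1])) < 0.5
scores = np.array([fitness(m) for m in sources])
trials = np.zeros(n_sources, dtype=int)

for _ in range(20):
    # Employed and onlooker phases: try a neighbouring subset, keep it if better;
    # onlookers revisit food sources with probability proportional to fitness.
    onlookers = rng.choice(n_sources, n_sources, p=scores / scores.sum())
    for i in list(range(n_sources)) + list(onlookers):
        cand = neighbour(sources[i])
        cand_score = fitness(cand)
        if cand_score > scores[i]:
            sources[i], scores[i], trials[i] = cand, cand_score, 0
        else:
            trials[i] += 1
    # Scout phase: abandon sources that have not improved for `limit` trials.
    for i in np.where(trials > limit)[0]:
        sources[i] = rng.random(X.shape[1]) < 0.5
        scores[i], trials[i] = fitness(sources[i]), 0

best = np.argmax(scores)
print("selected", int(sources[best].sum()), "features, CV accuracy",
      round(scores[best], 3))
```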

  20. Weighted image de-fogging using luminance dark prior

    NASA Astrophysics Data System (ADS)

    Kansal, Isha; Kasana, Singara Singh

    2017-10-01

    In this work, the weighted image de-fogging process based upon the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission by using three colour channels, whereas the luminance dark prior does the same by making use of only the Y component of the YUV colour space. For each pixel in a local patch, the luminance dark prior therefore considers a single luminance value rather than the three colour values used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts at the time of transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even if there are no significant depth disruptions, due to which the de-fogged image looks overly smooth and has low contrast. Apart from this, in some images, the weighted transmission still carries less visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of de-fogged images. In addition to this, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process, based upon the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques. This comparison shows that the proposed technique performs better than the existing techniques.
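
    A bare-bones version of this de-fogging chain (luminance dark prior for the transmission, a bright-pixel estimate of the atmospheric light, Gaussian smoothing of the transmission, scene recovery) is sketched below with NumPy/SciPy. The patch size, weights, and thresholds are assumptions; the weighted difference prior and the YUV-histogram light-source rejection from the paper are omitted.

```python
import numpy as np
from scipy import ndimage

def defog(rgb, patch=15, omega=0.95, t0=0.1):
    """Simplified single-image de-fogging using a luminance dark prior."""
    rgb = np.asarray(rgb, dtype=float)
    # Luminance (Y of YUV): only one value per pixel enters the dark prior.
    Y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # Atmospheric light: mean colour of the brightest luminance pixels
    # (the paper additionally rejects bright light sources via a YUV histogram).
    bright = Y >= np.quantile(Y, 0.999)
    A = rgb[bright].mean(axis=0)

    # Luminance dark prior: patch-wise minimum of the normalized luminance.
    dark = ndimage.minimum_filter(Y / max(A.mean(), 1e-6), size=patch)
    t = 1.0 - omega * dark

    # Gaussian smoothing of the transmission map to soften halo artefacts.
    t = ndimage.gaussian_filter(t, sigma=patch / 3.0)
    t = np.clip(t, t0, 1.0)

    # Scene radiance recovery: J = (I - A) / t + A, per colour channel.
    return np.clip((rgb - A) / t[..., None] + A, 0.0, 1.0)

# Tiny synthetic example: a hazy random image with a simple fog model.
rng = np.random.default_rng(11)
clear = rng.uniform(0, 1, (80, 80, 3))
hazy = clear * 0.6 + 0.4 * 0.9                     # airlight of roughly 0.9
restored = defog(hazy)
print("de-fogged range:", restored.min().round(2), "-", restored.max().round(2))
```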

  1. Visualizing medium and biodistribution in complex cell culture bioreactors using in vivo imaging.

    PubMed

    Ratcliffe, E; Thomas, R J; Stacey, A J

    2014-01-01

    There is a dearth of technology and methods to aid process characterization, control and scale-up of complex culture platforms that provide niche micro-environments for some stem cell-based products. We have demonstrated a novel use of 3d in vivo imaging systems to visualize medium flow and cell distribution within a complex culture platform (hollow fiber bioreactor) to aid characterization of potential spatial heterogeneity and identify potential routes of bioreactor failure or sources of variability. This can then aid process characterization and control of such systems with a view to scale-up. Two potential sources of variation were observed with multiple bioreactors repeatedly imaged using two different imaging systems: shortcutting of medium between adjacent inlet and outlet ports with the potential to create medium gradients within the bioreactor, and localization of bioluminescent murine 4T1-luc2 cells upon inoculation with the potential to create variable seeding densities at different points within the cell growth chamber. The ability of the imaging technique to identify these key operational bioreactor characteristics demonstrates an emerging technique in troubleshooting and engineering optimization of bioreactor performance. © 2013 American Institute of Chemical Engineers.

  2. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    NASA Astrophysics Data System (ADS)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high frequency radiation. The technique has been previously applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using the data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an East-West oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. This is also a great opportunity to explore the back projection scheme of the crustal Pn phase at regional distances, which provides unique complementary insights to the teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using the teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit we explored the Multiple Signal Classification method (MUSIC), a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interferences of coherent Green's functions, which leads to a potential quantification of the biased uncertainty of the back projection. Preliminary results from the Venezuela data set show an East to West rupture propagation along the fault with sub-Rayleigh rupture speed, consistent with a compact source with two significant asperities, which are confirmed by the source time function obtained from Green's function deconvolution and other source inversion results. These efforts could lead the Venezuela National Seismic Network to play a prominent role in the timely characterization of the rupture process of large earthquakes in the Caribbean, including future ruptures along the yet unbroken segments of the Enriquillo fault system.
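
    The MUSIC step replaces simple beamforming by projecting steering vectors onto the noise subspace of the data covariance matrix. A compact narrowband sketch for a linear array is given below; the synthetic plane-wave data, single frequency, sensor spacing, and two closely spaced sources are assumptions chosen only to show the resolution gain over a conventional beamformer.

```python
import numpy as np

c, freq = 3000.0, 1.0                        # wave speed [m/s] and frequency [Hz] (arbitrary)
n_sens, n_snap = 22, 200
x = np.arange(n_sens) * 800.0                # East-West oriented linear array [m]
k = 2 * np.pi * freq / c

def steering(theta_deg):
    return np.exp(-1j * k * x * np.sin(np.radians(theta_deg)))

# Two closely spaced sources plus noise, observed over many snapshots.
rng = np.random.default_rng(12)
A = np.column_stack([steering(10.0), steering(14.0)])
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
data = A @ S + 0.5 * (rng.normal(size=(n_sens, n_snap)) +
                      1j * rng.normal(size=(n_sens, n_snap)))

# Covariance, eigendecomposition, and separation into signal / noise subspaces.
R = data @ data.conj().T / n_snap
eigval, eigvec = np.linalg.eigh(R)
noise_sub = eigvec[:, :-2]                   # all but the two largest eigenvalues
En = noise_sub @ noise_sub.conj().T          # projector onto the noise subspace

# MUSIC pseudospectrum: steering vectors nearly orthogonal to the noise
# subspace (i.e. true source directions) produce sharp peaks.
angles = np.linspace(-30, 30, 601)
music = np.array([1.0 / np.real(steering(th).conj() @ En @ steering(th))
                  for th in angles])
is_peak = (music[1:-1] > music[:-2]) & (music[1:-1] > music[2:])
peak_angles = angles[1:-1][is_peak]
top = np.sort(peak_angles[np.argsort(music[1:-1][is_peak])[-2:]])
print("MUSIC peaks near", np.round(top, 1), "degrees")
```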

  3. SU-F-J-183: Interior Region-Of-Interest Tomography by Using Inverse Geometry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K; Kim, D; Kang, S

    2016-06-15

    Purpose: The inverse geometry computed tomography (IGCT) system, composed of multiple sources and a small detector, has several merits, such as reduction of scatter effects and large volumetric imaging within one rotation without cone-beam artifacts, compared to conventional cone-beam computed tomography (CBCT). By using this multi-source characteristic, we present a selective, multiple interior region-of-interest (ROI) imaging method based on a designed source on-off sequence for IGCT. Methods: All of the IGCT sources are operated one by one sequentially, and each projection, in the shape of a narrow cone beam, covers its own partial volume of the full field of view (FOV) determined by the system geometry. Thus, by controlling multi-source operation, limited irradiation within the ROI is possible and selective radon space data for ROI imaging can be acquired without additional X-ray filtration. With this feature, we designed a source on-off sequence for multi-ROI IGCT imaging, and projections of ROI-IGCT were generated by using the on-off sequence. Multi-ROI IGCT images were reconstructed by using a filtered back-projection algorithm. All of the imaging processes in this study were performed by utilizing a digital phantom and patient CT data. ROI-IGCT images of the phantom were compared to the CBCT image and the phantom data for image quality evaluation. Results: The image quality of ROI-IGCT was comparable to that of CBCT. However, in axial planes distal from the FOV center, i.e. the large cone-angle region, ROI-IGCT showed uniform image quality without significant cone-beam artifacts, contrary to CBCT. Conclusion: ROI-IGCT showed comparable image quality and has the capability to provide multiple ROI images within one rotation. Projection of ROI-IGCT is performed by selective irradiation; hence, unnecessary imaging dose to non-interest regions can be reduced. In this regard, it seems to be useful for diagnostic or image guidance purposes in radiotherapy, such as low-dose target localization and patient alignment. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)

  4. Multispectral imaging system for contaminant detection

    NASA Technical Reports Server (NTRS)

    Poole, Gavin H. (Inventor)

    2003-01-01

    An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.

  5. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
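
    The server-side pattern described here (Python scripts orchestrating multi-threaded processing of very large mosaicked images) can be illustrated with the standard library alone. The sketch below splits a synthetic image into tiles and processes them in a thread pool; the tile size, the toy "segmentation" step, and the worker count are assumptions, not FARSIGHT's actual modules.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy import ndimage

# Synthetic stand-in for one channel of a large mosaicked image.
rng = np.random.default_rng(13)
volume = rng.normal(size=(2048, 2048))

def process_tile(bounds):
    """Toy per-tile pipeline: smooth, threshold, count connected components."""
    r0, r1, c0, c1 = bounds
    tile = ndimage.gaussian_filter(volume[r0:r1, c0:c1], 2)
    labels, n_objects = ndimage.label(tile > tile.mean() + 2 * tile.std())
    return bounds, n_objects

# Tile the image and dispatch the tiles to a pool of worker threads;
# per-tile results would normally be logged and merged into a global catalogue.
tile = 512
bounds_list = [(r, r + tile, c, c + tile)
               for r in range(0, volume.shape[0], tile)
               for c in range(0, volume.shape[1], tile)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_tile, bounds_list))

print("tiles processed:", len(results),
      "objects found:", sum(n for _, n in results))
```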

  6. CFHT data processing and calibration ESPaDOnS pipeline: Upena and OPERA (optical spectropolarimetry)

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Teeple, D.; Manset, Nadine

    2011-03-01

    CFHT is responsible for processing raw ESPaDOnS images, removing instrument-related artifacts, and delivering science-ready data to the PIs. Here we describe the Upena pipeline, the software used to reduce the echelle spectro-polarimetric data obtained with the ESPaDOnS instrument. Upena is an automated pipeline that performs calibration and reduction of raw images. Upena can perform both real-time reduction on an image-by-image basis and a complete reduction after the observing night. Upena produces polarization and intensity spectra in FITS format. The pipeline is designed to perform parallel computing for improved speed, which ensures that the final products are delivered to the PIs before noon HST after each night of observations. We also present the OPERA project, an open-source pipeline to reduce ESPaDOnS data that will be developed as a collaborative effort between CFHT and the scientific community. OPERA will match the core capabilities of Upena and, in addition, will be open-source, flexible and extensible.

  7. PLUS: open-source toolkit for ultrasound-guided intervention systems.

    PubMed

    Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor

    2014-10-01

    A variety of advanced image analysis methods have been under the development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform-PLUS: Public software Library for Ultrasound-to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under BSD license and can be downloaded from http://www.plustoolkit.org.

  8. Continuum generation in optical fibers for high-resolution holographic coherence domain imaging application

    NASA Astrophysics Data System (ADS)

    Li, Linghui; Gruzdev, Vitaly; Yu, Ping; Chen, J. K.

    2009-02-01

    High pulse energy continuum generation in conventional multimode optical fibers has been studied for potential applications to a holographic optical coherence imaging system. As a new imaging modality for biological tissue imaging, high-resolution holographic optical coherence imaging requires a broadband light source with high brightness, relatively low spatial coherence and high stability. A broadband femtosecond laser cannot be used as the light source of a holographic imaging system since the laser creates a lot of speckle patterns. By coupling high peak power femtosecond laser pulses into a multimode optical fiber, nonlinear optical effects cause a continuum generation that can serve as a super-bright and broadband light source. In our experiment, an amplified femtosecond laser was coupled into the fiber through a microscope objective. We measured the FWHM of the continuum generation as a function of incident pulse energy from 80 nJ to 800 μJ. The maximum FWHM is about 8 times higher than that of the input pulses. The stability was analyzed at different pump energies, integration times and fiber lengths. The spectral broadening and peak position show that more than two processes compete in the fiber.

  9. Distributed Kernelized Locality-Sensitive Hashing for Faster Image Based Navigation

    DTIC Science & Technology

    2015-03-26

    Facebook, Google, and Yahoo!. Current methods for image retrieval become problematic when implemented on image datasets that can easily reach billions of... correlations. Tech industry leaders like Facebook, Google, and Yahoo! sort and index even larger volumes of “big data” daily. When attempting to process... open source implementation of Google’s MapReduce programming paradigm [13] which has been used for many different things. Using Apache Hadoop, Yahoo

  10. The Stellar Imager (SI) - A Mission to Resolve Stellar Surfaces, Interiors, and Magnetic Activity

    NASA Astrophysics Data System (ADS)

    Carpenter, K. G.; Schrijver, C. J.; Karovska, M.; Si Vision Mission Team

    2009-09-01

    The Stellar Imager (SI) is a UV/Optical, Space-Based Interferometer designed to enable 0.1 milli-arcsecond (mas) spectral imaging of stellar surfaces and, via asteroseismology, stellar interiors and of the Universe in general. The ultra-sharp images of the Stellar Imager will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. SI's science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. SI's prime goal is to enable long-term forecasting of solar activity and the space weather that it drives. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. SI is included as a "Flagship and Landmark Discovery Mission" in the 2005 NASA Sun Solar System Connection (SSSC) Roadmap and as a candidate for a "Pathways to Life Observatory" in the NASA Exploration of the Universe Division (EUD) Roadmap (May, 2005). In this paper we discuss the science goals and technology needs of, and the baseline design for, the SI Mission (http://hires.gsfc.nasa.gov/si/) and its ability to image the Biggest, Baddest, Coolest Stars.

  11. The Stellar Imager (SI) - A Mission to Resolve Stellar Surfaces, Interiors, and Magnetic Activity

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth; Schrijver, Carolus J.; Karovska, Margarita

    2007-01-01

    The Stellar Imager (SI) is a UV/Optical, Space-Based Interferometer designed to enable 0.1 milli-arcsecond (mas) spectral imaging of stellar surfaces and, via asteroseismology, stellar interiors and of the Universe in general. The ultra-sharp images of the Stellar Imager will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. SI's science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. SI's prime goal is to enable long-term forecasting of solar activity and the space weather that it drives. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. SI is included as a 'Flagship and Landmark Discovery Mission' in the 2005 NASA Sun Solar System Connection (SSSC) Roadmap and as a candidate for a 'Pathways to Life Observatory' in the NASA Exploration of the Universe Division (EUD) Roadmap (May, 2005). In this paper we discuss the science goals and technology needs of, and the baseline design for, the SI Mission (http://hires.gsfc.nasa.gov/si/) and its ability to image the 'Biggest, Baddest, Coolest Stars'.

  12. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
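
    The paper derives an analytical centroiding-error model for smeared star spots; purely as an illustration of the underlying operation, the sketch below computes an intensity-weighted centroid of a (possibly smeared) star-spot window. The window size, Gaussian radius and smear length are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def centroid(window):
        """Intensity-weighted centroid (x, y) of a star-spot image window (2-D array)."""
        window = np.asarray(window, dtype=float)
        window = window - window.min()              # crude background removal
        total = window.sum()
        ys, xs = np.indices(window.shape)
        return (xs * window).sum() / total, (ys * window).sum() / total

    # Example: a spot smeared along x looks like a short line segment of Gaussians
    x, y = np.meshgrid(np.arange(32), np.arange(32))
    spot = sum(np.exp(-((x - 12 - d) ** 2 + (y - 16) ** 2) / (2 * 1.5 ** 2)) for d in range(8))
    print(centroid(spot))   # roughly (15.5, 16.0) for this symmetric smear
    ```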

  13. GRI: The Gamma-Ray Imager mission

    NASA Astrophysics Data System (ADS)

    Knödlseder, Jürgen; GRI Consortium

    With the INTEGRAL observatory ESA has provided a unique tool to the astronomical community revealing hundreds of sources, new classes of objects, extraordinary views of antimatter annihilation in our Galaxy, and fingerprints of recent nucleosynthesis processes. While INTEGRAL provides the global overview over the soft gamma-ray sky, there is a growing need to perform deeper, more focused investigations of gamma-ray sources. In soft X-rays a comparable step was taken going from the Einstein and the EXOSAT satellites to the Chandra and XMM/Newton observatories. Technological advances in the past years in the domain of gamma-ray focusing using Laue diffraction have paved the way towards a new gamma-ray mission, providing major improvements regarding sensitivity and angular resolution. Such a future Gamma-Ray Imager will allow studies of particle acceleration processes and explosion physics in unprecedented detail, providing essential clues on the innermost nature of the most violent and most energetic processes in the Universe.

  14. GRI: The Gamma-Ray Imager mission

    NASA Astrophysics Data System (ADS)

    Knödlseder, Jürgen; GRI Consortium

    2006-06-01

    With the INTEGRAL observatory, ESA has provided a unique tool to the astronomical community revealing hundreds of sources, new classes of objects, extraordinary views of antimatter annihilation in our Galaxy, and fingerprints of recent nucleosynthesis processes. While INTEGRAL provides the global overview over the soft gamma-ray sky, there is a growing need to perform deeper, more focused investigations of gamma-ray sources. In soft X-rays a comparable step was taken going from the Einstein and the EXOSAT satellites to the Chandra and XMM/Newton observatories. Technological advances in the past years in the domain of gamma-ray focusing using Laue diffraction have paved the way towards a new gamma-ray mission, providing major improvements regarding sensitivity and angular resolution. Such a future Gamma-Ray Imager will allow the study of particle acceleration processes and explosion physics in unprecedented detail, providing essential clues on the innermost nature of the most violent and most energetic processes in the Universe.

  15. High-energy solar flare observations at the Y2K maximum

    NASA Astrophysics Data System (ADS)

    Emslie, A. Gordon

    2000-04-01

    Solar flares afford an opportunity to observe processes associated with the acceleration and propagation of high-energy particles at a level of detail not accessible in any other astrophysical source. I will review some key results from previous high-energy solar flare observations, including those from the Compton Gamma-Ray Observatory, and the problems that they pose for our understanding of energy release and particle acceleration processes in the astrophysical environment. I will then discuss a program of high-energy observations to be carried out during the upcoming 2000-2001 solar maximum that is aimed at addressing and resolving these issues. A key element in this observational program is the High Energy Solar Spectroscopic Imager (HESSI) spacecraft, which will provide imaging spectroscopic observations with spatial, temporal, and energy resolutions commensurate with the physical processes believed to be operating, and will in addition provide the first true gamma-ray spectroscopy of an astrophysical source.

  16. The Dark Energy Survey Image Processing Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morganson, E.; et al.

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  17. Sources of image degradation in fundamental and harmonic ultrasound imaging using nonlinear, full-wave simulations.

    PubMed

    Pinton, Gianmarco F; Trahey, Gregg E; Dahl, Jeremy J

    2011-04-01

    A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSF) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is reverberation from near-field structures. Reverberation clutter in the harmonic PSF is 26 dB higher than in the fundamental PSF. An artificial medium with uniform velocity but unchanged impedance characteristics indicates that for the fundamental PSF, the primary source of degradation is phase aberration. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beamforming algorithm to generate images. These beamformed images are compared with images obtained from convolution of the PSF with a scatterer field to demonstrate that a very large portion of the PSF must be used to accurately represent the clutter observed in conventional imaging. © 2011 IEEE.
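
    The point spread functions above are formed with conventional delay-and-sum beamforming. The following is a minimal single-focal-point sketch of that operation for a linear array, assuming hypothetical element positions, channel data, sampling rate and sound speed; it is not the simulation code used in the study.

    ```python
    import numpy as np

    def delay_and_sum(rf, elem_x, fs, c, focus):
        """Delay-and-sum beamformed sample for one focal point.

        rf      : (n_elements, n_samples) received echo data
        elem_x  : (n_elements,) lateral element positions [m]
        fs      : sampling frequency [Hz]
        c       : speed of sound [m/s]
        focus   : (x, z) focal point [m]
        """
        x0, z0 = focus
        # two-way path: transmit assumed from the array centre, receive per element
        t_tx = np.hypot(x0 - elem_x.mean(), z0) / c
        t_rx = np.hypot(x0 - elem_x, z0) / c
        idx = np.round((t_tx + t_rx) * fs).astype(int)
        idx = np.clip(idx, 0, rf.shape[1] - 1)
        return rf[np.arange(rf.shape[0]), idx].sum()
    ```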

  18. VizieR Online Data Catalog: RASS-6dFGS catalogue (Mahony+, 2010)

    NASA Astrophysics Data System (ADS)

    Mahony, E. K.; Croom, S. M.; Boyle, B. J.; Edge, A. C.; Mauch, T.; Sadler, E. M.

    2014-09-01

    Objects were selected such that the dominant source of X-ray emission originates from an AGN. The target list was selected from the southern sources (δ<=0°) of the RBSC, a total of 9578 sources. Sources were then checked for optical identifications via a visual inspection process using Digitized Sky Survey (DSS) images. The majority of the optical positions were taken from the United States Naval Observatory (USNO) data base, with the remainder taken from either the Automated Plate Measuring (APM) or DSS catalogues. Positions from these latter catalogues were used when the USNO appeared to give an incorrect position according to the DSS images. Optical magnitudes were taken from the USNO-A2.0 catalogue (Monet 1998, Cat. I/252). (2 data files).

  19. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    PubMed

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  20. 2D dose distribution images of a hybrid low field MRI-γ detector

    NASA Astrophysics Data System (ADS)

    Abril, A.; Agulles-Pedrós, L.

    2016-07-01

    The proposed hybrid system is a combination of a low field MRI and a dosimetric gel acting as a γ detector. The readout is based on the radiation-induced polymerization of the gel. A gel dose map is obtained, which represents the functional part of the hybrid image alongside the anatomical MRI one. Both images should be taken while the patient with a radiopharmaceutical is located inside the MRI system with a gel detector matrix. A relevant aspect of this proposal is that dosimetric gel has never been used to acquire medical images. The results presented show the interaction of the 99mTc source with the dosimetric gel simulated in Geant4. The purpose was to obtain the planar γ 2D-image. Different source configurations are studied to explore the ability of the gel as a radiation detector through the following parameters: resolution, shape definition and radiopharmaceutical concentration.

  1. Adaptive illumination source for multispectral vision system applied to material discrimination

    NASA Astrophysics Data System (ADS)

    Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.

    2008-04-01

    A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application is focused on material discrimination for the food and beverage industries, where monochrome, color and infrared imaging have been successfully applied for this task. This work proposes a different approach, in which the relevant wavelengths for the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on Light Emitting Diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response over the entire range of the selected wavelengths. Finally, the several multispectral planes obtained are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled and specific illumination produces multispectral imaging with a simple monochrome camera, and cold illumination restricted to specific relevant wavelengths, which is desirable for the food and beverage industry. The proposed system has been tested successfully for the automatic detection of foreign objects in the tobacco processing industry.
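
    The classification step uses a Spectral Angle Mapping (SAM) algorithm. A minimal sketch of SAM classification is shown below, assuming a multispectral cube of shape (rows, cols, bands) and a list of reference spectra; the names and shapes are illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def spectral_angle(pixel, reference):
        """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
        pixel, reference = np.asarray(pixel, float), np.asarray(reference, float)
        cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def classify_sam(cube, references):
        """Assign each pixel of a (rows, cols, bands) cube to the reference with the smallest angle."""
        rows, cols, bands = cube.shape
        flat = cube.reshape(-1, bands)
        angles = np.array([[spectral_angle(p, r) for r in references] for p in flat])
        return angles.argmin(axis=1).reshape(rows, cols)
    ```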

  2. Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery

    NASA Astrophysics Data System (ADS)

    Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.

    2009-05-01

    In this paper, we propose spatio-spectral processing techniques for the detection of dust storms and for automatically determining their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band math analysis have produced promising results but have drawbacks in producing consistent results when low signal to noise ratio (SNR) images are used. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of the dust storm makes it challenging to distinguish these two types of image texture. This paper addresses not only the detection of the dust storm in the imagery but also the transport direction and the location of the sources of the dust storm. We propose a spatio-spectral processing approach with two components: visualization and automation. Both are based on digital image processing techniques, including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
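
    The authors' directional analysis is not spelled out in this record; purely as a generic illustration of estimating a dominant orientation from a single image band (for example, to indicate a plume's transport direction), a structure-tensor sketch is given below. It is a stand-in under stated assumptions, not the proposed algorithm.

    ```python
    import numpy as np

    def dominant_orientation(band):
        """Dominant gradient orientation (degrees) of an image band via the structure tensor.

        The elongated-feature (texture/flow) direction is perpendicular to this angle.
        """
        band = np.asarray(band, float)
        gy, gx = np.gradient(band)
        jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
        theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # principal angle of the tensor
        return np.degrees(theta)
    ```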

  3. SU-F-T-61: Treatment Planning Observations for the CivaSheet Directional Brachytherapy Device Using VariSeed 9.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivard, MJ; Rothley, DJ

    2016-06-15

    Purpose: The VariSeed 9.0 brachytherapy TPS is recently available and has new features such as ability to rotate a brachytherapy source away from normal to the imaging plane. Consequently, a dosimetric analysis was performed for a directional brachytherapy source (CivaSheet) with tests of this functionality and experiences from clinical treatment planning were documented. These observations contribute to safe, practical, and accurate use of such new software features. Methods: Several tests were established to evaluate the new rotational feature, specific to the CivaSheet for the first patients treated using this new brachytherapy device. These included suitability of imaging slice-thickness and in-plane resolution, window/level adjustments for brachytherapy source visualization, commissioning the source physical length for performing rotations, and using different planar and 3D window views to identify source orientation. Additional CivaSheet-specific tests were performed to determine the dosimetric influence on target coverage: changing the source tilt angle, source positioning in the treatment plan based on the CivaSheet rectangular array of CivaDots, and influence of prescription depth on the necessary treatment margin for adequate target coverage. Results: Higher imaging-resolution produced better accuracy for source orientation and positioning, with sub-millimeter CT slice-thickness and in-plane resolution preferred. Source rotation was possible only in sagittal or coronal views. The process for validating source orientation required iteratively altering rotations then checking them in the 3D view, which was cumbersome given the absence of quantitative plan documentation to indicate orientation. Given the small Pd-103 source size, influence of source tilt within 30° was negligible for <1.0 cm. Influence of source position was important when the source was positioned in/out of the adjacent source plane, causing changes of 15%, 7%, and 3% at depths of 0.5, 0.7, and 1.0 cm. Conclusion: The new TPS rotational feature worked well, but several issues were identified to improve the treatment planning process. Research supported in part by CivaTech Oncology, Inc. for Dr. Rivard.

  4. Improved 3D seismic images of dynamic deformation in the Nankai Trough off Kumano

    NASA Astrophysics Data System (ADS)

    Shiraishi, K.; Moore, G. F.; Yamada, Y.; Kinoshita, M.; Sanada, Y.; Kimura, G.

    2016-12-01

    In order to improve the seismic reflection image of dynamic deformation and seismogenic faults in the Nankai trough, the 2006 Kumano 3D seismic dataset was reprocessed from the original field records by applying advanced technologies a decade after the data acquisition and initial processing. The 3D seismic survey revealed the geometry of the megasplay fault system. However, there were still unclear regions in the accretionary prism beneath the area from the Kumano basin to the outer ridge, because of sea floor multiple reflections and noise caused by the Kuroshio current. For the next stage of deep scientific drilling into the Nankai trough seismogenic zone, it is essential to know the exact shape and depth of the megasplay, and the fine structures around the drilling site. Three important improvements were achieved in data processing before imaging. First, full deghosting and optimized zero phasing techniques could recover broadband signals, especially at low frequencies, by compensating for ghost effects at both source and receiver, and removing source bubbles. Second, the multiple reflections were better attenuated by applying advanced techniques in combination, and the strong noise caused by the Kuroshio was carefully attenuated. Third, data regularization by means of optimized 4D trace interpolation was effective both to mitigate non-uniform fold distribution and to improve data quality. Further imaging processes led to clear improvements over previous results by applying PSTM with higher-order correction of VTI anisotropy, and PSDM based on a velocity model built by reflection tomography with TTI anisotropy. The final reflection images show new geological aspects, such as clear steeply dipping faults around the "notch", and fine-scale faults related to the main thrusts in the frontal thrust zone. The improved images will contribute greatly to understanding the deformation process in the old accretionary prism and the seismogenic features related to the megasplay faults.

  5. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying the broadband and non-stationary sound produced by these sources.

  6. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
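
    One of the two adaptive procedures compared above is the minimum variance distortionless response (MVDR) beamformer. A minimal sketch of the MVDR weight computation for a single focal point follows; the diagonal-loading level and variable names are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def mvdr_weights(R, steering):
        """Minimum-variance distortionless-response weights for one look/focus direction.

        R        : (M, M) cross-spectral (covariance) matrix of the M microphones
        steering : (M,) array manifold vector toward the focal point (spherical-wave
                   vectors in the near-field case)
        """
        # small diagonal loading for numerical robustness (illustrative level)
        R = R + 1e-6 * np.trace(R) / R.shape[0] * np.eye(R.shape[0])
        Rinv_a = np.linalg.solve(R, steering)
        return Rinv_a / (steering.conj() @ Rinv_a)

    # Beamformer output power at this focus: p = w.conj() @ R @ w
    ```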

  7. TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.

    PubMed

    Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas

    2013-07-15

    The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. It includes an intuitive user interface providing the means to precisely build a segmentation procedure and set-up analyses, without possessing programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. TANGO is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. They are both free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX. Distribution is under the GPL v.2 licence. thomas.boudier@snv.jussieu.fr Supplementary data are available at Bioinformatics online.

  8. Tracking and Quantifying Developmental Processes in C. elegans Using Open-source Tools.

    PubMed

    Dutta, Priyanka; Lehmann, Christina; Odedra, Devang; Singh, Deepika; Pohl, Christian

    2015-12-16

    Quantitatively capturing developmental processes is crucial to derive mechanistic models and key to identify and describe mutant phenotypes. Here protocols are presented for preparing embryos and adult C. elegans animals for short- and long-term time-lapse microscopy and methods for tracking and quantification of developmental processes. The methods presented are all based on C. elegans strains available from the Caenorhabditis Genetics Center and on open-source software that can be easily implemented in any laboratory independently of the microscopy system used. A reconstruction of a 3D cell-shape model using the modelling software IMOD, manual tracking of fluorescently-labeled subcellular structures using the multi-purpose image analysis program Endrov, and an analysis of cortical contractile flow using PIVlab (Time-Resolved Digital Particle Image Velocimetry Tool for MATLAB) are shown. It is discussed how these methods can also be deployed to quantitatively capture other developmental processes in different models, e.g., cell tracking and lineage tracing, tracking of vesicle flow.

  9. Computer-aided classification of breast microcalcification clusters: merging of features from image processing and radiologists

    NASA Astrophysics Data System (ADS)

    Lo, Joseph Y.; Gavrielides, Marios A.; Markey, Mia K.; Jesneck, Jonathan L.

    2003-05-01

    We developed an ensemble classifier for the task of computer-aided diagnosis of breast microcalcification clusters, which are very challenging to characterize for radiologists and computer models alike. The purpose of this study is to help radiologists identify whether suspicious calcification clusters are benign vs. malignant, such that they may potentially recommend fewer unnecessary biopsies for actually benign lesions. The data consist of mammographic features extracted by automated image processing algorithms as well as manually interpreted by radiologists according to a standardized lexicon. We used 292 cases from a publicly available mammography database. From each case, we extracted 22 image processing features pertaining to lesion morphology, 5 radiologist features also pertaining to morphology, and the patient age. Linear discriminant analysis (LDA) models were designed using each of the three data types. Each local model performed poorly; the best was one based upon image processing features, which yielded an ROC area index AZ of 0.59 +/- 0.03 and a partial AZ above 90% sensitivity of 0.08 +/- 0.03. We then developed ensemble models using different combinations of those data types, and these models all improved performance compared to the local models. The final ensemble model was based upon 5 features selected by stepwise LDA from all 28 available features. This ensemble performed with AZ of 0.69 +/- 0.03 and partial AZ of 0.21 +/- 0.04, which was statistically significantly better than the model based on the image processing features alone (p<0.001 and p=0.01 for full and partial AZ, respectively). This demonstrates the value of the radiologist-extracted features as a source of information for this task. It also suggests there is potential for improved performance using this ensemble classifier approach to combine different sources of currently available data.
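
    As an illustration of the kind of model described above (not the authors' code), the sketch below trains a linear discriminant classifier on a reduced feature set and reports the ROC area. scikit-learn's forward sequential feature selection is used as a stand-in for stepwise LDA feature selection, and the data are randomly generated placeholders.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # X: (n_cases, 28) merged image-processing + radiologist features + age (placeholder data)
    # y: benign (0) vs malignant (1) ground truth (placeholder labels)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(292, 28))
    y = rng.integers(0, 2, size=292)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    lda = LinearDiscriminantAnalysis()
    selector = SequentialFeatureSelector(lda, n_features_to_select=5, direction="forward")
    selector.fit(X_tr, y_tr)

    lda.fit(selector.transform(X_tr), y_tr)
    scores = lda.decision_function(selector.transform(X_te))
    print("ROC area (AUC):", roc_auc_score(y_te, scores))
    ```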

  10. Vanderbilt University Institute of Imaging Science Center for Computational Imaging XNAT: A multimodal data archive and processing environment.

    PubMed

    Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A

    2016-01-01

    The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database built on XNAT housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow for graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high performance computing center. All software is made available as open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation

    PubMed Central

    Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem

    2017-01-01

    Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing. Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images. Results: The EMCCD camera model was validated against experimentally acquired images and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm distance. These results agree well with the experimentally measured spatial resolution of 30–40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image. Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
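
    The simulation chains particle transport, a depth-dependent optical point-spread function and a stochastic EMCCD model. The toy sketch below reproduces only the last two stages in schematic form; the PSF growth rate, EM gain and read-noise values are illustrative assumptions, not parameters from the study.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simulate_frame(scintillation, depth_um, gain=300.0, read_noise=30.0, rng=None):
        """Toy forward model: blur a scintillation-photon map with a depth-dependent
        Gaussian PSF, then add shot noise, stochastic EM gain and read noise.

        Schematic stand-in for the GEANT4 + optics + EMCCD chain in the abstract;
        all parameter values are illustrative.
        """
        rng = rng or np.random.default_rng()
        sigma_px = 1.0 + 0.1 * depth_um                     # assumed PSF growth with depth
        blurred = gaussian_filter(scintillation, sigma_px)  # depth-dependent blur
        photons = rng.poisson(blurred)                      # photon shot noise
        amplified = rng.gamma(np.maximum(photons, 1e-9), gain)  # stochastic EM-register gain
        return amplified + rng.normal(0.0, read_noise, size=blurred.shape)  # readout noise
    ```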

  12. Advanced microlens and color filter process technology for the high-efficiency CMOS and CCD image sensors

    NASA Astrophysics Data System (ADS)

    Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu

    2000-12-01

    New markets are emerging for digital electronic imaging devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a full digital interface, brings image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes the captured image more realistic and colorful. A color filter transmits only light of the specific wavelength and transmittance matching the filter material itself, blocking the rest of the incoming light. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matched pixels of the image sensing array. From the signal captured at each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes color filters increasingly important. Although challenging, the color filter process is well worth developing, and we aim to provide short cycle time, excellent color quality, and high and stable yield. The key issues of the advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.

  13. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts’ Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831

  14. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    PubMed

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications like those in medical care and industrial processes. This technique in medical care has the ability to display important patient information graphically, which can supplement and help the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways of real-time image processing. Edge detection is an early stage in most of the image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified further to run in a fully parallel manner. This has been achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
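
    For reference, the CPU OpenCV calls for two of the edge detectors discussed above look roughly as follows; the input file name and thresholds are illustrative, and the GPU (CUDA) implementations benchmarked in the paper are not shown here.

    ```python
    import cv2
    import numpy as np

    # Load an OCT B-scan (or any grayscale image); the path is illustrative.
    img = cv2.imread("oct_bscan.png", cv2.IMREAD_GRAYSCALE)

    # Sobel gradients (horizontal and vertical) and their magnitude
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    sobel_mag = cv2.convertScaleAbs(np.hypot(gx, gy))

    # Canny edges with hysteresis thresholds (threshold values are illustrative)
    canny = cv2.Canny(img, 50, 150)

    cv2.imwrite("edges_sobel.png", sobel_mag)
    cv2.imwrite("edges_canny.png", canny)
    ```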

  15. Backprojection of volcanic tremor

    USGS Publications Warehouse

    Haney, Matthew M.

    2014-01-01

    Backprojection has become a powerful tool for imaging the rupture process of global earthquakes. We demonstrate the ability of backprojection to illuminate and track volcanic sources as well. We apply the method to the seismic network from Okmok Volcano, Alaska, at the time of an escalation in tremor during the 2008 eruption. Although we are able to focus the wavefield close to the location of the active cone, the network array response lacks sufficient resolution to reveal kilometer-scale changes in tremor location. By deconvolving the response in successive backprojection images, we enhance resolution and find that the tremor source moved toward an intracaldera lake prior to its escalation. The increased tremor therefore resulted from magma-water interaction, in agreement with the overall phreatomagmatic character of the eruption. Imaging of eruption tremor shows that time reversal methods, such as backprojection, can provide new insights into the temporal evolution of volcanic sources.
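
    As a schematic of the backprojection idea (not the authors' implementation), the sketch below delays and stacks envelope traces over a grid of trial source locations; the station geometry, origin-time handling and single phase velocity are simplified assumptions.

    ```python
    import numpy as np

    def backproject(traces, station_xy, grid_xy, t0_index, fs, c):
        """Delay-and-stack backprojection of envelope traces onto candidate source points.

        traces     : (n_stations, n_samples) seismogram envelopes
        station_xy : (n_stations, 2) station coordinates [m]
        grid_xy    : (n_points, 2) candidate source locations [m]
        t0_index   : sample index of the assumed origin time
        fs, c      : sampling rate [Hz] and assumed phase velocity [m/s]
        """
        n_sta, n_samp = traces.shape
        image = np.zeros(len(grid_xy))
        for k, (gx, gy) in enumerate(grid_xy):
            dist = np.hypot(station_xy[:, 0] - gx, station_xy[:, 1] - gy)
            idx = np.clip(t0_index + np.round(dist / c * fs).astype(int), 0, n_samp - 1)
            image[k] = traces[np.arange(n_sta), idx].sum()   # stack along the moveout
        return image
    ```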

  16. Design of a modified endoscope illuminator for spectral imaging of colorectal tissues

    NASA Astrophysics Data System (ADS)

    Browning, Craig M.; Mayes, Samuel; Rich, Thomas C.; Leavesley, Silas J.

    2017-02-01

    The gold standard for locating colonic polyps is white light endoscopy during a colonoscopy; however, polyps smaller than 5 mm can be easily missed. Modified procedures such as narrow band imaging have shown only marginal increases in detection rates. Spectral imaging is a potential solution to improve the sensitivity and specificity of colonoscopies by providing the ability to distinguish molecular fluorescence differences in tissues. The goal of this work is to implement a spectral endoscopic light source to acquire spectral image data of colorectal tissues. A beta-version endoscope light source was developed by retrofitting a white light endoscope light source (Olympus, CLK-4) with 16 narrow band LEDs. This redesigned beta-prototype uses high-power LEDs with a minimum output of 500 mW to provide sufficient spectral output (0.5 mW) through the endoscope. A mounting apparatus was designed to provide sufficient heat dissipation. Here, we report recent results of our tests to characterize the intensity output through the light source and endoscope, to determine the flat spectral output for imaging, and to quantify intensity losses through the endoscope. We also report preliminary spectral imaging data from transverse pig colon that demonstrate that practical spectral data can be acquired. Preliminary results from this revised prototype spectral endoscope system demonstrate that there is sufficient power for imaging and, potentially, for determining spectral differences between cancerous and normal tissue in ex vivo image pairs. Future work will focus on building a spectral library for the colorectal region and refining the user interface of the system for in vivo use.

  17. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.

  18. Measuring temporal stability of positron emission tomography standardized uptake value bias using long-lived sources in a multicenter network.

    PubMed

    Byrd, Darrin; Christopfel, Rebecca; Arabasz, Grae; Catana, Ciprian; Karp, Joel; Lodge, Martin A; Laymon, Charles; Moros, Eduardo G; Budzevich, Mikalai; Nehmeh, Sadek; Scheuermann, Joshua; Sunderland, John; Zhang, Jun; Kinahan, Paul

    2018-01-01

    Positron emission tomography (PET) is a quantitative imaging modality, but the computation of standardized uptake values (SUVs) requires several instruments to be correctly calibrated. Variability in the calibration process may lead to unreliable quantitation. Sealed source kits containing traceable amounts of [Formula: see text] were used to measure signal stability for 19 PET scanners at nine hospitals in the National Cancer Institute's Quantitative Imaging Network. Repeated measurements of the sources were performed on PET scanners and in dose calibrators. The measured scanner and dose calibrator signal biases were used to compute the bias in SUVs at multiple time points for each site over a 14-month period. Estimation of absolute SUV accuracy was confounded by bias from the solid phantoms' physical properties. On average, the intrascanner coefficient of variation for SUV measurements was 3.5%. Over the entire length of the study, single-scanner SUV values varied over a range of 11%. Dose calibrator bias was not correlated with scanner bias. Calibration factors from the image metadata were nearly as variable as scanner signal, and were correlated with signal for many scanners. SUVs often showed low intrascanner variability between successive measurements but were also prone to shifts in apparent bias, possibly in part due to scanner recalibrations that are part of regular scanner quality control. Biases of key factors in the computation of SUVs were not correlated and their temporal variations did not cancel out of the computation. Long-lived sources and image metadata may provide a check on the recalibration process.
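
    For reference, the body-weight-normalized SUV into which the measured calibration biases propagate is the tissue activity concentration divided by the injected dose per unit body weight. A minimal sketch with illustrative numbers (not values from the study):

    ```python
    def suv_bw(voxel_kbq_per_ml, injected_dose_mbq, body_weight_kg):
        """Body-weight-normalised standardized uptake value.

        SUV = tissue activity concentration / (injected dose / body weight),
        with the decay-corrected dose and concentration in consistent units
        (kBq/mL and MBq here, weight in kg, assuming 1 g/mL tissue density).
        """
        return voxel_kbq_per_ml / (injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0))

    # A 1% bias in the scanner calibration or the dose calibrator propagates
    # directly into roughly a 1% bias in the SUV, which is why both are tracked here.
    print(suv_bw(5.0, 370.0, 70.0))   # ≈ 0.95 for these illustrative numbers
    ```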

  19. Extending RTM Imaging With a Focus on Head Waves

    NASA Astrophysics Data System (ADS)

    Holicki, Max; Drijkoningen, Guy

    2016-04-01

    Conventional industry seismic imaging predominantly focuses on pre-critical reflections, muting post-critical arrivals in the process. This standard approach neglects a lot of information present in the recorded wave field. This negligence has been partially remedied with the inclusion of head waves in more advanced imaging techniques, like Full Waveform Inversion (FWI). We would like to see post-critical information leave the realm of labour-intensive travel-time picking and tomographic inversion towards full migration, to improve subsurface imaging and parameter estimation. We present a novel seismic imaging approach aimed at exploiting post-critical information, using the constant travel path for head waves between shots. To this end, we propose to generalize conventional Reverse Time Migration (RTM) to scenarios where the sources for the forward and backward propagated wave fields are not coincident. RTM functions on the principle that backward propagated receiver data, due to a source at some location, must overlap at subsurface scatterers with the forward propagated source wave field from the same source location. Where the wave fields overlap in the subsurface there is a peak at the zero-lag cross-correlation, and this peak is used for the imaging. For the inclusion of head waves, we propose to relax the condition of coincident sources. This means that wave fields from non-coincident sources will no longer overlap properly in the subsurface. We can make the wave fields overlap in the subsurface again by time shifting either the forward or backward propagated wave field until the wave fields overlap. This is the same as imaging at non-zero cross-correlation lags, where the lag is the travel time difference between the two wave fields for a given event. This allows us to steer which arrivals we would like to use for imaging. In the simplest case we could use Eikonal travel times to generate our migration image, or we could exclusively image the subsurface with the head wave from the nth layer. To illustrate the method we apply it to a layered Earth model with five layers and compare it to conventional RTM. We show that conventional RTM highlights interfaces, while our head-wave based images highlight layers, producing fundamentally different images. We also demonstrate that our proposed imaging scheme is more sensitive to the velocity model than conventional RTM, which is important for improved velocity model building in the future.
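
    The imaging condition described above, cross-correlation of the forward and backward propagated wave fields at a chosen time lag, can be sketched as follows; the wavefield array layout is an assumption and the propagation itself is omitted.

    ```python
    import numpy as np

    def crosscorr_image(src_wavefield, rec_wavefield, lag=0):
        """Cross-correlation imaging condition at a chosen time lag.

        src_wavefield, rec_wavefield : (n_t, n_z, n_x) forward- and backward-
        propagated wavefields on the same grid; lag is in time samples.
        lag = 0 reproduces the conventional RTM imaging condition; a non-zero lag
        time-shifts one wavefield, which is how arrivals from non-coincident
        sources (e.g. head-wave paths) can be brought back into alignment.
        """
        n_t = src_wavefield.shape[0]
        image = np.zeros(src_wavefield.shape[1:])
        for t in range(n_t):
            ts = t + lag
            if 0 <= ts < n_t:
                image += src_wavefield[t] * rec_wavefield[ts]
        return image
    ```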

  20. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
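
    Of the three deconvolution approaches mentioned above, the Tikhonov-regularized inverse is the simplest to sketch. The version below works in k-space and assumes the susceptibility-to-field kernel has already been built; the kernel construction, and the split Bregman TV solver actually favored in the paper, are not shown, and the regularization value is illustrative.

    ```python
    import numpy as np

    def tikhonov_deconvolve(field_map, kernel_k, lam=1e-2):
        """Tikhonov-regularized deconvolution of a field map in k-space.

        field_map : 3-D field perturbation map (e.g. derived from the MR phase image)
        kernel_k  : 3-D k-space kernel relating susceptibility to field (assumed given)
        lam       : regularization parameter (illustrative)
        """
        F = np.fft.fftn(field_map)
        chi_k = np.conj(kernel_k) * F / (np.abs(kernel_k) ** 2 + lam)
        return np.real(np.fft.ifftn(chi_k))
    ```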

  1. Microscope-integrated optical coherence tomography for image-aided positioning of glaucoma surgery

    NASA Astrophysics Data System (ADS)

    Li, Xiqi; Wei, Ling; Dong, Xuechuan; Huang, Ping; Zhang, Chun; He, Yi; Shi, Guohua; Zhang, Yudong

    2015-07-01

    Most glaucoma surgeries involve creating new aqueous outflow pathways with the use of a small surgical instrument. This article reports a microscope-integrated, real-time, high-speed, swept-source optical coherence tomography (SS-OCT) system with a 1310-nm light source for glaucoma surgery. A special mechanism was designed to produce an adjustable system suitable for use in surgery. A two-graphics-processing-unit architecture was used to speed up the data processing and real-time volumetric rendering. The position of the surgical instrument can be monitored and measured using the microscope and a grid-inserted image from the SS-OCT. Finally, experiments were simulated to assess the effectiveness of this integrated system. Experimental results show that this system is a suitable positioning tool for glaucoma surgery.

  2. STARL -- a Program to Correct CCD Image Defects

    NASA Astrophysics Data System (ADS)

    Narbutis, D.; Vanagas, R.; Vansevičius, V.

    We present a program tool, STARL, designed for automatic detection and correction of various defects in CCD images. It uses a genetic algorithm for deblending and restoring overlapping saturated stars in crowded stellar fields. Using Subaru Telescope Suprime-Cam images, we demonstrate that the program can be implemented in wide-field survey data processing pipelines for the production of high quality color mosaics. The source code and examples are available at the STARL website.

  3. Synthetic Aperture Imaging Polarimeter: Postprint

    DTIC Science & Technology

    2010-02-01

    mechanical design of the SAIP prototype revolves around the concept of a modular array. The modular aspect allows for the array to be built in ... imagery of source. The top row images are of the actual fringe pattern incident on the SAIP prototype array. These pictures were taken through the ... processed images associated with each of the inputs. The results demonstrated that the SAIP prototype array works in conjunction with the algorithm

  4. Processing challenges in the XMM-Newton slew survey

    NASA Astrophysics Data System (ADS)

    Saxton, Richard D.; Altieri, Bruno; Read, Andrew M.; Freyberg, Michael J.; Esquej, M. P.; Bermejo, Diego

    2005-08-01

    The great collecting area of the mirrors coupled with the high quantum efficiency of the EPIC detectors have made XMM-Newton the most sensitive X-ray observatory flown to date. This is particularly evident during slew exposures which, while giving only 15 seconds of on-source time, actually constitute a 2-10 keV survey ten times deeper than current "all-sky" catalogues. Here we report on progress towards making a catalogue of slew detections constructed from the full, 0.2-12 keV energy band and discuss the challenges associated with processing the slew data. The fast (90 degrees per hour) slew speed results in images which are smeared, by different amounts depending on the readout mode, effectively changing the form of the point spread function. The extremely low background in slew images changes the optimum source searching criteria such that searching a single image using the full energy band is seen to be more sensitive than splitting the data into discrete energy bands. False detections due to optical loading by bright stars, the wings of the PSF in very bright sources and single-frame detector flashes are considered and techniques for identifying and removing these spurious sources from the final catalogue are outlined. Finally, the attitude reconstruction of the satellite during the slewing maneuver is complex. We discuss the implications of this on the positional accuracy of the catalogue.

  5. The effect of different standard illumination conditions on color balance failure in offset printed images on glossy coated paper expressed by color difference

    NASA Astrophysics Data System (ADS)

    Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.

    2012-05-01

    One of the biggest problems in color reproduction processes is color shifts occurring when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset printed images expressed by color difference and color gamut changes depending on three of the most commonly used in practice illuminants, CIE D50, CIE F2 and CIE A. The results obtained are important from a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations for different illuminants.
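
    The color shifts discussed above are typically quantified as a CIELAB color difference between the appearance of the same printed patch under two illuminants. A minimal sketch of the CIE 1976 ΔE*ab computation, with illustrative (not measured) Lab values:

    ```python
    import numpy as np

    def delta_e_ab(lab1, lab2):
        """CIE 1976 colour difference ΔE*ab between two CIELAB triplets."""
        return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

    # Illustrative: the same printed patch evaluated under D50 and illuminant A
    patch_d50 = (52.3, 38.1, 24.7)
    patch_a   = (53.0, 41.5, 30.2)
    print(delta_e_ab(patch_d50, patch_a))   # ≈ 6.5, a clearly visible shift
    ```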

  6. A SmallSat Approach for Global Imaging Spectroscopy of the Earth SYSTEM Enabled by Advanced Technology

    NASA Astrophysics Data System (ADS)

    Green, R. O.; Asner, G. P.; Thompson, D. R.; Mouroulis, P.; Eastwood, M. L.; Chien, S.

    2017-12-01

    Global coverage imaging spectroscopy in the solar reflected energy portion of the spectrum has been identified by the Earth Decadal Survey as an important measurement that enables a diverse set of new and time critical science objectives/targets for the Earth system. These science objectives include biodiversity; ecosystem function; ecosystem biogeochemistry; initialization and constraint of global ecosystem models; fire fuel, combustion, burn severity, and recovery; surface mineralogy, geochemistry, geologic processes, soils, and hazards; global mineral dust source composition; cryospheric albedo, energy balance, and melting; coastal and inland water habitats; coral reefs; point source gas emission; cloud thermodynamic phase; urban system properties; and more. Traceability of these science objectives to spectroscopic measurement in the visible to short wavelength infrared portion of the spectrum is summarized. New approaches, including satellite constellations, to acquire these global imaging spectroscopy measurements is presented drawing from recent advances in optical design, detector technology, instrument architecture, thermal control, on-board processing, data storage, and downlink.

  7. New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline

    NASA Technical Reports Server (NTRS)

    Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.; hide

    2012-01-01

    We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approx 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field-of-view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_-adi under a BSD license.
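
    The trimmed mean used to combine the image sequence can be sketched as follows; the trimming proportion and array layout are illustrative assumptions rather than the ACORNS-ADI defaults.

    ```python
    import numpy as np
    from scipy.stats import trim_mean

    def trimmed_combine(frames, proportion=0.1):
        """Combine a registered image sequence with a per-pixel trimmed mean.

        frames     : (n_frames, ny, nx) registered exposures
        proportion : fraction trimmed from each end of the sorted stack per pixel
        """
        return trim_mean(np.asarray(frames, float), proportion, axis=0)
    ```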

  8. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method for collecting data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses still remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University. The course uses the textbook "Introductory Digital Image Processing, A Remote Sensing Perspective" authored by John Jensen. The textbook is widely adopted by geography departments around the world for training students in digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images to be used for the course. Commercial desktop remote sensing software, such as ERDAS, is used by students to do the lab exercises. The students have to do the exercises in the lab and can only use the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises for the course. With GeoBrain, students now can explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and any place they want, as long as they can access the Internet through a Web browser. Feedback from students has been very positive about the learning experience in digital image processing with the help of GeoBrain web processing services. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

  9. An Automated Scheme for the Large-Scale Survey of Herbig-Haro Objects

    NASA Astrophysics Data System (ADS)

    Deng, Licai; Yang, Ji; Zheng, Zhongyuan; Jiang, Zhaoji

    2001-04-01

    Owing to their spectral properties, Herbig-Haro (HH) objects can be discovered using photometric methods through a combination of filters, sampling the characteristic spectral lines and the nearby continuum. The data are commonly processed through direct visual inspection of the images. To make data reduction more efficient and the results more uniform and complete, an automated searching scheme for HH objects is developed to manipulate the images using IRAF. This approach helps to extract images with only intrinsic HH emissions. By using this scheme, the pointlike stellar sources and extended nebulous sources with continuum emission can be eliminated from the original images. The objects with only characteristic HH emission become prominent and can be easily picked up. In this paper our scheme is illustrated by a sample field and has been applied to our surveys for HH objects.
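    A minimal sketch of the continuum-removal step implied above, assuming a narrowband emission-line frame, a matched continuum frame, and a boolean mask of field stars (all names hypothetical; the original scheme was implemented with IRAF scripts, not Python):

    ```python
    import numpy as np

    def continuum_subtract(narrowband, continuum, star_mask):
        """Remove stars and continuum nebulosity, leaving line-emitting knots.

        The continuum frame is scaled so that stellar fluxes cancel in the
        difference image; objects with pure emission-line spectra (such as
        HH knots) remain prominent in the result.
        """
        scale = np.median(narrowband[star_mask] / continuum[star_mask])
        return narrowband - scale * continuum
    ```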

  10. A platform for European CMOS image sensors for space applications

    NASA Astrophysics Data System (ADS)

    Minoglou, K.; San Segundo Bello, D.; Sabuncuoglu Tezcan, D.; Haspeslagh, L.; Van Olmen, J.; Merry, B.; Cavaco, C.; Mazzamuto, F.; Toqué-Trésonne, I.; Moirin, R.; Brouwer, M.; Toccafondi, M.; Preti, G.; Rosmeulen, M.; De Moor, P.

    2017-11-01

    Both ESA and the EC have identified the need for a supply chain of CMOS imagers for space applications which uses solely European sources. An essential requirement on this supply chain is the platformization of the process modules, in particular when it comes to very specific processing steps, such as those required for the manufacturing of backside illuminated image sensors. This is the goal of the European (EC/FP7/SPACE) funded project EUROCIS. All EUROCIS partners have excellent know-how and track record in the expertise fields required. Imec has been leading the imager chip design and the front side and backside processing. LASSE, as a major player in the laser annealing supplier sector, has been focusing on the optimization of the process related to the backside passivation of the image sensors. TNO, known worldwide as a top developer of instruments for scientific research, including space research and sensors for satellites, has contributed in the domain of optical layers for space instruments and optimized antireflective coatings. Finally, Selex ES, as a world-wide leader for manufacturing instruments with expertise in various space missions and programs, has defined the image sensor specifications and is taking care of the final device characterization. In this paper, an overview of the process flow, the results on test structures and imagers processed using this platform will be presented.

  11. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of the DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.

  12. Formation of the image on the receiver of thermal radiation

    NASA Astrophysics Data System (ADS)

    Akimenko, Tatiana A.

    2018-04-01

    The formation of the thermal picture of the observed scene, with verification of the quality of the thermal images obtained, is one of the important stages of the technological process that determine the quality of a thermal imaging observation system. In this article we propose a model for the formation of a thermal picture of a scene, which must take into account: the features of the object of observation as the source of the signal; and signal transmission through the physical elements of the thermal imaging system that perform signal processing at the optical, photoelectronic and electronic stages, which determines the final parameters of the signal and its compliance with the requirements for thermal information and measurement systems.

  13. Different methods of image segmentation in the process of meat marbling evaluation

    NASA Astrophysics Data System (ADS)

    Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.

    2015-07-01

    Assessment of the level of marbling in meat based on digital images is very popular, as computer vision tools are becoming more and more advanced. However, considering muscle cross-sections as the data source for marbling level evaluation, there are still a few problems to cope with. There is a need for an accurate method which would facilitate this evaluation procedure and increase its accuracy. The presented research was conducted in order to compare the effect of different image segmentation tools, considering their usefulness in meat marbling evaluation on anatomical muscle cross-sections. This study is an initial trial in the presented field of research and an introduction to ultrasonic image processing and analysis.

  14. Concrete Slump Classification using GLCM Feature Extraction

    NASA Astrophysics Data System (ADS)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied in analyzing concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump by using image processing techniques. For this purpose, concrete mix designs of 30 MPa compressive strength with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm were analysed. Images were acquired with a Nikon D-7000 camera set up at high resolution. In the first step, the RGB images were converted to greyscale and then cropped to 1024 x 1024 pixels. The cropped images were then analysed with an open-source program to extract GLCM features. The results show that contrast decreases for higher slump, while correlation, energy, and homogeneity increase.
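    A minimal sketch of GLCM feature extraction of the kind described above, using scikit-image; the function names graycomatrix/graycoprops apply to recent scikit-image releases (older versions spell them greycomatrix/greycoprops), and the distance and angle choices are illustrative, not the study's settings:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_patch):
        """Compute GLCM texture features for an 8-bit grayscale image patch."""
        glcm = graycomatrix(gray_patch, distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        # Average each property over the chosen distances and angles.
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ('contrast', 'correlation', 'energy', 'homogeneity')}
    ```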

  15. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  16. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  17. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in-based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are increasingly a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is PC-based GIS and remote sensing software, comprising a complete package of image processing, spatial analysis and digital mapping, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on his machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins, and unfold our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter, especially, can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable and on-demand cloud computing environments.

  18. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging

    PubMed Central

    Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.

    2014-01-01

    Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/ Contact: guy.hagen@lf1.cuni.cz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516

  19. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  20. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) allows imaging of developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  1. Streamlined, Inexpensive 3D Printing of the Brain and Skull.

    PubMed

    Naftulin, Jason S; Kimchi, Eyal Y; Cash, Sydney S

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3-4 in consumable plastic filament as described, and the total process takes 14-17 hours, almost all of which is unsupervised (preprocessing = 4-6 hr; printing = 9-11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1-5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes.
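    As a hedged sketch of the surface-extraction and STL-export step in such a pipeline, the snippet below assumes the segmentation has already produced a binary brain (or skull) mask as a NumPy volume with known voxel spacing; it uses scikit-image marching cubes and the numpy-stl package, which is one of several ways to write an STL file and not necessarily the exact tool chain used by the authors:

    ```python
    import numpy as np
    from skimage import measure
    from stl import mesh  # provided by the numpy-stl package

    def mask_to_stl(mask, spacing, out_path):
        """Convert a binary segmentation volume into a printable STL surface.

        mask: 3-D boolean/0-1 array (e.g. one brain hemisphere segmentation).
        spacing: voxel size in mm along each axis, so the print is life-sized.
        """
        verts, faces, _normals, _values = measure.marching_cubes(
            mask.astype(np.float32), level=0.5, spacing=spacing)
        surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
        for i, face in enumerate(faces):
            surface.vectors[i] = verts[face]  # three vertices per triangle
        surface.save(out_path)
    ```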

  2. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    NASA Astrophysics Data System (ADS)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
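    The pexpect pattern mentioned above can be illustrated with a generic example that drives an interactive Python interpreter; the prompt string and the commands sent are placeholders, not drive-casa's actual interaction with CASA:

    ```python
    import pexpect

    # Spawn an interactive process and emulate a user typing at its terminal.
    child = pexpect.spawn('python3', encoding='utf-8', timeout=30)
    child.expect('>>> ')                # wait for the interpreter prompt
    child.sendline('print(2 + 2)')      # send a command as a user would type it
    child.expect('>>> ')                # block until the command has completed
    print(child.before)                 # everything printed before the new prompt
    child.sendline('exit()')
    child.expect(pexpect.EOF)
    ```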

  3. [Color processing of ultrasonographic images in extracorporeal lithotripsy].

    PubMed

    Lardennois, B; Ziade, A; Walter, K

    1991-02-01

    A number of technical difficulties are encountered in the ultrasonographic detection of renal stones which unfortunately limit its performance. The margin of error of firing in extracorporeal shock-wave lithotripsy (ESWL) must be reduced to a minimum. The role of ultrasonographic monitoring during lithotripsy is also essential: continuous control of the focussing of the shock-wave beam and assessment of the quality of fragmentation. The authors propose to improve ultrasonographic imaging in ESWL by means of intraoperative colour processing of the stone. Each shot must be directed to its target with an economy of vision avoiding excessive fatigue. The principle of the technique consists of digitalization of the ultrasound video images using a Macintosh Mac 2 computer. The Graphis Paint II program is interfaced directly with the Quick Capture card and recovers the images on its work surface in real time. The program is then able to attribute to each of the 256 shades of grey any one of the 16.6 million colours of the Macintosh universe, with specific intensity and saturation. During fragmentation, using the principle of a palette, the stone changes colour from green to red indicating complete fragmentation. A Color Space card converts the digital image obtained into an analogue video source which is visualized on the monitor. It can be superimposed and/or juxtaposed with the source image by means of a multi-standard mixing table. Colour processing of ultrasonographic images in extracorporeal shock-wave lithotripsy allows better visualization of the stones and better follow-up of fragmentation, and allows the shock-wave treatment to be stopped earlier. It increases the stone-free performance at 6 months. This configuration will eventually be able to be integrated into the ultrasound apparatus itself.
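    A minimal sketch of the palette idea described above, mapping the 256 grey levels of a digitized ultrasound frame through a green-to-red lookup table; the linear ramp is a hypothetical palette, not the one built in the original Graphis Paint II setup:

    ```python
    import numpy as np

    def make_palette():
        """Build a 256-entry RGB lookup table running from green to red."""
        lut = np.zeros((256, 3), dtype=np.uint8)
        lut[:, 0] = np.linspace(0, 255, 256)   # red component ramps up
        lut[:, 1] = np.linspace(255, 0, 256)   # green component ramps down
        return lut

    def colorize(gray_frame, lut):
        """Apply the lookup table to a 2-D uint8 ultrasound frame."""
        return lut[gray_frame]                 # fancy indexing maps each pixel to RGB
    ```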

  4. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    NASA Astrophysics Data System (ADS)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method taking advantage of higher-order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating Multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The high-frequency subevents are located at the edges of geodetic slip regions, which are correlated to the stopping phases associated with rupture speed reduction when the earthquake arrests.

  5. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  6. Exploiting passive polarimetric imagery for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Vimal Thilak Krishna, Thilakam

    Polarization is a property of light or electromagnetic radiation that conveys information about the orientation of the transverse electric and magnetic fields. The polarization of reflected light complements other electromagnetic radiation attributes such as intensity, frequency, or spectral characteristics. A passive polarization based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. The polarization due to surface reflections from such objects contains information about the targets that can be exploited in remote sensing applications such as target detection, target classification, object recognition and shape extraction/recognition. In recent years, there has been renewed interest in the use of passive polarization information in remote sensing applications. The goal of our research is to design image processing algorithms for remote sensing applications by utilizing physics-based models that describe the polarization imparted by optical scattering from an object. In this dissertation, we present a method to estimate the complex index of refraction and reflection angle from multiple polarization measurements. This method employs a polarimetric bidirectional reflectance distribution function (pBRDF) that accounts for polarization due to specular scattering. The parameters of interest are derived by utilizing a nonlinear least squares estimation algorithm, and computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Furthermore, laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle. We also study the use of extracted index of refraction as a feature vector in designing two important image processing applications, namely image segmentation and material classification so that the resulting systems are largely invariant to illumination source location. This is in contrast to most passive polarization-based image processing algorithms proposed in the literature that employ quantities such as Stokes vectors and the degree of polarization and which are not robust to changes in illumination conditions. The estimated index of refraction, on the other hand, is invariant to illumination conditions and hence can be used as an input to image processing algorithms. The proposed estimation framework also is extended to the case where the position of the observer (camera) moves between measurements while that of the source remains fixed. Finally, we explore briefly the topic of parameter estimation for a generalized model that accounts for both specular and volumetric scattering. A combination of simulation and experimental results are provided to evaluate the effectiveness of the above methods.

  7. The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition

    PubMed Central

    Zeman, Astrid; Obst, Oliver; Brooks, Kevin R.; Rich, Anina N.

    2013-01-01

    Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections. PMID:23457510

  8. Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel

    1998-01-01

    This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.

  9. Applications of High-speed motion analysis system on Solid Rocket Motor (SRM)

    NASA Astrophysics Data System (ADS)

    Liu, Yang; He, Guo-qiang; Li, Jiang; Liu, Pei-jin; Chen, Jian

    2007-01-01

    The high-speed motion analysis system can record images at up to 12,000 fps and analyze them with the image processing system. The system stores data and images directly in electronic memory, which is convenient for managing and analyzing them. The high-speed motion analysis system and the X-ray radiography system were combined to establish a high-speed real-time X-ray radiography system, which can diagnose and measure dynamic, high-speed processes inside opaque objects. Image processing software was developed to improve the quality of the original images and acquire more precise information. Typical applications of the high-speed motion analysis system on solid rocket motors (SRM) are introduced in this paper. Studies of the anomalous combustion of solid propellant grains with defects, real-time measurement of insulator erosion, the explosion incision process of a motor, the structure and wave character of the plume during ignition and flameout, measurement of end burning of solid propellant, measurement of the flame front, and compatibility between airplane and missile during missile launching were carried out using the high-speed motion analysis system. Significant results were achieved through this research. For the application of the high-speed motion analysis system to solid rocket motors, key problems that damaged image quality, such as motor vibration, electrical source instability, geometric aberration, and noise disturbance, were solved. The image processing software that was developed improved the capability of measuring image characteristics. The experimental results showed that the system is a powerful facility for studying instantaneous, high-speed processes in solid rocket motors. With the development of image processing techniques, the capability of the high-speed motion analysis system has been enhanced.

  10. In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-01-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.

  11. Review of current progress in nanometrology with the helium ion microscope

    NASA Astrophysics Data System (ADS)

    Postek, Michael T.; Vladár, András; Archie, Charles; Ming, Bin

    2011-02-01

    Scanning electron microscopy has been employed as an imaging and measurement tool for more than 50 years and it continues as a primary tool in many research and manufacturing facilities across the world. A new challenger to this work is the helium ion microscope (HIM). The HIM is a new imaging and metrology technology. Essentially, substitution of the electron source with a helium ion source yields a tool visually similar in function to the scanning electron microscope, but very different in the fundamental imaging and measurement process. The imaged and measured signal originates differently than in the scanning electron microscope and that fact and its single atom source diameter may be able to push the obtainable resolution lower, provide greater depth-of-field and ultimately improve the metrology. Successful imaging and metrology with this instrument entails understanding and modeling of new ion beam/specimen interaction physics. As a new methodology, HIM is beginning to show promise and the abundance of potentially advantageous applications for nanometrology has yet to be fully exploited. This paper discusses some of the progress made at NIST in collaboration with IBM to understand the science behind this new technology.

  12. A review of biomedical multiphoton microscopy and its laser sources

    NASA Astrophysics Data System (ADS)

    Lefort, Claire

    2017-10-01

    Multiphoton microscopy (MPM) has been the subject of major development efforts for about 25 years for imaging biological specimens at the micron scale, and is presented as an elegant alternative to classical fluorescence methods such as confocal microscopy. In this topical review, the main interests and technical requirements of MPM are addressed, with a focus on the crucial role of the excitation source for optimization of multiphoton processes. Then, an overview of the different sources successfully demonstrated in the literature for MPM is presented, and their physical parameters are inventoried. A classification of these sources as a function of their ability to optimize multiphoton processes is proposed, following a protocol found in the literature. Starting from these considerations, a suggestion of a possible identikit of the ideal laser source for MPM concludes this topical review. Dedicated to Martin.

  13. Quantification technology study on flaws in steam-filled pipelines based on image processing

    NASA Astrophysics Data System (ADS)

    Sun, Lina; Yuan, Peixin

    2009-07-01

    Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and pipeline flaw quantification methods are proposed. Defective and non-defective columns and rows in the gray image were extracted and an oscillogram was obtained. Defects can be distinguished by dividing the two gray images for contrast. According to the gray values of defects with different thicknesses, a gray-level versus depth curve is established. Exponential and polynomial fitting are used to obtain the mathematical model of beam attenuation as the beam penetrates the pipeline, and thus the flaw depth dimension is obtained. This paper reports tests on PPR pipe with simulated hole flaws and crack flaws; an X-ray source voltage of 135 kV was used in the testing. Test results show that the X-ray image processing method, which meets the needs of highly efficient flaw detection and provides quality safeguards for thick oil recovery, can be used successfully in detecting corrosion of insulated pipe.
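    A hedged sketch of the gray-level/thickness calibration and exponential fitting step described above; the calibration numbers are invented for illustration, and the Beer-Lambert-style model is the generic form rather than necessarily the exact model fitted in the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical calibration data: mean gray value measured behind steps
    # of known penetrated wall thickness (mm).
    thickness = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
    gray = np.array([210.0, 165.0, 131.0, 104.0, 83.0])

    def attenuation(t, g0, mu):
        """Gray level decays roughly exponentially with penetrated thickness."""
        return g0 * np.exp(-mu * t)

    (g0, mu), _ = curve_fit(attenuation, thickness, gray, p0=(255.0, 0.1))

    def flaw_depth(gray_defect, gray_sound):
        """Estimate the wall loss at a flaw from its gray-level excess."""
        return (np.log(gray_defect) - np.log(gray_sound)) / mu
    ```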

  14. Quantification technology study on flaws in steam-filled pipelines based on image processing

    NASA Astrophysics Data System (ADS)

    Yuan, Pei-xin; Cong, Jia-hui; Chen, Bo

    2008-03-01

    Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and pipeline flaw quantification methods are proposed. Defective and non-defective columns and rows in the gray image were extracted and an oscillogram was obtained. Defects can be distinguished by dividing the two gray images for contrast. According to the gray values of defects with different thicknesses, a gray-level versus depth curve is established. Exponential and polynomial fitting are used to obtain the mathematical model of beam attenuation as the beam penetrates the pipeline, and thus the flaw depth dimension is obtained. This paper reports tests on PPR pipe with simulated hole flaws and crack flaws. The X-ray source tube voltage was set to 130 kV and the tube current to 1.5 mA. Test results show that the X-ray image processing methods, which meet the needs of highly efficient flaw detection and provide quality safeguards for thick oil recovery, can be used successfully in detecting corrosion of insulated pipe.

  15. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enables the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
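    A minimal sketch of the line-by-line statistics update that makes such an incremental MNF possible: running sums for the mean and the second moment are accumulated per scan line, and the covariance can be read out at any time. The class name and the shift-difference noise estimate mentioned in the closing comment are illustrative, not the published implementation (which is available at the URL above):

    ```python
    import numpy as np

    class RunningCovariance:
        """Accumulate mean and covariance one hyperspectral scan line at a time."""

        def __init__(self, n_bands):
            self.n = 0
            self.sum = np.zeros(n_bands)
            self.outer = np.zeros((n_bands, n_bands))

        def update(self, line):
            """line: (n_pixels, n_bands) spectra from one scan line."""
            self.n += line.shape[0]
            self.sum += line.sum(axis=0)
            self.outer += line.T @ line

        def covariance(self):
            mean = self.sum / self.n
            return self.outer / self.n - np.outer(mean, mean)

    # For MNF, one accumulator is fed the raw spectra (signal statistics) and a
    # second one is fed per-line pixel differences as a simple noise estimate.
    ```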

  16. Wide Field Collimator 2 (WFC2) for GOES Imager and Sounder

    NASA Technical Reports Server (NTRS)

    Etemad, Shahriar; Bremer, James C.; Zukowski, Barbara J.; Pasquale, Bert A.; zukowski, Tmitri J.; Prince, Robert E.; O'Neill, Patrick A.; Ross, Robert W.

    2004-01-01

    Two of the GOES instruments, the Imager and the Sounder, perform scans of the Earth to provide a full-disc picture of the Earth. To verify the entire scan process, an image of a target that covers an 18 deg. circular field-of-view is collimated and projected into the field of regard of each instrument. The Wide Field Collimator 2 (WFC2) has many advantages over its predecessor, WFC1, including lower thermal dissipation, higher far-field MTF, a smaller package, and a more intuitive (faster) focusing process. The illumination source is an LED array that emits in a narrow spectral band centered at 689 nm, within the visible spectral bands of the Imager and Sounder. The illumination level can be continuously adjusted electronically. Lower thermal dissipation eliminates the need for forced convection cooling and minimizes the time to reach thermal stability. The lens system has been optimized for the illumination source spectral output and athermalized to remain in focus during bulk temperature changes within the laboratory environment. The MTF of the lens is higher than that of the WFC1 at the edge of the FOV. The target is focused in three orthogonal motions, controlled by an ergonomic system that saves substantial time and produces a sharper focus. Key words: Collimator, GOES, Imager, Sounder, Projector

  17. An application of stereoscopy and image processing in forensics: recovering obliterated firearms serial number

    NASA Astrophysics Data System (ADS)

    da Silva Nunes, L. C.; dos Santos, Paulo Acioly M.

    2004-10-01

    We present an application of stereoscopy to recovering obliterated firearms serial numbers. We investigate a promising new, inexpensive combined method using both non-destructive and destructive techniques. With the use of a stereomicroscope coupled with a digital camera and a flexible cold light source, we can capture an image of the damaged area, and with continuous polishing, and sometimes with the help of image processing techniques, we can enhance the observed images, which can also be recorded as evidence. This method has already proven to be useful, in certain cases, on aluminum dotted pistol frames whose serial number is printed with a laser, when etching techniques are not successful. We can also observe acid-treated steel surfaces and enhance the images of recovered serial numbers, which sometimes lack definition.

  18. CognitionMaster: an object-based image analysis framework

    PubMed Central

    2013-01-01

    Background: Automated image analysis methods are becoming more and more important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and if user interactivity on the object level is desired. Results: In this paper we present open-source software that facilitates the analysis of content features and object relationships by using objects as the basic processing unit instead of individual pixels. Our approach also enables users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and two-photon fluorescence microscopy data about bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions: We introduce an open source software system that offers object-based image analysis. The object-based concept allows for straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542

  19. Direct Imaging of Stellar Surfaces: Results from the Stellar Imager (SI) Vision Mission Study

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth; Schrijver, Carolus; Karovska, Margarita

    2006-01-01

    The Stellar Imager (SI) is a UV-Optical, Space-Based Interferometer designed to enable 0.1 milli-arcsecond (mas) spectral imaging of stellar surfaces and stellar interiors (via asteroseismology) and of the Universe in general. SI is identified as a "Flagship and Landmark Discovery Mission'' in the 2005 Sun Solar System Connection (SSSC) Roadmap and as a candidate for a "Pathways to Life Observatory'' in the Exploration of the Universe Division (EUD) Roadmap (May, 2005). The ultra-sharp images of the Stellar Imager will revolutionize our view of many dynamic astrophysical processes: The 0.1 mas resolution of this deep-space telescope will transform point sources into extended sources, and snapshots into evolving views. SI's science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. SI's prime goal is to enable long-term forecasting of solar activity and the space weather that it drives in support of the Living With a Star program in the Exploration Era. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. In this paper we will discuss the results of the SI Vision Mission Study, elaborating on the science goals of the SI Mission and a mission architecture that could meet those goals.

  20. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component that provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, Javascript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.
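    A minimal, standard-library sketch of the HTTP relay pattern described above, in which a computation server accepts a JSON job description via POST and returns a JSON result; the endpoint, port, and payload fields are hypothetical and stand in for the platform's actual Delphi/Python/PHP services:

    ```python
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ComputeHandler(BaseHTTPRequestHandler):
        """Toy computation-server endpoint: receive a job, return a result."""

        def do_POST(self):
            length = int(self.headers['Content-Length'])
            job = json.loads(self.rfile.read(length))
            result = {'task': job.get('task', 'unknown'), 'status': 'done'}
            body = json.dumps(result).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        # The web-server backend would relay user requests to this endpoint.
        HTTPServer(('localhost', 8000), ComputeHandler).serve_forever()
    ```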

  1. DIY digital holography

    NASA Astrophysics Data System (ADS)

    Zacharovas, Stanislovas; Nikolskij, Andrej; Kuchin, Jevgenij

    2011-02-01

    We have created a programming tool which uses image data provided by a webcam connected to a personal computer and gives the user the ability to preview the future digital hologram on the computer screen before sending the video data to holographic printing companies. In order to print a digital hologram, one needs a sequence of images of the same scene taken from different angles, and nowadays web cameras, stand-alone or incorporated into a mobile computer, can be an acceptable source of such image sequences. In this article we describe this DIY holographic imaging process in detail.

  2. Applying local binary patterns in image clustering problems

    NASA Astrophysics Data System (ADS)

    Skorokhod, Nikolai N.; Elizarov, Alexey I.

    2017-11-01

    Because cloudiness plays a critical role in the Earth's radiative balance, the study of the distribution of different types of clouds and their movements is relevant. The main sources of such information are artificial satellites that provide data in the form of images. The most commonly used methods for processing and classifying cloud images are based on descriptions of texture features. The use of a set of local binary patterns is proposed to describe image texture.
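    A minimal sketch of the proposed texture descriptor, computing a uniform local-binary-pattern histogram for a grayscale cloud image patch with scikit-image; the radius and number of sampling points are illustrative defaults, not the paper's settings:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(cloud_patch, radius=1, n_points=8):
        """Uniform LBP histogram used as a texture feature vector for clustering."""
        codes = local_binary_pattern(cloud_patch, n_points, radius, method='uniform')
        n_bins = n_points + 2        # uniform LBP yields P + 2 distinct labels
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist
    ```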

  3. Artefacts found in computed radiography.

    PubMed

    Cesar, L J; Schueler, B A; Zink, F E; Daly, T R; Taubel, J P; Jorgenson, L L

    2001-02-01

    Artefacts on radiographic images are distracting and may compromise accurate diagnosis. Although most artefacts that occur in conventional radiography have become familiar, computed radiography (CR) systems produce artefacts that differ from those found in conventional radiography. We have encountered a variety of artefacts in CR images produced by four different models of plate reader. These artefacts have been identified and traced to the imaging plate, plate reader, image processing software or laser printer, or to operator error. Understanding the potential sources of CR artefacts will aid in identifying and resolving problems quickly and help prevent future occurrences.

  4. Coherent diffractive imaging methods for semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin

    2017-12-01

    The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.

  5. Imaging System With Confocally Self-Detecting Laser.

    DOEpatents

    Webb, Robert H.; Rogomentich, Fran J.

    1996-10-08

    The invention relates to a confocal laser imaging system and method. The system includes a laser source, a beam splitter, focusing elements, and a photosensitive detector. The laser source projects a laser beam along a first optical path at an object to be imaged, and modulates the intensity of the projected laser beam in response to light reflected from the object. A beam splitter directs a portion of the projected laser beam onto a photodetector. The photodetector monitors the intensity of laser output. The laser source can be an electrically scannable array, with a lens or objective assembly for focusing light generated by the array onto the object of interest. As the array is energized, its laser beams scan over the object, and light reflected at each point is returned by the lens to the element of the array from which it originated. A single photosensitive detector element can generate an intensity-representative signal for all lasers of the array. The intensity-representative signal from the photosensitive detector can be processed to provide an image of the object of interest.

  6. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an indispensable tool for better managing our environment, generally by producing land cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to recognise the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new, at least partly separated, images from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The results of classification of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e. non-separated) images. These results show the value of NMF as an attractive pre-processing step for classification of multispectral remote sensing imagery.
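
    A minimal sketch of the unmixing idea described above, using scikit-learn's NMF: each pixel's spectrum is modelled as a non-negative combination of a few endmember spectra, and the per-pixel abundances are reshaped into new "separated" images that could feed a classifier. The band count, component count and data are toy placeholders, and the sparse-coding refinement mentioned in the abstract is not shown.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      height, width, bands = 50, 50, 4              # toy multispectral cube
      cube = rng.random((height, width, bands))

      # Reshape to (pixels x bands): each row is one pixel's spectrum.
      X = cube.reshape(-1, bands)

      # Factorise X ~ A @ S: A holds per-pixel abundances, S the endmember spectra.
      model = NMF(n_components=3, init="nndsvd", max_iter=500, random_state=0)
      abundances = model.fit_transform(X)           # shape (pixels, 3)
      endmembers = model.components_                # shape (3, bands)

      # Each abundance column becomes a new "separated" image for the classifier.
      separated_images = abundances.reshape(height, width, -1)
      print(separated_images.shape, endmembers.shape)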

  7. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.

  8. Integration of optical imaging with a small animal irradiator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An

    Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances between the actual and optimized source positions ranging from 0.7 to 1 mm. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.

  9. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on clear differences between classes but great homogeneity within each of them. This cover information is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative to perform this task. However, in some developing countries, and particularly in Casacoima municipality in Venezuela, geographic information systems are scarce due to the lack of updated information and the high cost of software licenses. This research proposes a low cost methodology to develop thematic mapping of local land use and types of coverage in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised per-pixel and per-region classification methods were applied using different classification algorithms, which were then compared with each other. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability value of 73.36% and a kappa index of 0.69, while Euclidean distance obtained values of 67.17% and 0.61 for reliability and kappa index, respectively. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management plans and land management. Hence, open source tools proved to be an economically viable alternative not only for forestry organizations but for the general public, allowing them to develop projects in economically depressed and/or environmentally threatened areas.
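
    For illustration, the sketch below implements the simplest of the algorithms mentioned above, the per-pixel minimum (Euclidean) distance rule: each pixel is assigned to the class whose training mean spectrum is nearest. The class means and pixels are synthetic placeholders, not CBERS-2 data.

      import numpy as np


      def minimum_distance_classify(pixels, class_means):
          """pixels: (n, bands); class_means: (k, bands). Returns a class index per pixel."""
          # Squared Euclidean distance from every pixel to every class mean.
          d2 = ((pixels[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
          return d2.argmin(axis=1)


      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          means = np.array([[0.2, 0.3, 0.1], [0.6, 0.5, 0.7]])   # two toy land-cover classes
          pixels = rng.random((10, 3))                            # ten 3-band pixels
          print(minimum_distance_classify(pixels, means))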

  10. The GALEX Catalog of UV Sources in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Thilker, David A.; Bianchi, L.; Simons, R.

    2014-01-01

    The Galaxy Evolution Explorer (GALEX) has performed unprecedented imaging surveys of the Magellanic Clouds (MC) and their surrounding areas including the Magellanic Bridge (MB) in near-UV (NUV, 1771-2831 Å) and far-UV (FUV, 1344-1786 Å) bands at 5″ resolution. Substantially more area was covered in the NUV than FUV, particularly in the bright central regions, because of the GALEX FUV detector failure. The 5σ depth of the NUV imaging varies between 20.8 and 22.7 (ABmag). Such imaging provides the first sensitive view of the entire content of hot stars in the Magellanic System, revealing the presence of young populations even in sites with extremely low star-formation rate surface density like the MB, owing to high sensitivity of the UV data to hot stars and the dark sky at these wavelengths. Crowding limits the quality of source detection and photometry from the standard mission pipeline processing. Therefore, we performed custom PSF-fitting photometry of the GALEX data in the MC survey region (<15° from the LMC, <10° from the SMC). After merging multiple detections of sources in overlapping images, the resulting catalog we have produced contains many million unique NUV point sources. This poster provides a first look at the GALEX MC survey and highlights some of the science investigations that the catalog and imaging dataset will make possible.

  11. Do solar decimetric spikes originate in coronal X-ray sources?

    NASA Astrophysics Data System (ADS)

    Battaglia, M.; Benz, A. O.

    2009-06-01

    Context: In the standard solar flare scenario, a large number of particles are accelerated in the corona. Nonthermal electrons emit both X-rays and radio waves. Thus, correlated signatures of the acceleration process are predicted at both wavelengths, coinciding either close to the footpoints of a magnetic loop or near the coronal X-ray source. Aims: We attempt to study the spatial connection between coronal X-ray emission and decimetric radio spikes to determine the site and geometry of the acceleration process. Methods: The positions of radio-spike sources and coronal X-ray sources are determined and analyzed in a well-observed limb event. Radio spikes are identified in observations from the Phoenix-2 spectrometer. Data from the Nançay radioheliograph are used to determine the position of the radio spikes. RHESSI images in soft and hard X-ray wavelengths are used to determine the X-ray flare geometry. Those observations are complemented by images from GOES/SXI. Results: We find that the radio emission originates at altitudes much higher than the coronal X-ray source, having an offset of 90″ from the coronal X-ray source and of 113″ and 131″ from the two footpoints, averaged over time and frequency. Conclusions: Decimetric spikes do not originate from coronal X-ray flare sources, contrary to previous expectations. However, the observations suggest a causal link between the coronal X-ray source, related to the major energy release site, and simultaneous activity in the higher corona.

  12. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1973-01-01

    With the successful flight of the ERTS-1 satellite, orbital-height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to keep positional error below 100 meters, control points for the precision processing must be carefully selected and clearly defined on the photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.
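
    The role of the control points can be pictured with a small least-squares sketch: given image coordinates and the corresponding map coordinates read from standard quadrangle maps, an affine image-to-map transform is fitted and its residuals checked against the 100-meter error budget. All coordinates below are invented for illustration only.

      import numpy as np

      # Hypothetical control points: image (row, col) and corresponding map (X, Y) in meters.
      img_rc = np.array([[100, 120], [980, 150], [150, 900], [950, 940], [520, 530]], float)
      map_xy = np.array([[307230, 4933010], [308990, 4871430], [354020, 4929480],
                         [356380, 4873520], [331790, 4903610]], float)

      # Affine fit: [row, col, 1] -> (X, Y), solved by least squares.
      A = np.column_stack([img_rc, np.ones(len(img_rc))])
      coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)    # coefficients, shape (3, 2)

      residuals = A @ coeffs - map_xy
      rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
      print("control-point fit RMSE: %.1f m" % rmse)         # check against the 100 m budget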

  13. The Stellar Imager (SI) project: a deep space UV/Optical Interferometer (UVOI) to observe the Universe at 0.1 milli-arcsec angular resolution

    NASA Astrophysics Data System (ADS)

    Carpenter, Kenneth G.; Schrijver, Carolus J.; Karovska, Margarita

    2009-04-01

    The Stellar Imager (SI) is a space-based, UV/Optical Interferometer (UVOI) designed to enable 0.1 milli-arcsecond (mas) spectral imaging of stellar surfaces and of the Universe in general. It will also probe via asteroseismology flows and structures in stellar interiors. SI’s science focuses on the role of magnetism in the Universe and will revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes, such as accretion, in the Universe. The ultra-sharp images of SI will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. SI is a “Flagship and Landmark Discovery Mission” in the 2005 Heliophysics Roadmap and a potential implementation of the UVOI in the 2006 Science Program for NASA’s Astronomy and Physics Division. We present here the science goals of the SI Mission, a mission architecture that could meet those goals, and the technology development needed to enable this mission. Additional information on SI can be found at: http://hires.gsfc.nasa.gov/si/

  14. The Stellar Imager (SI) Project: A Deep Space UV/Optical Interferometer (UVOI) to Observe the Universe at 0.1 Milli-Arcsec Angular Resolution

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.; Schrijver, Carolus J.; Karovska, Margarita

    2008-01-01

    The Stellar Imager (SI) is a space-based, UV/Optical Interferometer (UVOI) designed to enable 0.1 milliarcsecond (mas) spectral imaging of stellar surfaces and of the Universe in general. It will also probe, via asteroseismology, flows and structures in stellar interiors. SI's science focuses on the role of magnetism in the Universe and will revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes, such as accretion, in the Universe. The ultra-sharp images of SI will revolutionize our view of many dynamic astrophysical processes by transforming point sources into extended sources, and snapshots into evolving views. SI is a "Flagship and Landmark Discovery Mission" in the 2005 Heliophysics Roadmap and a potential implementation of the UVOI in the 2006 Science Program for NASA's Astronomy and Physics Division. We present here the science goals of the SI Mission, a mission architecture that could meet those goals, and the technology development needed to enable this mission. Additional information on SI can be found at: http://hires.gsfc.nasa.gov/si/.

  15. Terahertz parametric sources and imaging applications

    NASA Astrophysics Data System (ADS)

    Yamashita, M.; Ogawa, Y.; Otani, C.; Kawase, K.

    2005-12-01

    We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of LiNbO3 or MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We report the detailed characteristics of the oscillation and the radiation, including tunability, spatial and temporal coherency, unidirectivity, and efficiency. Fourier-transform-limited THz-wave spectrum narrowing was achieved by introducing the injection seeding method. Further, we have developed a spectroscopic THz imaging system using a THz-wave parametric oscillator (TPO), which allows detection and identification of drugs concealed in envelopes, by introducing component spatial pattern analysis. Several images of the envelope are recorded at different THz frequencies and then processed. The final result is an image that reveals what substances are present in the envelope, in what quantity, and how they are distributed across the envelope area. The example presented here shows the identification of three drugs, two of which are illegal, while one is an over-the-counter drug.

  16. Terahertz parametric sources and imaging applications

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Minamide, Hiroaki; Ito, Hiromasa

    2005-07-01

    We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of LiNbO3 or MgO-doped LiNbO3 crystal pumped by a nano-second Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We report the detailed characteristics of the oscillation and the radiation including tunability, spatial and temporal coherency, uni-directivity, and efficiency. A Fourier transform limited THz-wave spectrum narrowing was achieved by introducing the injection seeding method. Further, we have developed a spectroscopic THz imaging system using a THz-wave parametric oscillator, which allows detection and identification of drugs concealed in envelopes, by introducing the component spatial pattern analysis. Several images of the envelope are recorded at different THz frequencies and then processed. The final result is an image that reveals what substances are present in the envelope, in what quantity, and how they are distributed across the envelope area. The example presented here shows the identification of three drugs, two of which are illegal, while one is an over-the-counter drug.

  17. A simple and low-cost biofilm quantification method using LED and CMOS image sensor.

    PubMed

    Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu

    2014-12-01

    A novel biofilm detection platform, which consists of a cost-effective red, green, and blue light-emitting diode (RGB LED) as a light source and a lens-free CMOS image sensor as a detector, is designed. This system can measure the diffraction patterns of cells from their shadow images, and gather light absorbance information according to the concentration of biofilms through a simple image processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform can provide accurate and reproducible biofilm concentration detection and is simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new biofilm detection approach. The results were compared with the results obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible biofilm detection, a simple, custom-coded image processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified by varying the indole concentration, and the results were compared with the results obtained from a commercial spectrophotometer. The correlation value of the results from those two systems was 0.981 (N = 9, P < 0.01) and the coefficients of variation (CVs) were approximately threefold lower for the CMOS image-sensor platform. Copyright © 2014 Elsevier B.V. All rights reserved.
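
    The core measurement step, converting mean shadow-image intensity into an absorbance-like value against a blank reference, can be sketched in a few lines; this mirrors how a spectrophotometer reading is formed, although the paper's actual calibration is not reproduced here and the image arrays below are synthetic.

      import numpy as np


      def absorbance_from_shadow(sample_img, blank_img):
          """Return -log10(I_sample / I_blank) from the mean intensities of two frames."""
          i_sample = float(np.mean(sample_img))
          i_blank = float(np.mean(blank_img))
          return -np.log10(i_sample / i_blank)


      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          blank = 200 + 5 * rng.standard_normal((480, 640))     # bright reference frame
          biofilm = 150 + 5 * rng.standard_normal((480, 640))   # attenuated sample frame
          print("absorbance ~ %.3f" % absorbance_from_shadow(biofilm, blank))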

  18. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    PubMed Central

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629

  19. Image retrieval and processing system version 2.0 development work

    NASA Technical Reports Server (NTRS)

    Slavney, Susan H.; Guinness, Edward A.

    1991-01-01

    The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that is compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.

  20. Massive stereo-based DTM production for Mars on cloud computers

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.

    2018-05-01

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling with creating good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system to overcome many of these obstacles by demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), while providing refined stereo-matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL), Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as browseable and visualisable through the iMars web-based Geographic Information System (webGIS).

  1. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction relaxing the demand for phase stability. We demonstrated systems design for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  2. Youpi: A Web-based Astronomical Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Monnerville, M.; Sémah, G.

    2010-12-01

    Youpi stands for “YOUpi is your processing PIpeline”. It is a portable, easy to use web application providing high level functionalities to perform data reduction on scientific FITS images. It is built on top of open source processing tools that are released to the community by Terapix, in order to organize your data on a computer cluster, to manage your processing jobs in real time and to facilitate teamwork by allowing fine-grained sharing of results and data. On the server side, Youpi is written in the Python programming language and uses the Django web framework. On the client side, Ajax techniques are used along with the Prototype and script.aculo.us JavaScript libraries.

  3. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, a magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); and consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  4. Swept source optical coherence microscopy using a 1310 nm VCSEL light source

    PubMed Central

    Ahsen, Osman O.; Tao, Yuankai K.; Potsaid, Benjamin M.; Sheikine, Yuri; Jiang, James; Grulkowski, Ireneusz; Tsai, Tsung-Han; Jayaraman, Vijaysekhar; Kraus, Martin F.; Connolly, James L.; Hornegger, Joachim; Cable, Alex; Fujimoto, James G.

    2013-01-01

    We demonstrate high speed, swept source optical coherence microscopy (OCM) using a MEMS tunable vertical cavity surface-emitting laser (VCSEL) light source. The light source had a sweep rate of 280 kHz, providing a bidirectional axial scan rate of 560 kHz. The sweep bandwidth was 117 nm centered at 1310 nm, yielding an axial resolution of 13.1 µm in air, corresponding to 8.1 µm (9.6 µm spectrally shaped) in tissue. Dispersion mismatch from different objectives was compensated numerically, enabling magnification and field of view to be easily changed. OCM images were acquired with transverse resolutions between 0.86 µm and 3.42 µm using interchangeable 40X, 20X and 10X objectives with ~600 µm x 600 µm, ~1 mm x 1 mm and ~2 mm x 2 mm field-of-view (FOV), respectively. Parasitic variations in path length with beam scanning were corrected numerically. These features enable swept source OCM to be integrated with a wide range of existing scanning microscopes. Large FOV mosaics were generated by serially acquiring adjacent overlapping microscopic fields and combining them in post-processing. Fresh human colon, thyroid and kidney specimens were imaged ex vivo and compared to matching histology sections, demonstrating the ability of OCM to image tissue specimens. PMID:23938673

  5. Vision-based system for the control and measurement of wastewater flow rate in sewer systems.

    PubMed

    Nguyen, L S; Schaeli, B; Sage, D; Kayal, S; Jeanbourquin, D; Barry, D A; Rossi, L

    2009-01-01

    Combined sewer overflows and stormwater discharges represent an important source of contamination to the environment. However, the harsh environment inside sewers and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. In the following, we present and evaluate an in situ system for the monitoring of water flow in sewers based on video images. This paper focuses on the measurement of the water level based on image-processing techniques. The developed image-based water level algorithms identify the wall/water interface from sewer images and measure its position with respect to real world coordinates. A web-based user interface and a 3-tier system architecture enable the remote configuration of the cameras and the image-processing algorithms. Images acquired and processed by our system were found to reliably measure water levels and thereby to provide crucial information leading to better understand particular hydraulic behaviors. In terms of robustness and accuracy, the water level algorithm provided equal or better results compared to traditional water level probes in three different in situ configurations.
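
    A minimal sketch of the water-level idea described above: along an image column, the wall/water interface appears as the strongest vertical intensity step, and its row index is mapped to a physical level through a pre-surveyed calibration. The gradient criterion and calibration constants below are illustrative assumptions, not the deployed algorithm.

      import numpy as np


      def water_level_row(image_column):
          """Return the row index of the strongest downward intensity step in a column."""
          grad = np.diff(image_column.astype(float))
          return int(np.argmin(grad))    # most negative gradient: bright wall above dark water


      def row_to_level_cm(row, row_at_zero=400, cm_per_row=0.25):
          """Hypothetical linear calibration from image row to water level in cm."""
          return (row_at_zero - row) * cm_per_row


      if __name__ == "__main__":
          column = np.concatenate([np.full(300, 180.0), np.full(180, 60.0)])  # wall, then water
          row = water_level_row(column)
          print(row, "%.1f cm" % row_to_level_cm(row))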

  6. Spectral Imaging Technology-Based Evaluation of Radiation Treatment Planning to Remove Contrast Agent Artifacts.

    PubMed

    Yi-Qun, Xu; Wei, Liu; Xin-Ye, Ni

    2016-10-01

    This study employs dual-source computed tomography single-spectrum imaging to evaluate the effects of contrast agent artifact removal and the computational accuracy of radiotherapy treatment planning improvement. The phantom, including the contrast agent, was used in all experiments. The amounts of iodine in the contrast agent were 30, 15, 7.5, and 0.75 g/100 mL. Two images with different energy values were scanned and captured using dual-source computed tomography (80 and 140 kV). To obtain a fused image, 2 groups of images were processed using single-energy spectrum imaging technology. The Pinnacle planning system was used to measure the computed tomography values of the contrast agent and the surrounding phantom tissue. The difference between radiotherapy treatment planning based on 80 kV, 140 kV, and energy spectrum image was analyzed. For the image with high iodine concentration, the quality of the energy spectrum-fused image was the highest, followed by that of the 140-kV image. That of the 80-kV image was the worst. The difference in the radiotherapy treatment results among the 3 models was significant. When the concentration of iodine was 30 g/100 mL and the distance from the contrast agent at the dose measurement point was 1 cm, the deviation values (P) were 5.95% and 2.20% when image treatment planning was based on 80 and 140 kV, respectively. When the concentration of iodine was 15 g/100 mL, deviation values (P) were -2.64% and -1.69%. Dual-source computed tomography single-energy spectral imaging technology can remove contrast agent artifacts to improve the calculated dose accuracy in radiotherapy treatment planning. © The Author(s) 2015.

  7. Terahertz imaging with compressed sensing and phase retrieval.

    PubMed

    Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M

    2008-05-01

    We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels, which defines the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
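
    To make the reconstruction idea concrete, the toy sketch below recovers an image from a random ~12% subset of its Fourier samples with plain iterative soft-thresholding, assuming the object is sparse in the pixel domain. This is a generic compressed-sensing illustration under stated assumptions, not the authors' phase-corrected THz reconstruction.

      import numpy as np


      def soft(x, t):
          """Soft-thresholding operator used in ISTA."""
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


      def cs_fourier_recover(y, mask, n_iter=200, lam=0.01):
          """Recover an image from masked Fourier samples y (zeros where mask is False)."""
          x = np.zeros(mask.shape)
          for _ in range(n_iter):
              residual = (y - np.fft.fft2(x, norm="ortho")) * mask
              x = soft(x + np.real(np.fft.ifft2(residual, norm="ortho")), lam)
          return x


      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          truth = np.zeros((64, 64))
          truth[20:28, 30:38] = 1.0                        # simple sparse object
          mask = rng.random((64, 64)) < 0.12               # ~12% of the Fourier samples
          y = np.fft.fft2(truth, norm="ortho") * mask
          recon = cs_fourier_recover(y, mask)
          print("max abs error: %.3f" % np.max(np.abs(recon - truth)))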

  8. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated digital imaging and communications in medicine image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  9. A novel system for commissioning brachytherapy applicators: example of a ring applicator

    NASA Astrophysics Data System (ADS)

    Fonseca, Gabriel P.; Van den Bosch, Michiel R.; Voncken, Robert; Podesta, Mark; Verhaegen, Frank

    2017-11-01

    A novel system was developed to improve commissioning and quality assurance of brachytherapy applicators used in high dose rate (HDR). It employs an imaging panel to create reference images and to measure dwell times and dwell positions. As an example: two ring applicators of the same model were evaluated. An applicator was placed on the surface of an imaging panel and a HDR 192Ir source was positioned in an imaging channel above the panel to generate an image of the applicator, using the gamma photons of the brachytherapy source. The applicator projection image was overlaid with the images acquired by capturing the gamma photons emitted by the source dwelling inside the applicator. We verified 0.1, 0.2, 0.5 and 1.0 cm interdwell distances for different offsets, applicator inclinations and transfer tube curvatures. The data analysis was performed using in-house developed software capable of processing the data in real time, defining catheters and creating movies recording the irradiation procedure. One applicator showed up to 0.3 cm difference from the expected position for a specific dwell position. The problem appeared intermittently. The standard deviations of the remaining dwell positions (40 measurements) were less than 0.05 cm. The second ring applicator had a similar reproducibility with absolute coordinate differences from expected values ranging from  -0.10 up to 0.18 cm. The curvature of the transfer tube can lead to differences larger than 0.1 cm whilst the inclination of the applicator showed a negligible effect. The proposed method allows the verification of all steps of the irradiation, providing accurate information about dwell positions and dwell times. It allows the verification of small interdwell positions (⩽0.1 cm) and reduces measurement time. In addition, no additional radiation source is necessary since the HDR 192Ir source is used to generate an image of the applicator.

  10. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
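
    The two inverse steps can be sketched compactly using the simplest of the listed solvers (an inverse filter with a truncated dipole kernel) rather than the split Bregman TV iteration: step 1 converts the phase image to a fieldmap, step 2 divides by the k-space dipole kernel D = 1/3 - kz^2/|k|^2 wherever it is not too small. Echo time, field strength and threshold below are illustrative placeholders.

      import numpy as np


      def fieldmap_from_phase(phase, te, b0=3.0, gamma=2 * np.pi * 42.58e6):
          """Step 1: convert the MR phase image (rad) to a relative fieldmap in ppm."""
          return phase / (gamma * te * b0) * 1e6


      def susceptibility_from_fieldmap(field_ppm, threshold=0.1):
          """Step 2: truncated inversion of the k-space dipole kernel D = 1/3 - kz^2/|k|^2."""
          nx, ny, nz = field_ppm.shape
          kx, ky, kz = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny),
                                   np.fft.fftfreq(nz), indexing="ij")
          k2 = kx ** 2 + ky ** 2 + kz ** 2
          with np.errstate(divide="ignore", invalid="ignore"):
              D = 1.0 / 3.0 - np.where(k2 > 0, kz ** 2 / k2, 0.0)
          D_inv = np.zeros_like(D)
          keep = np.abs(D) > threshold          # truncate ill-conditioned frequencies
          D_inv[keep] = 1.0 / D[keep]
          return np.real(np.fft.ifftn(np.fft.fftn(field_ppm) * D_inv))


      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          phase = 0.1 * rng.standard_normal((32, 32, 32))   # toy phase volume (rad)
          chi = susceptibility_from_fieldmap(fieldmap_from_phase(phase, te=20e-3))
          print(chi.shape, float(chi.mean()))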

  11. cisTEM, user-friendly software for single-particle image processing.

    PubMed

    Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus

    2018-03-07

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k - 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.

  12. cisTEM, user-friendly software for single-particle image processing

    PubMed Central

    2018-01-01

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k – 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. PMID:29513216

  13. PP and PS interferometric images of near-seafloor sediments

    USGS Publications Warehouse

    Haines, S.S.

    2011-01-01

    I present interferometric processing examples from an ocean-bottom cable (OBC) dataset collected at a water depth of 800 m in the Gulf of Mexico. Virtual source and receiver gathers created through cross-correlation of full wavefields show clear PP reflections and PS conversions from near-seafloor layers of interest. Virtual gathers from wavefield-separated data show improved PP and PS arrivals. PP and PS brute stacks from the wavefield-separated data compare favorably with images from a non-interferometric processing flow. © 2011 Society of Exploration Geophysicists.
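
    The interferometric step itself is compact: cross-correlating the record at one receiver with the records at all other receivers turns that receiver into a virtual source, and the set of correlations forms a virtual source gather. The sketch below does this for synthetic noise traces, not OBC data.

      import numpy as np


      def virtual_source_gather(traces, virtual_index):
          """traces: (n_receivers, n_samples). Cross-correlate every trace with the trace
          at virtual_index; returns an (n_receivers, 2*n_samples-1) correlation gather."""
          ref = traces[virtual_index]
          return np.array([np.correlate(tr, ref, mode="full") for tr in traces])


      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          traces = rng.standard_normal((8, 512))       # toy full-wavefield records
          gather = virtual_source_gather(traces, virtual_index=0)
          print(gather.shape)                          # (8, 1023)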

  14. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm that utilizes the US B-mode image, which is readily available from clinical scanners. US B-mode imaging involves a series of signal processing steps: beamforming, followed by envelope detection, and ending with log compression. The image will, however, be defocused when PA signals are the input, because the delay function is incorrect for PA reception. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic aperture based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recovered US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. The US post-beamformed RF data are then treated as pre-beamformed RF data for the adaptive PA beamforming algorithm, and a new delay function is applied, taking into account that the focal depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and demonstrated experimentally using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
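
    A minimal sketch of the first recovery stage described above: undo the log compression to get back an envelope, then re-insert carrier-frequency content so the result approximates post-beamformed RF data (a simple cosine carrier stands in here for convolution with an acoustic impulse response). The dynamic range, carrier frequency and sampling rate are assumed values, not the scanner's settings.

      import numpy as np


      def bmode_to_rf_estimate(bmode_db, fc=5e6, fs=40e6):
          """bmode_db: (n_lines, n_samples) B-mode image assumed to be stored in dB."""
          # 1) Log decompression: map dB values back to a linear envelope.
          envelope = 10.0 ** (bmode_db / 20.0)
          # 2) Re-insert carrier-frequency content by modulating each line.
          n = bmode_db.shape[1]
          carrier = np.cos(2 * np.pi * fc * np.arange(n) / fs)
          return envelope * carrier[None, :]


      if __name__ == "__main__":
          rng = np.random.default_rng(7)
          fake_bmode = 60.0 * rng.random((16, 256))   # toy log-compressed image
          rf_estimate = bmode_to_rf_estimate(fake_bmode)
          print(rf_estimate.shape)                    # (16, 256)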

  15. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Macrander, A. T.

    Using the 1-BM-C beamline at the Advanced Photon Source (APS), we have performed the initial indirect x-ray imaging point-spread-function (PSF) test of a unique 88-mm diameter YAG:Ce single crystal of only 100-micron thickness. The crystal was bonded to a fiber optic plate (FOP) for mechanical support and to allow the option of FO coupling to a large-format camera. This configuration's resolution was compared to that of self-supported 25-mm diameter crystals, with and without an Al reflective coating. An upstream monochromator was used to select 17-keV x-rays from the broadband APS bending magnet source of synchrotron radiation. The upstream, adjustable Mo collimators were then used to provide a series of x-ray source transverse sizes from 200 microns down to about 15-20 microns (FWHM) at the crystal surface. The emitted scintillator radiation was in this case lens coupled to the ANDOR Neo sCMOS camera, and the indirect x-ray images were processed offline by a MATLAB-based image processing program. Based on single Gaussian peak fits to the x-ray image projected profiles, we observed a 10.5-micron PSF. This sample thus exhibited superior spatial resolution to standard P43 polycrystalline phosphors of the same thickness, which would have about a 100-micron PSF. Lastly, this single-crystal resolution combined with the 88-mm diameter makes it a candidate to support future x-ray diffraction or wafer topography experiments.

  17. Quantitative assessment of rat corneal thickness and morphology during stem cell therapy by high-speed optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lal, Cerine; McGrath, James; Subhash, Hrebesh; Rani, Sweta; Ritter, Thomas; Leahy, Martin

    2016-03-01

    Optical Coherence Tomography (OCT) is a non-invasive, three-dimensional optical imaging modality that enables high resolution cross sectional imaging in biological tissues and materials. Its high axial and lateral resolution combined with high sensitivity, imaging depth and wide field of view makes it suitable for a wide variety of high resolution medical imaging applications at clinically relevant speed. With the advent of swept source lasers, the imaging speed of OCT has increased considerably in recent years. OCT has been used in ophthalmology to study dynamic changes occurring in the cornea and iris, thereby revealing physiological and pathological changes that occur within the anterior segment structures such as in glaucoma, during refractive surgery, lamellar keratoplasty and corneal diseases. In this study, we assess the changes in corneal thickness in the anterior segment of the eye during the wound healing process in a rat corneal burn model following stem cell therapy, using high speed swept source OCT.

  18. Hyperspectral imaging technique for detection of poultry fecal residues on food processing equipments

    NASA Astrophysics Data System (ADS)

    Cho, Byoung-Kwan; Kim, Moon S.; Chen, Yud-Ren

    2005-11-01

    Emerging concerns about safety and security in the current mass production of food products necessitate rapid and reliable inspection for contaminant-free products. Diluted fecal residues on poultry processing plant equipment surfaces, not easily discernible from water by the human eye, are contamination sources for poultry carcasses. Development of sensitive detection methods for fecal residues is essential to ensure safe production of poultry carcasses. Hyperspectral imaging techniques have shown good potential for detecting the presence of fecal and other biological substances on food and processing equipment surfaces. In this study, the use of high spatial resolution hyperspectral reflectance and fluorescence imaging (with UV-A excitation) is presented as a tool for selecting a few multispectral bands to detect diluted fecal and ingesta residues on materials used for manufacturing processing equipment. Reflectance and fluorescence imaging methods were compared for potential detection of a range of diluted fecal residues on the surfaces of processing plant equipment. Results showed that low concentrations of poultry feces and ingesta, diluted up to 1:100 by weight with double distilled water, could be detected using hyperspectral fluorescence images with an accuracy of 97.2%. The spectral bands determined in this study could be used to develop a real-time multispectral inspection device for detection of harmful organic residues on processing plant equipment.

  19. Development of ultra-high temperature material characterization capabilities using digital image correlation analysis

    NASA Astrophysics Data System (ADS)

    Cline, Julia Elaine

    2011-12-01

    Ultra-high temperature deformation measurements are required to characterize the thermo-mechanical response of material systems for thermal protection systems for aerospace applications. The use of conventional surface-contacting strain measurement techniques is not practical in elevated temperature conditions. Technological advancements in digital imaging provide impetus to measure full-field displacement and determine strain fields with sub-pixel accuracy by image processing. In this work, an Instron electromechanical axial testing machine with a custom-designed high temperature gripping mechanism is used to apply quasi-static tensile loads to graphite specimens heated to 2000°F (1093°C). Specimen heating via Joule effect is achieved and maintained with a custom-designed temperature control system. Images are captured at monotonically increasing load levels throughout the test duration using an 18 megapixel Canon EOS Rebel T2i digital camera with a modified Schneider Kreutznach telecentric lens and a combination of blue light illumination and narrow band-pass filter system. Images are processed using an open-source Matlab-based digital image correlation (DIC) code. Validation of source code is performed using Mathematica generated images with specified known displacement fields in order to gain confidence in accurate software tracking capabilities. Room temperature results are compared with extensometer readings. Ultra-high temperature strain measurements for graphite are obtained at low load levels, demonstrating the potential for non-contacting digital image correlation techniques to accurately determine full-field strain measurements at ultra-high temperature. Recommendations are given to improve the experimental set-up to achieve displacement field measurements accurate to 1/10 pixel and strain field accuracy of less than 2%.

  20. Effects of photon noise on speckle image reconstruction with the Knox-Thompson algorithm. [in astronomy]

    NASA Technical Reports Server (NTRS)

    Nisenson, P.; Papaliolios, C.

    1983-01-01

    An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.

  1. Motion estimation of subcellular structures from fluorescence microscopy images.

    PubMed

    Vallmitjana, A; Civera-Tregon, A; Hoenicka, J; Palau, F; Benitez, R

    2017-07-01

    We present an automatic image processing framework to study moving intracellular structures from live cell fluorescence microscopy. The system includes the identification of static and dynamic structures from time-lapse images using data clustering as well as the identification of the trajectory of moving objects with a probabilistic tracking algorithm. The method has been successfully applied to study mitochondrial movement in neurons. The approach provides excellent performance under different experimental conditions and is robust to common sources of noise including experimental, molecular and biological fluctuations.

  2. Precision process calibration and CD predictions for low-k1 lithography

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Park, Sangbong; Berger, Gabriel; Coskun, Tamer H.; de Vocht, Joep; Chen, Fung; Yu, Linda; Hsu, Stephen; van den Broeke, Doug; Socha, Robert; Park, Jungchul; Gronlund, Keith; Davis, Todd; Plachecki, Vince; Harris, Tom; Hansen, Steve; Lambson, Chuck

    2005-06-01

    Leading resist calibration for sub-0.3 k1 lithography demands accuracy of <2 nm for CD through pitch. An accurately calibrated resist process is the prerequisite for establishing production-worthy manufacturing under extremely low k1. From an integrated imaging point of view, the following key components must be simultaneously considered during the calibration: high numerical aperture (NA>0.8) imaging characteristics, customized illuminations (measured vs. modeled pupil profiles), resolution enhancement technology (RET) mask with OPC, reticle metrology, and the resist thin film substrate. For imaging at NA approaching unity, polarized illumination can significantly impact contrast formation in the resist film stack, and therefore it is an important factor to consider in the CD-based resist calibration. For aggressive DRAM memory core designs at k1<0.3, pattern-specific illumination optimization has proven to be critical for achieving the required imaging performance. Various optimization techniques, from source profile optimization with a fixed mask design to combined source and mask optimization, have been considered for customer designs and available imaging capabilities. For successful low-k1 process development, verification of the optimization results can only be made with a sufficiently tunable resist model that can accurately predict the wafer printing under various optimized process settings. We have developed, for resist patterning under aggressive low-k1 conditions, a novel 3D diffusion model equipped with a double-Gaussian convolution in each dimension. Resist calibration with the new diffusion model has demonstrated a fitness and CD prediction accuracy that rival or outperform traditional 3D physical resist models. In this work, we describe our empirical approach to achieving nm-scale precision for advanced lithography process calibrations, using either measured 1D CD through-pitch or 2D memory core patterns. We show that for ArF imaging, the current resist development and diffusion modeling can readily achieve ~1-2 nm max CD errors for common 1D through-pitch and aggressive 2D memory core resist patterns. Sensitivities of the calibrated models to various process parameters are analyzed, including the comparison between measured and modeled (Gaussian or GRAIL) pupil profiles. We also report our preliminary calibration results under selected polarized illumination conditions.
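
    As a rough illustration of the double-Gaussian convolution idea (not the authors' calibrated model), the sketch below blurs an aerial-image array with a weighted sum of two Gaussian kernels, mimicking a short-range plus long-range diffusion point-spread function; the weights and sigmas are placeholders that would be fitted against measured CDs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def double_gaussian_diffusion(aerial, sigma_short=1.0, sigma_long=4.0, w=0.8):
    """Apply a double-Gaussian diffusion blur to an aerial-image array.
    sigma_short/sigma_long are in pixel units of the simulation grid and,
    like the weight w, are illustrative values, not calibrated ones."""
    return (w * gaussian_filter(aerial, sigma_short)
            + (1.0 - w) * gaussian_filter(aerial, sigma_long))
```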

  3. Anomalous Stars and Where to Find Them

    NASA Astrophysics Data System (ADS)

    Muna, Demitri; Huff, Eric

    2018-01-01

    The sky is now extensively mapped by imaging surveys in wavelengths that span the electromagnetic spectrum, ranging from Fermi and GALEX down to WISE, Planck, and radio surveys like FIRST and VLSS. Individual public catalogs now contain on the order of hundreds of millions of distinct sources. Recent progress in image analysis techniques makes possible great increases in the efficiency, sensitivity, and reliability of measurements that combine imaging data from multiple probes with heterogeneous properties. This is especially true for the identification of anomalous sources: traditional methods for finding ‘outliers’ typically rely on making hard cuts on noisy catalog properties, greatly restricting the potential discovery space. Cross-catalog matches confine investigation to objects that occur at signal-to-noise ratios sufficient to be independently detectable in a subset of all the available multi-wavelength coverage. The process of merging the latest analyses with existing data is severely hampered, however, by the fractured way in which these data are processed and stored, limitations of data access, the data volume involved, and the computation power required. This has left archive data far from fully exploited. Stellar anomalies present the best place to start: joint distributions of stellar colors and magnitudes have finer structures than extended sources, and modelling of point sources is computationally cheaper than for galaxies. We present a framework to solve the problem of applying new algorithms to old data while overcoming the limitations described above, in the search for undiscovered anomalous sources.

  4. The Utility of the Extended Images in Ambient Seismic Wavefield Migration

    NASA Astrophysics Data System (ADS)

    Girard, A. J.; Shragge, J. C.

    2015-12-01

    Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and widely used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags into the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques that are based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternative method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.
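
    For readers unfamiliar with extended images, the sketch below shows the time-lag extended imaging condition in its simplest form, I(x, τ) = Σ_t S(x, t) R(x, t + τ): with a correct velocity model, energy focuses at zero lag, while velocity errors shift it toward non-zero lags. This is a generic illustration, not the authors' ambient-noise workflow, and array names are assumptions.

```python
import numpy as np

def time_lag_extended_image(src_wf, rec_wf, lags):
    """Time-lag extended imaging condition.
    src_wf, rec_wf: source- and receiver-side wavefields, shape (nt, nz, nx).
    Returns an image cube of shape (len(lags), nz, nx); the lag-0 slice is the
    conventional zero-lag cross-correlation image."""
    nt = src_wf.shape[0]
    images = []
    for tau in lags:
        s = src_wf[max(0, -tau):nt - max(0, tau)]
        r = rec_wf[max(0, tau):nt - max(0, -tau)]
        images.append(np.sum(s * r, axis=0))
    return np.stack(images)
```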

  5. The ROSAT Deep Survey. 2; Optical Identification, Photometry and Spectra of X-Ray Sources in the Lockman Field

    NASA Technical Reports Server (NTRS)

    Schmidt, M.; Hasinger, G.; Gunn, J.; Schneider, D.; Burg, R.; Giacconi, R.; Lehmann, I.; MacKenty, J.; Truemper, J.; Zamorani, G.

    1998-01-01

    The ROSAT Deep Survey includes a complete sample of 50 X-ray sources with fluxes in the 0.5-2 keV band larger than 5.5 × 10⁻¹⁵ erg cm⁻² s⁻¹ in the Lockman field (Hasinger et al., Paper 1). We have obtained deep broad-band CCD images of the field and spectra of many optical objects near the positions of the X-ray sources. We systematically define the process leading to the optical identifications of the X-ray sources. For this purpose, we introduce five identification (ID) classes that characterize the process in each case. Among the 50 X-ray sources, we identify 39 AGNs, 3 groups of galaxies, 1 galaxy and 3 galactic stars. Four X-ray sources remain unidentified so far; two of these objects may have an unusually large ratio of X-ray to optical flux.

  6. Engineering of Nanoscale Contrast Agents for Optical Coherence Tomography.

    PubMed

    Gordon, Andrew Y; Jayagopal, Ashwath

    2014-01-30

    Optical coherence tomography has emerged as a valuable imaging modality in ophthalmology and other fields by enabling high-resolution three-dimensional imaging of tissue. In this paper, we review recent progress in the field of contrast-enhanced optical coherence tomography (OCT). We discuss exogenous and endogenous sources of OCT contrast, focusing on their use with standard OCT systems as well as emerging OCT-based imaging modalities. We include advances in the processing of OCT data that generate improved tissue contrast, including spectroscopic OCT (SOCT), as well as work utilizing secondary light sources and/or detection mechanisms to create and detect enhanced contrast, including photothermal OCT (PTOCT) and photoacoustic OCT (PAOCT). Finally, we conclude with a discussion of the translational potential of these developments as well as barriers to their clinical use.

  7. Workflows and the Role of Images for Virtual 3d Reconstruction of no Longer Extant Historic Objects

    NASA Astrophysics Data System (ADS)

    Münster, S.

    2013-07-01

    3D reconstruction technologies have gained importance as tools for the research and visualization of no longer extant historic objects during the last decade. Within such reconstruction processes, visual media assumes several important roles: as the most important sources, especially for a reconstruction of no longer extant objects, as a tool for communication and cooperation within the production process, as well as for a communication and visualization of results. While there are many discourses about theoretical issues of depiction as sources and as visualization outcomes of such projects, there is no systematic, empirically grounded research on the importance of depiction during a 3D reconstruction process. Moreover, from a methodological perspective, it would be necessary to understand which role visual media plays during the production process and how it is affected by disciplinary boundaries and challenges specific to historic topics. Research includes an analysis of published work and case studies investigating reconstruction projects. This study uses methods taken from the social sciences to gain a grounded view of how production processes take place in practice and which functions and roles images play within them. For the investigation of these topics, a content analysis of 452 conference proceedings and journal articles related to 3D reconstruction modeling in the field of the humanities has been completed. Most of the projects described in those publications dealt with data acquisition and model building for existing objects. Only a small number of projects focused on structures that no longer or never existed physically. Especially that type of project seems to be interesting for a study of the importance of pictures as sources and as tools for interdisciplinary cooperation during the production process. In the course of the examination, the authors applied a qualitative content analysis to a sample of 26 previously published project reports to identify strategies and types, and studied three cases of 3D reconstruction projects to evaluate evolutionary processes during such projects. The research showed that reconstructions of no longer existing historic structures are most commonly used for presentation or research purposes, for large buildings or city models. Additionally, they are often realized by interdisciplinary workgroups using images as the most important source for reconstruction, as well as important media for communication and quality control during the reconstruction process.

  8. The ultraviolet view of the Magellanic Clouds from GALEX: A first look at the LMC source catalog

    NASA Astrophysics Data System (ADS)

    Simons, Raymond; Thilker, David; Bianchi, Luciana; Wyder, Ted

    2014-03-01

    The Galaxy Evolution Explorer (GALEX) has performed unprecedented imaging surveys of the Magellanic Clouds (MC) and their surrounding areas, including the Magellanic Bridge (MB), in near-UV (NUV, 1771-2831 Å) and far-UV (FUV, 1344-1786 Å) bands at 5″ resolution. Substantially more area was covered in the NUV than FUV, particularly in the bright central regions, because of the GALEX FUV detector failure. The 5σ depth of the NUV imaging varies between 20.8 and 22.7 (AB mag). Such imaging provides the first sensitive view of the entire content of hot stars in the Magellanic System, revealing the presence of young populations even in sites with extremely low star-formation rate surface density like the MB, owing to the high sensitivity of the UV data to hot stars and the dark sky at these wavelengths. The density of UV sources is quite high in many areas of the LMC and SMC. Crowding limits the quality of source detection and photometry from the standard mission pipeline processing. We performed custom photometry of the GALEX data in the MC survey region (<15° from the LMC, <10° from the SMC). After merging multiple detections of sources in overlapping images, the resulting catalog we have produced for the LMC contains nearly six million unique NUV point sources within 15° and is briefly presented herein. This paper provides a first look at the GALEX MC survey and highlights some of the science investigations that the entire catalog and imaging dataset will make possible.

  9. Informed Source Separation of Atmospheric and Surface Signal Contributions in Shortwave Hyperspectral Imagery using Non-negative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2015-12-01

    Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments, including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the NASA Hyperspectral Imager for the Coastal Ocean (HICO) and the National Ecological Observatory Network (NEON) Imaging Spectrometer.
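
    For reference, the core of NMF is an alternating non-negative update of the two factors; the minimal multiplicative-update sketch below (Lee-Seung, Frobenius objective) omits the spectral/spatial constraints and library-spectrum initialization that the study adds for hyperspectral source separation.

```python
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (pixels x bands) into W (pixels x k,
    abundances) and H (k x bands, component spectra) so that V ~ W @ H,
    using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    In the constrained variant described above, rows of H would be seeded with, or tied to, library spectra so that individual components can be attributed to atmospheric or surface sources.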

  10. Extended axial imaging range, widefield swept source optical coherence tomography angiography.

    PubMed

    Liu, Gangjun; Yang, Jianlong; Wang, Jie; Li, Yan; Zang, Pengxiao; Jia, Yali; Huang, David

    2017-11-01

    We developed a high-speed, swept source OCT system for widefield OCT angiography (OCTA) imaging. The system has an extended axial imaging range of 6.6 mm. An electrical lens is used for fast, automatic focusing. The recently developed split-spectrum amplitude and phase-gradient angiography algorithm allows high-resolution OCTA imaging with only two B-scan repetitions. An improved post-processing algorithm effectively removed trigger jitter artifacts and reduced noise in the flow signal. We demonstrated a high-contrast 3 mm×3 mm OCTA image with 400×400 pixels acquired in 3 seconds and high-definition 8 mm×6 mm and 12 mm×6 mm OCTA images with 850×400 pixels obtained in 4 seconds. A widefield 8 mm×11 mm OCTA image is produced by montaging two 8 mm×6 mm scans. An ultra-widefield (with a maximum of 22 mm along both vertical and horizontal directions) capillary-resolution OCTA image is obtained by montaging six 12 mm×6 mm scans. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
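
    The flow contrast in such OCTA processing comes from inter-B-scan decorrelation of the OCT signal. The sketch below shows only the generic amplitude-decorrelation part for two repeated B-scans; the spectral splitting and phase-gradient terms used in the paper's algorithm are omitted.

```python
import numpy as np

def amplitude_decorrelation(a1, a2, eps=1e-12):
    """Inter-B-scan amplitude decorrelation between two repeated B-scans a1, a2
    (arrays of OCT amplitude). Static tissue gives values near 0; flowing blood
    decorrelates between repetitions and gives values approaching 1."""
    return 1.0 - (a1 * a2) / (0.5 * (a1**2 + a2**2) + eps)
```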

  11. Thermal image analysis using the serpentine method

    NASA Astrophysics Data System (ADS)

    Koprowski, Robert; Wilczyński, Sławomir

    2018-03-01

    Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known and new methods of image analysis and processing proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained from the watershed method and the hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis enable determination of the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information from thermal imaging. Since it allows for image segmentation and designation of the contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.

  12. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  13. The Medical Imaging Interaction Toolkit: challenges and advances: 10 years of open-source development.

    PubMed

    Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo

    2013-07-01

    The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.

  14. Sensor-based architecture for medical imaging workflow analysis.

    PubMed

    Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis

    2014-08-01

    The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in the clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory, through a network of intelligent sensors. The proposed integration framework follows a SOA hybrid architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository meta-data, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that will then be semantically indexed in the framework engine. The platform was deployed in the Cardiology department of a central hospital, allowing identification of processes' characteristics and users' behaviours that were unknown before the utilization of this solution.

  15. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (the autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with a macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
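
    The autocorrelation approach of Rubin (2004) rests on the observation that images of coarser sediment decorrelate more slowly with pixel lag than images of finer sediment. A minimal sketch of the autocorrelation-versus-lag curve is shown below; in practice the curve is compared against calibration curves from images of sieved samples of known size, a step not reproduced here.

```python
import numpy as np

def autocorrelation_curve(img, max_lag=30):
    """Mean normalized autocorrelation of image intensity versus pixel lag,
    computed along image rows. The decay rate of this curve is the quantity
    calibrated against known grain sizes."""
    x = img.astype(float)
    x = (x - x.mean()) / x.std()
    return np.array([np.mean(x[:, :-lag] * x[:, lag:]) for lag in range(1, max_lag + 1)])
```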

  16. Comparison of eye imaging pattern recognition using neural network

    NASA Astrophysics Data System (ADS)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The advantage of an eye recognition system is that it can automatically identify and verify a person, whether from digital images or a video source. There are various characteristics of the eye, such as the color of the iris, the size of the pupil and the shape of the eye. This study presents the analysis, design and implementation of a system for eye image recognition. All the eye images captured from the webcam in RGB format must pass through several processing techniques before they can serve as input to the pattern recognition stage. The results show that the final weight and bias values, obtained after completing training on 6 eye images for one subject, are memorized by the neural network system and serve as the reference weight and bias values for the testing part. The targets are classified into 5 different types for 5 subjects. The eye images can be matched to a subject based on the targets that were set earlier during the training process. When the values of a new eye image and an eye image in the database are almost equal, the eye image is considered a match.

  17. IOTA: integration optimization, triage and analysis tool for the processing of XFEL diffraction images.

    PubMed

    Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T

    2016-06-01

    Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
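
    The grid-search idea can be summarized generically as below; this is not the IOTA API, and `process` stands in for whatever spot-finding, indexing and integration routine is being tuned, assumed to return a quality score or None on failure.

```python
import itertools

def grid_search(image, process, spot_sizes=range(1, 10), thresholds=(3, 5, 10, 20)):
    """Try every combination of (minimum spot size, intensity threshold) on one
    image, score each trial with the caller-supplied `process` callable, and
    keep the best-scoring parameter pair for that image."""
    best_params, best_score = None, float("-inf")
    for s, t in itertools.product(spot_sizes, thresholds):
        score = process(image, s, t)
        if score is not None and score > best_score:
            best_params, best_score = (s, t), score
    return best_params, best_score
```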

  18. FibrilJ: ImageJ plugin for fibrils' diameter and persistence length determination

    NASA Astrophysics Data System (ADS)

    Sokolov, P. A.; Belousov, M. V.; Bondarev, S. A.; Zhouravleva, G. A.; Kasyanenko, N. A.

    2017-05-01

    Application of microscopy to evaluate the morphology and size of filamentous proteins and amyloids requires new and creative approaches to simplify and automate the image processing. The estimation of mean values of fibril diameter, length and bending stiffness from micrographs is a major challenge. For this purpose, we developed an open-source FibrilJ plugin for the ImageJ/FiJi program. It automatically recognizes the fibrils on the surface of a mica, silicon, gold or formvar film and further analyzes them to calculate the distribution of fibrils by diameter, length and persistence length. The plugin has been validated by the processing of TEM images of fibrils formed by the Sup35NM yeast protein and artificially created images of rod-shaped objects with predefined parameters. Novel data obtained by SEM for Sup35NM protein fibrils immobilized on silicon and gold substrates are also presented and analyzed.
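
    Persistence length is commonly estimated from the decay of tangent-tangent correlations along a traced fibril contour. The sketch below uses the two-dimensional worm-like-chain relation <cos θ(s)> = exp(-s / 2Lp), which assumes the fibrils equilibrated on the substrate; it is a generic illustration, not the FibrilJ implementation, and the input format is an assumption.

```python
import numpy as np

def persistence_length_2d(points, max_sep=50):
    """Estimate the persistence length Lp from an ordered (N x 2) array of pixel
    coordinates along one fibril, by fitting ln<cos(theta(s))> = -s / (2 * Lp)."""
    pts = np.asarray(points, float)
    seg = np.diff(pts, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)          # assumes no duplicate points
    tang = seg / seg_len[:, None]
    s_vals, mean_cos = [], []
    for k in range(1, min(max_sep, len(tang))):
        cosines = np.sum(tang[:-k] * tang[k:], axis=1)
        seps = [seg_len[i:i + k].sum() for i in range(len(tang) - k)]
        s_vals.append(np.mean(seps))
        mean_cos.append(cosines.mean())
    s_vals, mean_cos = np.array(s_vals), np.array(mean_cos)
    keep = mean_cos > 0                            # fit only the exponential regime
    slope = np.polyfit(s_vals[keep], np.log(mean_cos[keep]), 1)[0]
    return -1.0 / (2.0 * slope)
```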

  19. Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.

    PubMed

    Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L

    2011-01-01

    Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and the image-processing algorithms in order to automatically calculate flow velocity online. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.

  20. Development of an optical inspection platform for surface defect detection in touch panel glass

    NASA Astrophysics Data System (ADS)

    Chang, Ming; Chen, Bo-Cheng; Gabayno, Jacque Lynn; Chen, Ming-Fu

    2016-04-01

    An optical inspection platform combining parallel image processing with a high-resolution opto-mechanical module was developed for defect inspection of touch panel glass. Dark-field images were acquired using a 12288-pixel line CCD camera with 3.5 µm per pixel resolution and a 12 kHz line rate. Key features of the glass surface were analyzed by parallel image processing on combined CPU and GPU platforms. Defect inspection of touch panel glass, which provided 386 megapixels of image data per sample, was completed in roughly 5 seconds. A high detection rate of surface scratches on the touch panel glass was achieved, with a minimum detectable defect size of about 10 µm. The implementation of a custom illumination source significantly improved the scattering efficiency at the surface, thereby enhancing the contrast in the acquired images and the overall performance of the inspection system.

  1. Hierarchical storage of large volume of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners have the ability to generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of generating secondary processed images and 3D rendered images as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing without the need for long-term storage in the PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple computers with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers a seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  2. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
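
    One of the processing steps above, SUV normalization, reduces to a simple rescaling of the reconstructed activity map. The sketch below shows the standard body-weight SUV definition; it is generic, not the specific conversion used in the described pipeline, and assumes the injected dose has already been decay-corrected to scan start.

```python
import numpy as np

def suv_bw(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Body-weight Standardized Uptake Value:
        SUV = tissue activity concentration / (injected dose / body weight),
    with body weight converted to grams and the usual 1 g ~ 1 mL tissue assumption."""
    return np.asarray(activity_bq_per_ml, float) * (body_weight_kg * 1000.0) / injected_dose_bq
```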

  3. Singular value decomposition metrics show limitations of detector design in diffuse fluorescence tomography

    PubMed Central

    Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.

    2010-01-01

    The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
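
    The kind of singular-value analysis described above can be reproduced on any forward (sensitivity) matrix: the number of singular modes that rise above the measurement noise floor bounds how much image information a given source-detector geometry can recover. The sketch below uses a toy, ill-conditioned matrix and an assumed relative noise level purely for illustration.

```python
import numpy as np

# Toy sensitivity (Jacobian) matrix J: rows = source-detector pairs, columns = voxels.
rng = np.random.default_rng(0)
J = rng.standard_normal((64, 400)) * np.exp(-np.linspace(0, 6, 400))

U, s, Vt = np.linalg.svd(J, full_matrices=False)
noise_floor = 1e-3 * s[0]                    # assumed relative noise level
usable = int(np.sum(s > noise_floor))        # modes recoverable above the noise
print(f"{usable} of {len(s)} singular modes exceed the noise floor")
# The retained rows of Vt are the singular image modes; their spatial frequency
# content indicates the achievable resolution for this geometry and noise level.
```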

  4. Smart concrete slabs with embedded tubular PZT transducers for damage detection

    NASA Astrophysics Data System (ADS)

    Gao, Weihang; Huo, Linsheng; Li, Hongnan; Song, Gangbing

    2018-02-01

    The objective of this study is to develop a new concept and methodology of smart concrete slab (SCS) with embedded tubular lead zirconate titanate transducer array for image based damage detection. Stress waves, as the detecting signals, are generated by the embedded tubular piezoceramic transducers in the SCS. Tubular piezoceramic transducers are used due to their capacity of generating radially uniform stress waves in a two-dimensional concrete slab (such as bridge decks and walls), increasing the monitoring range. A circular type delay-and-sum (DAS) imaging algorithm is developed to image the active acoustic sources based on the direct response received by each sensor. After the scattering signals from the damage are obtained by subtracting the baseline response of the concrete structures from those of the defective ones, the elliptical type DAS imaging algorithm is employed to process the scattering signals and reconstruct the image of the damage. Finally, two experiments, including active acoustic source monitoring and damage imaging for concrete structures, are carried out to illustrate and demonstrate the effectiveness of the proposed method.
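
    The circular delay-and-sum step for locating an active acoustic source can be written compactly: each pixel accumulates every sensor's signal sampled at the pixel-to-sensor travel time. The sketch below is a generic illustration (the elliptical variant used for damage imaging instead delays by the actuator-to-pixel plus pixel-to-sensor travel time); array names and units are placeholders, not the authors' code.

```python
import numpy as np

def das_source_image(signals, fs, sensor_xy, grid_x, grid_y, c):
    """Circular delay-and-sum imaging of an acoustic source.
    signals: (n_sensors, n_samples) array, fs: sampling rate,
    sensor_xy: (n_sensors, 2) coordinates, c: wave speed (consistent units)."""
    img = np.zeros((len(grid_y), len(grid_x)))
    n_sensors, n_samples = signals.shape
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            d = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
            idx = np.clip((d / c * fs).astype(int), 0, n_samples - 1)
            img[iy, ix] = np.abs(signals[np.arange(n_sensors), idx].sum())
    return img
```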

  5. Transforming Our Understanding of the X-ray Universe: The Imaging X-ray Polarimeter Explorer (IXPE)

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; Bellazzini, Ronaldo; Costa, Enrico; Matt, Giorgio; Marshall, Herman; ODell, Stephen L.; Pavlov, George; Ramsey, Brian; Romani, Roger

    2014-01-01

    Accurate X-ray polarimetry can provide unique information on high-energy astrophysical processes and sources. As there have been no meaningful X-ray polarization measurements of cosmic sources since our pioneering work in the 1970s, the time is ripe to explore this new parameter space in X-ray astronomy. To accomplish this requires a well-calibrated and well-understood system that, particularly for an Explorer mission, has technical, cost, and schedule credibility. The system that we shall present satisfies these conditions, being based upon completely calibrated imaging- and polarization-sensitive detectors and proven X-ray-telescope technology.

  6. Compensation of spectral artifacts in dual-modality intravascular optical coherence tomography and near-infrared spectroscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fard, Ali M.; Gardecki, Joseph A.; Ughi, Giovanni J.; Hyun, Chulho; Tearney, Guillermo J.

    2016-02-01

    Intravascular optical coherence tomography (OCT) is a high-resolution catheter-based imaging method that provides three-dimensional microscopic images of the coronary artery in vivo, facilitating coronary artery disease treatment decisions based on detailed morphology. Near-infrared spectroscopy (NIRS) has proven to be a powerful tool for identification of lipid-rich plaques inside the coronary walls. We have recently demonstrated a dual-modality intravascular imaging technology that integrates OCT and NIRS into one imaging catheter using a two-fiber arrangement and a custom-made dual-channel fiber rotary junction. It therefore enables simultaneous acquisition of microstructural and compositional information at 100 frames/second for improved diagnosis of coronary lesions. The dual-modality OCT-NIRS system employs a single wavelength-swept light source for both the OCT and NIRS modalities. It subsequently uses a high-speed photoreceiver to detect the NIRS spectrum in the time domain. Although use of one light source greatly simplifies the system configuration, such a light source exhibits pulse-to-pulse wavelength and intensity variation due to mechanical scanning of the wavelength. This can be particularly problematic for the NIRS modality and compromises the reliability of the acquired spectra. In order to address this challenge, here we developed a robust data acquisition and processing method that compensates for the spectral variations of the wavelength-swept light source. The proposed method extracts the properties of the light source, i.e., variation period and amplitude, from a reference spectrum and subsequently calibrates the NIRS datasets. We have applied this method to datasets obtained from cadaver human coronary arteries using a polygon-scanning (1230-1350nm) OCT system, operating at 100,000 sweeps per second. The results suggest that our algorithm accurately and robustly compensates for the spectral variations and visualizes the dual-modality OCT-NIRS images. These findings are therefore crucial for the practical application and clinical translation of dual-modality intravascular OCT-NIRS imaging when the same swept source is used for both OCT and spectroscopy.
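
    A minimal sketch of the general idea, normalizing each acquired NIRS sweep by a reference sweep recorded from the source itself, is shown below. The published method additionally estimates the pulse-to-pulse variation period and amplitude from the reference spectrum; that refinement is not reproduced here, and the array layout is an assumption.

```python
import numpy as np

def normalize_sweeps(sweeps, reference, eps=1e-12):
    """Divide each acquired sweep (rows of `sweeps`, one sample per wavelength
    step) by a reference sweep of the light source, removing the source's
    wavelength-dependent intensity envelope so that residual structure reflects
    the sample rather than the laser."""
    return np.asarray(sweeps, float) / (np.asarray(reference, float) + eps)
```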

  7. Landsat-8 Operational Land Imager On-Orbit Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Barsi, Julia A.

    2017-01-01

    The Operational Land Imager (OLI), the VIS/NIR/SWIR sensor on Landsat-8, has been successfully acquiring Earth imagery for more than four years. The OLI incorporates two on-board radiometric calibration systems, one diffuser-based and one lamp-based, each with multiple sources. For each system, one source is treated as primary and used frequently, and the other source(s) are used less frequently to assist in tracking any degradation in the primary sources. In addition, via a spacecraft maneuver, the OLI instrument views the moon once per lunar cycle (approx. 29 days). The integrated lunar irradiances from these acquisitions are compared to the output of a lunar irradiance model. The results from all these techniques, combined with cross calibrations with other sensors and ground-based vicarious measurements, are used to monitor the OLI's stability and correct for any changes observed. To date, the various techniques have detected significant changes only in the shortest-wavelength OLI band, centered at 443 nm, and these are currently being adjusted for in the operational processing.

  8. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  9. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with photoacoustic ophthalmoscopy and autofluorescence imaging.

    PubMed

    Zhang, Xiangyang; Zhang, Hao F; Puliafito, Carmen A; Jiao, Shuliang

    2011-08-01

    We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments and are believed to play opposite roles (protective versus exacerbating) in the RPE in the aging process. We have successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system is potentially a powerful tool in the study of age-related degenerative retinal diseases.

  10. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with photoacoustic ophthalmoscopy and autofluorescence imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyang; Zhang, Hao F.; Puliafito, Carmen A.; Jiao, Shuliang

    2011-08-01

    We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments and are believed to play opposite roles (protective versus exacerbating) in the RPE in the aging process. We have successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system is potentially a powerful tool in the study of age-related degenerative retinal diseases.

  11. The Mohorovičić discontinuity beneath the continental crust: An overview of seismic constraints

    NASA Astrophysics Data System (ADS)

    Carbonell, Ramon; Levander, Alan; Kind, Rainer

    2013-12-01

    The seismic signature of the Moho from which geologic and tectonic evolution hypotheses are derived is to a large degree a result of the seismic methodology which has been used to obtain the image. Seismic data of different types, passive source (earthquake) broad-band recordings, and controlled source seismic refraction, densely recorded wide-angle deep seismic reflection, and normal incidence reflection (using Vibroseis™, explosives, or airguns), have contributed to the description of the Moho as a relatively complex transition zone. Of critical importance for the quality and resolution of the seismic image are the acquisition parameters used in the imaging experiments. A variety of signatures have been obtained for the Moho at different scales, generally dependent upon the bandwidth of the seismic source. This variety prevents the development of a single universally applicable interpretation. In this way, source frequency content and source and sensor spacing determine the vertical and lateral resolution of the images, respectively. In most cases the different seismic probes provide complementary data that gives a fuller picture of the physical structure of the Moho, and its relationship to a petrologic crust-mantle transition. In regional seismic studies carried out using passive source recordings, the Moho is a relatively well defined structure with marked lateral continuity. The characteristics of this boundary change depending on the geology and tectonic evolution of the targeted area. Refraction and wide-angle studies suggest the Moho to be often a relatively sharp velocity contrast, whereas the Moho in coincident high quality seismic reflection images is often seen as the abrupt downward decrease in seismic reflectivity. The origin of the Moho and its relation to the crust-mantle boundary is probably better constrained by careful analysis of its internal details, which can be complex and geographically varied. Unlike the oceanic Moho, which is formed in a relatively simple, well understood process, the continental Moho can be subject to an extensive variety of tectonic processes, making overarching conclusions about the continental Moho difficult. Speaking very broadly: 1) In orogenic belts still undergoing compression and in active continental volcanic arcs, the Moho evolves with the mountain belt; 2) In collapsed Phanerozoic orogenic belts the Moho under the collapse structure was formed during the collapse, often by a combination of processes; 3) In regions having experienced widespread basaltic volcanism, the Moho can result from underplated basalt and basaltic residuum. In some Precambrian terranes the Moho may be as ancient as the formation of the crust; in others, Precambrian tectonic and magmatic processes have reset it. We note that seismic reflection data in Phanerozoic orogens as well as from Precambrian cratonic terranes often show thrust-type structures extending as deep as the Moho, and suggest that even where crust and mantle xenoliths provide similar age-of-formation dates, the crust may be semi-allochthonous.

  12. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades when the number of sources is estimated inaccurately. Many excellent algorithms have been proposed for source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. The single-sensor data are converted into a multi-dimensional form by a delay process, and the data covariance matrix is constructed, so that the estimation algorithms used in array signal processing can be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the number of sources in the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although it performs poorly at low SNR, is able to estimate the number of sources accurately under colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor received data.
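
    As an illustration of the delay-embedding plus information-theoretic-criterion approach described above (a generic sketch, not the paper's exact smoothing variant), the function below embeds a single-channel record, forms the covariance matrix and minimizes the MDL criterion of Wax and Kailath over its ordered eigenvalues.

```python
import numpy as np

def mdl_source_number(x, embed_dim=8, delay=1):
    """Estimate the number of sources in a single-channel recording x by
    delay-embedding it into embed_dim-dimensional snapshots, forming the sample
    covariance matrix and minimizing the MDL criterion over its eigenvalues."""
    x = np.asarray(x, float)
    n = len(x) - (embed_dim - 1) * delay
    X = np.stack([x[i * delay:i * delay + n] for i in range(embed_dim)])  # (p, N)
    lam = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]                    # descending
    p, N = embed_dim, n
    mdl = []
    for k in range(p):
        tail = lam[k:]
        geo, ari = np.exp(np.mean(np.log(tail))), np.mean(tail)
        mdl.append(-N * (p - k) * np.log(geo / ari) + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(mdl))
```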

  13. Guided wave imaging of oblique reflecting interfaces in pipes using common-source synthetic focusing

    NASA Astrophysics Data System (ADS)

    Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng

    2018-04-01

    Cross-mode-family mode conversion and secondary reflection of guided waves in pipes complicate the processing of guided wave signals and can cause false detections. In this paper, filters operating in the spectral domain of wavenumber, circumferential order and frequency are designed to suppress the signal components of unwanted mode families and unwanted traveling directions. Common-source synthetic focusing is used to reconstruct defect images from the guided wave signals. Simulations of the reflections from linear oblique defects and a semicircular defect are separately implemented. Defect images, which are reconstructed from the simulation results under different excitation conditions, are comparatively studied in terms of axial resolution, reflection amplitude, detectable oblique angle and so on. Further, the proposed method is experimentally validated by detecting linear cracks with various oblique angles (10-40°). The proposed method relies on the guided wave signals that are captured during 2-D scanning of a cylindrical area on the pipe. The redundancy of the signals is analyzed to reduce the time consumption of the scanning process and to enhance the practicability of the proposed method.

  14. How to COAAD Images. II. A Coaddition Image that is Optimal for Any Purpose in the Background-dominated Noise Limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zackay, Barak; Ofek, Eran O.

    Image coaddition is one of the most basic operations that astronomers perform. In Paper I, we presented the optimal ways to coadd images in order to detect faint sources and to perform flux measurements under the assumption that the noise is approximately Gaussian. Here, we build on these results and derive from first principles a coaddition technique that is optimal for any hypothesis testing and measurement (e.g., source detection, flux or shape measurements, and star/galaxy separation), in the background-noise-dominated case. This method has several important properties. The pixels of the resulting coadded image are uncorrelated. This image preserves all the information (from the original individual images) on all spatial frequencies. Any hypothesis testing or measurement that can be done on all the individual images simultaneously, can be done on the coadded image without any loss of information. The PSF of this image is typically as narrow, or narrower than the PSF of the best image in the ensemble. Moreover, this image is practically indistinguishable from a regular single image, meaning that any code that measures any property on a regular astronomical image can be applied to it unchanged. In particular, the optimal source detection statistic derived in Paper I is reproduced by matched filtering this image with its own PSF. This coaddition process, which we call proper coaddition, can be understood as the maximum signal-to-noise ratio measurement of the Fourier transform of the image, weighted in such a way that the noise in the entire Fourier domain is of equal variance. This method has important implications for multi-epoch seeing-limited deep surveys, weak lensing galaxy shape measurements, and diffraction-limited imaging via speckle observations. The last topic will be covered in depth in future papers. We provide an implementation of this algorithm in MATLAB.
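
    A compact sketch of the Fourier-domain weighting described above is given below. It follows the matched-filter-and-whiten structure of proper coaddition (per-image flux zero-points F_j, background noise σ_j and PSFs P_j), but it is a simplified illustration and not a substitute for the authors' MATLAB implementation; the normalization of the output PSF is omitted.

```python
import numpy as np

def proper_coadd(images, psfs, sigmas, fluxes=None, eps=1e-12):
    """Sketch of proper coaddition: match-filter each image by its own PSF,
    weight by flux zero-point and inverse noise variance, and whiten the sum so
    the coadd noise has equal variance at all spatial frequencies.
    images/psfs: lists of same-shape 2-D arrays (PSFs centered, unit sum);
    sigmas: per-image background noise std; fluxes: flux zero-points (default 1)."""
    if fluxes is None:
        fluxes = [1.0] * len(images)
    num = np.zeros(images[0].shape, dtype=complex)
    den = np.zeros(images[0].shape)
    for img, psf, sig, f in zip(images, psfs, sigmas, fluxes):
        P = np.fft.fft2(np.fft.ifftshift(psf))
        num += (f / sig**2) * np.conj(P) * np.fft.fft2(img)
        den += (f**2 / sig**2) * np.abs(P)**2
    return np.fft.ifft2(num / np.sqrt(den + eps)).real
```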

  15. More flexibility in representing geometric distortion in astronomical images

    NASA Astrophysics Data System (ADS)

    Shupe, David L.; Laher, Russ R.; Storrie-Lombardi, Lisa; Surace, Jason; Grillmair, Carl; Levitan, David; Sesar, Branimir

    2012-09-01

    A number of popular software tools in the public domain are used by astronomers, professional and amateur alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a common standard. For the case of image distortion, SCAMP and SExtractor, available from Astromatic.net, perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely-used astrometric-calibration service, Astrometry.net, solves for distortion in pixel coordinates using the SIP convention that was introduced by the Spitzer Science Center. Up until now, due to the complexity of these distortion representations, it was very difficult to use the output of one of these packages as input to the other. New Python software, along with faster-computing C-language translations, has been developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP followed by image visualization with tools that support SIP distortion, but not PV. The software has been incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS images with headers containing both distortion representations. The software permits the conversion of archived images, such as from the Spitzer Heritage Archive and NASA/IPAC Infrared Science Archive, from SIP to PV or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV distortion convention.

  16. Coherent diffraction surface imaging in reflection geometry.

    PubMed

    Marathe, Shashidhara; Kim, S S; Kim, S N; Kim, Chan; Kang, H C; Nickles, P V; Noh, D Y

    2010-03-29

    We present a reflection-based coherent diffraction imaging method which can be used to reconstruct a non-periodic surface image from a diffraction amplitude measured in reflection geometry. Using a He-Ne laser, we demonstrated that a surface image can be reconstructed solely from the reflected intensity from a surface without relying on any prior knowledge of the sample object or the object support. The reconstructed phase image of the exit wave is particularly interesting since it can be used to obtain quantitative information on the surface depth profile or the phase change during the reflection process. We believe that this work will broaden the application areas of coherent diffraction imaging techniques using light sources with limited penetration depth.
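
    For readers unfamiliar with iterative phase retrieval, the sketch below shows a generic error-reduction loop that alternates between a measured Fourier amplitude and a real-space support constraint. The method above is notable precisely because it does not require a known support, so this is background illustration only, with hypothetical inputs, and not the authors' algorithm.

        import numpy as np

        def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
            """Generic error-reduction loop: alternate between the measured Fourier
            modulus and a real-space support constraint (illustration only)."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0.0, 2.0 * np.pi, measured_amplitude.shape)
            G = measured_amplitude * np.exp(1j * phase)
            for _ in range(n_iter):
                g = np.fft.ifft2(G).real
                g = np.where(support, np.clip(g, 0.0, None), 0.0)   # support + positivity
                G = np.fft.fft2(g)
                G = measured_amplitude * np.exp(1j * np.angle(G))   # keep measured modulus
            return g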

  17. Fast high-energy X-ray imaging for Severe Accidents experiments on the future PLINIUS-2 platform

    NASA Astrophysics Data System (ADS)

    Berge, L.; Estre, N.; Tisseur, D.; Payan, E.; Eck, D.; Bouyer, V.; Cassiaut-Louis, N.; Journeau, C.; Tellier, R. Le; Pluyette, E.

    2018-01-01

    The future PLINIUS-2 platform of CEA Cadarache will be dedicated to the study of corium interactions in severe nuclear accidents, and will host innovative large-scale experiments. The Nuclear Measurement Laboratory of CEA Cadarache is in charge of real-time high-energy X-ray imaging set-ups, for the study of the corium-water and corium-sodium interaction, and of the corium stratification process. Imaging such large and high-density objects requires a 15 MeV linear electron accelerator coupled to a tungsten target creating a high-energy Bremsstrahlung X-ray flux, with a corresponding dose rate of about 100 Gy/min at 1 m. The signal is detected by phosphor screens coupled to high-framerate scientific CMOS cameras. The imaging set-up is designed using experimentally validated, home-made simulation software (MODHERATO). The code computes quantitative radiographic signals from the description of the source, object geometry and composition, detector, and geometrical configuration (magnification factor, etc.). It accounts for several noise sources (photonic and electronic noise, Swank and readout noise), and for image blur due to the source spot size and to the detector unsharpness. With a view to PLINIUS-2, the simulation has been improved to account for the scattered flux, which is expected to be significant. The paper presents the scattered-flux calculation using the MCNP transport code, and its integration into the MODHERATO simulation. The validation of the improved simulation is then presented, through comparison with real measurement images taken on a small-scale equivalent set-up on the PLINIUS platform. Excellent agreement is achieved. This improved simulation is therefore being used to design the PLINIUS-2 imaging set-ups (source, detectors, cameras, etc.).
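
    As a much-simplified stand-in for the kind of forward model described above (and not MODHERATO itself), the sketch below applies Beer-Lambert attenuation, a Gaussian blur for source spot size and detector unsharpness, and Poisson photon noise; all parameter names and values are hypothetical.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def toy_radiograph(thickness_cm, mu_per_cm, i0_photons, spot_blur_px, det_blur_px, seed=0):
            """Beer-Lambert attenuation, blur for spot size and detector unsharpness,
            then Poisson (photonic) noise; a toy forward model only."""
            transmission = np.exp(-mu_per_cm * thickness_cm)
            blurred = gaussian_filter(transmission, sigma=np.hypot(spot_blur_px, det_blur_px))
            rng = np.random.default_rng(seed)
            return rng.poisson(i0_photons * blurred).astype(float)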

  18. Introduction to Remote Sensing Image Registration

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2017-01-01

    For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data and the type of accuracy that is required. This paper will first present a general overview of remote sensing image registration and then go over a few specific methods and their applications.
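
    One widely used area-based building block for the registration steps mentioned above is phase correlation, which estimates a pure translation between two images; the sketch below is a generic illustration of that idea, not a method taken from this overview, and the sign convention of the recovered shift should be checked against the data at hand.

        import numpy as np

        def phase_correlation_shift(reference, target):
            """Estimate the integer-pixel translation between two same-sized images."""
            F1 = np.fft.fft2(reference)
            F2 = np.fft.fft2(target)
            cross_power = F1 * np.conj(F2)
            cross_power /= np.abs(cross_power) + 1e-12           # keep phase information only
            corr = np.real(np.fft.ifft2(cross_power))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))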

  19. First Science Verification of the VLA Sky Survey Pilot

    NASA Astrophysics Data System (ADS)

    Cavanaugh, Amy

    2017-01-01

    My research involved analyzing test images produced by Steve Myers for the upcoming VLA Sky Survey. This survey will cover the entire sky visible from the VLA site in S band (2-4 GHz). The VLA will be in B configuration for the survey, as it was when the test images were produced, giving a resolution of approximately 2.5 arcseconds. Conducted using On-the-Fly mode, the survey will have a speed of approximately 20 deg^2 hr^-1 (including overhead). New Python imaging scripts are being developed and improved to process the VLASS images. My research consisted of comparing a continuum test image over S band (from the new imaging scripts) to two previous images of the same region of the sky (from the CNSS and FIRST surveys), as well as comparing the continuum image to single spectral windows (from the new imaging scripts and of the same sky region). By comparing our continuum test image to images from CNSS and FIRST, we tested On-the-Fly mode and the imaging script used to produce our images. Another goal was to test whether individual spectral windows could be used in combination to calculate spectral indices close to those produced over S band (based only on our continuum image). Our continuum image contained 64 sources as opposed to the 99 sources found in the CNSS image. The CNSS image also had a lower noise level (0.095 mJy/beam compared to 0.119 mJy/beam). Additionally, when our continuum image was compared to the CNSS image, separation showed no dependence on total flux density (in our continuum image). At lower flux densities, sources in our image were brighter than the same ones in the CNSS image. When our continuum image was compared to the FIRST catalog, the spectral index difference showed no dependence on total flux (in our continuum image). In conclusion, the quality of our images did not completely match the quality of the CNSS and FIRST images. More work is needed in developing the new imaging scripts.
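
    The spectral-index comparison described above reduces to a simple two-frequency power-law fit; the sketch below shows that calculation with illustrative (not measured) flux densities, assuming S is proportional to nu**alpha.

        import numpy as np

        def spectral_index(s1, nu1, s2, nu2):
            """alpha such that S is proportional to nu**alpha (units cancel)."""
            return np.log(s1 / s2) / np.log(nu1 / nu2)

        # Hypothetical flux densities (mJy) at two spectral-window centres (GHz):
        print(spectral_index(1.2, 2.5, 1.0, 3.5))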

  20. Spatially resolved imaging of opto-electrical property variations

    DOEpatents

    Nikiforov, Maxim; Darling, Seth B; Suzer, Ozgun; Guest, Jeffrey; Roelofs, Andreas

    2014-09-16

    Systems and methods for measuring opto-electrical properties are provided. A light source illuminates a sample. A reference detector senses light from the light source. A sample detector receives light from the sample. A positioning fixture allows for relative positioning of the sample or the light source with respect to each other. An electrical signal device measures the electrical properties of the sample. The reference detector, sample detector and electrical signal device provide information that may be processed to determine opto-electrical properties of the sample.

  1. Design of FPGA ICA for hyperspectral imaging processing

    NASA Astrophysics Data System (ADS)

    Nordin, Anis; Hsu, Charles C.; Szu, Harold H.

    2001-03-01

    The remote sensing problem which uses hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in the pixel. This can be further used to deduce areas which contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in that region. The blind source separation problem can be implemented using an Independent Component Analysis algorithm. The ICA algorithm has previously been successfully implemented using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in a form of hardware, or firmware, in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high-resolution images and a large number of channels. Here, a pipelined solution for the firmware, realized using FPGAs, is drawn out and simulated in C. Since C code can be translated into HDLs or be used directly on the FPGAs, it can be used to simulate the actual implementation in hardware. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
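
    As a software stand-in for the FPGA/firmware implementation described above, the sketch below runs scikit-learn's FastICA on a synthetic mixture of a few "material spectra"; the mixing model and all numbers are illustrative only.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_pixels, n_bands, n_materials = 1000, 7, 3

        endmembers = rng.random((n_materials, n_bands))            # synthetic "material spectra"
        abundances = rng.dirichlet(np.ones(n_materials), size=n_pixels)
        observed = abundances @ endmembers + 0.01 * rng.standard_normal((n_pixels, n_bands))

        ica = FastICA(n_components=n_materials, random_state=0)
        unmixed = ica.fit_transform(observed)                      # per-pixel independent components
        print(unmixed.shape)                                       # (1000, 3)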

  2. Refractive Optics for Hard X-ray Transmission Microscopy

    NASA Astrophysics Data System (ADS)

    Simon, M.; Ahrens, G.; Last, A.; Mohr, J.; Nazmov, V.; Reznikova, E.; Voigt, A.

    2011-09-01

    For hard x-ray transmission microscopy at photon energies higher than 15 keV, we design refractive condenser and imaging elements to be used with synchrotron light sources as well as with x-ray tube sources. The condenser lenses are optimized for low x-ray attenuation—resulting in apertures greater than 1 mm—and homogeneous intensity distribution on the detector plane, whereas the imaging lenses enable high-resolution (<100 nm) full-field imaging. To obtain high image quality at reasonable exposure times, custom-tailored matched pairs of condenser and imaging lenses are being developed. The imaging lenses (compound refractive lenses, CRLs) are made of SU-8 negative resist by deep x-ray lithography. SU-8 shows high radiation stability. The fabrication technique enables high-quality lens structures regarding surface roughness and arrangement precision with arbitrary 2D geometry. To provide point foci, crossed pairs of lenses are used. Condenser lenses have also been made utilizing deep x-ray lithographic patterning of thick SU-8 layers, though in this case the aperture is limited by process restrictions. Thus, in terms of large apertures, condenser lenses made of structured and rolled polyimide film are more attractive. Both condenser types, x-ray mosaic lenses and rolled x-ray prism lenses (RXPLs), are considered to be implemented into a microscope setup. The x-ray optical elements mentioned above are characterized with synchrotron radiation and x-ray laboratory sources, respectively.
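
    A common back-of-the-envelope relation for compound refractive lenses is f ≈ R / (2 N δ), with apex radius R, N bi-parabolic elements and refractive-index decrement δ; the sketch below evaluates it with purely illustrative numbers that are not taken from this paper.

        def crl_focal_length_m(apex_radius_um, n_lenses, delta):
            """f ~ R / (2 N delta); R in micrometres, result in metres."""
            return (apex_radius_um * 1e-6) / (2 * n_lenses * delta)

        # Illustrative numbers only (delta depends on material and photon energy):
        print(crl_focal_length_m(apex_radius_um=5.0, n_lenses=100, delta=1e-6))  # 0.025 m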

  3. Integrated system for automated financial document processing

    NASA Astrophysics Data System (ADS)

    Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai

    1997-02-01

    A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high-accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine-printed business checks using the integrated system are also reported.
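
    A minimal, hedged sketch of the blackboard idea described above follows: several recognition "knowledge sources" post candidate readings for a field, and the blackboard keeps the highest-confidence answer. The engines here are trivial stubs, not the recognition engines used in the actual system.

        from dataclasses import dataclass, field

        @dataclass
        class Blackboard:
            """Shared blackboard collecting (value, confidence) candidates from engines."""
            candidates: list = field(default_factory=list)

            def post(self, value, confidence):
                self.candidates.append((value, confidence))

            def best(self):
                return max(self.candidates, key=lambda c: c[1], default=None)

        # Trivial stand-in "recognition engines"; each returns a reading and a confidence.
        engines = [lambda zone: ("125.40", 0.82),
                   lambda zone: ("125.40", 0.91),
                   lambda zone: ("126.40", 0.55)]

        board = Blackboard()
        for engine in engines:
            board.post(*engine("courtesy_amount_zone"))   # each engine contributes a guess
        print(board.best())                               # -> ('125.40', 0.91)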

  4. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
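
    The direct lighting term discussed above is commonly estimated by Monte Carlo sampling of the light source area; the sketch below is a minimal estimator for one rectangular area light and a Lambertian surface point, ignoring occlusion, with purely illustrative geometry and radiance values. This is generic background, not the dissertation's new sampling method.

        import numpy as np

        def direct_lighting(point, normal, albedo, light_corner, light_u, light_v,
                            light_radiance, n_samples=1024, seed=0):
            """Estimate outgoing radiance at `point` due to one rectangular area light,
            by uniform sampling of the light surface (occlusion is ignored)."""
            rng = np.random.default_rng(seed)
            cross = np.cross(light_u, light_v)
            area = np.linalg.norm(cross)
            light_normal = cross / area
            total = 0.0
            for _ in range(n_samples):
                y = light_corner + rng.random() * light_u + rng.random() * light_v
                d = y - point
                r2 = d @ d
                w = d / np.sqrt(r2)                      # direction towards the light sample
                cos_x = max(normal @ w, 0.0)             # cosine at the shaded point
                cos_y = abs(light_normal @ w)            # treat the light as double-sided
                total += light_radiance * cos_x * cos_y / r2
            return (albedo / np.pi) * area * total / n_samples   # (A/N) * sum f_r * Le * G

        # Illustrative call: unit square light 2 m above a Lambertian point.
        print(direct_lighting(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 0.8,
                              np.array([-0.5, -0.5, 2.0]), np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0]), light_radiance=10.0))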

  5. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Astrophysics Data System (ADS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; Grillmair, Carl J.; Jackson, Ed; Barlow, Tom; Yan, Lin; Cao, Yi; Cenko, S. Bradley; Storrie-Lombardi, Lisa J.; Helou, George; Prince, Thomas A.; Kulkarni, Shrinivas R.

    2017-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by ≃10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ≃97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.
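
    As a hedged illustration of how an efficiency at a fixed false-positive budget (as quoted above) can be read off a classifier's scores, the sketch below uses scikit-learn's ROC utilities on synthetic stand-in labels and scores, not on iPTF data.

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        y_true = rng.integers(0, 2, size=5000)                    # 1 = real, 0 = bogus (synthetic)
        scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 5000), 0.0, 1.0)

        fpr, tpr, thresholds = roc_curve(y_true, scores)
        max_fpr = 0.01                                            # 1% false-positive budget
        idx = np.searchsorted(fpr, max_fpr, side="right") - 1
        print(f"completeness at FPR <= {max_fpr:.0%}: {tpr[idx]:.2%} "
              f"(score threshold {thresholds[idx]:.2f})")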

  6. The IPAC Image Subtraction and Discovery Pipeline for the Intermediate Palomar Transient Factory

    NASA Technical Reports Server (NTRS)

    Masci, Frank J.; Laher, Russ R.; Rebbapragada, Umaa D.; Doran, Gary B.; Miller, Adam A.; Bellm, Eric; Kasliwal, Mansi; Ofek, Eran O.; Surace, Jason; Shupe, David L.; et al.

    2016-01-01

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, bogus candidates from processing artifacts and imperfect image subtractions outnumber real transients by approximately 10:1. This can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of approximately 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.

  7. StreakDet data processing and analysis pipeline for space debris optical observations

    NASA Astrophysics Data System (ADS)

    Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri

    We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data to support the development and validation of space debris environment models, and the build-up and maintenance of a catalogue of orbital elements. In addition, data are needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparatively slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a “track before detect” problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract position information and the related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, there is a need to discuss and compare these approaches for space debris analysis, in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The proposed processing pipeline starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. A central concept of the pipeline is streak classification, which guides the actual characterization process by aiming to identify the interesting sources and to filter out the uninteresting ones, as well as by allowing the tailoring of algorithms for specific streak classes (e.g. point-like vs. long, disintegrated streaks). To validate the single-image detections, the processing is finalized by orbital analysis, resulting in preliminary orbital classification (Earth-bound vs. non-Earth-bound orbit) for the detected streaks.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silvani, M. I.; Almeida, G. L.; Lopes, R. T.

    Radiographic images acquired with point-like gamma-ray sources exhibit desirably low penumbra effects, especially when the source is positioned far away from the object-detector set. Such an arrangement frequently is not affordable due to the limited flux provided by a distant source. A closer source, however, has two main drawbacks, namely the degradation of the spatial resolution - as actual sources are only approximately point-like - and the non-homogeneity of the beam hitting the detector, which creates a false attenuation map of the object being inspected. This non-homogeneity is caused by the beam divergence itself and by the different thicknesses traversed by the beam, even if the object were a homogeneous flat plate. In this work, radiographic images of objects with different geometries, such as flat plates and pipes, have undergone a correction of beam divergence and attenuation, addressing the experimental verification of the capability and soundness of an algorithm formerly developed to generate and process synthetic images. The impact of other parameters, including source-detector gap, attenuation coefficient, ratio of defective-to-main hull thickness and counting statistics, has been assessed for specifically tailored test objects, aiming at the evaluation of the ability of the proposed method to deal with different boundary conditions. All experiments have been carried out with an X-ray-sensitive imaging plate and reactor-produced 198Au and 165Dy sources. The results have been compared with another technique, showing a better capability to correct the attenuation map of inspected objects, unveiling their inner structure otherwise concealed by the poor contrast caused by the beam divergence and attenuation, in particular for those regions far from the vertical of the source.
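
    A much-simplified, hedged illustration of the two effects discussed above follows: divide out the detector response to the bare divergent beam (a flat field) and invert the Beer-Lambert law to obtain an attenuation map. The array names are hypothetical and this is not the authors' correction algorithm.

        import numpy as np

        def attenuation_map(radiograph, flat_field, eps=1e-9):
            """Return mu*t per pixel, with the divergent-beam profile divided out."""
            transmission = radiograph / np.maximum(flat_field, eps)   # flat-field correction
            return -np.log(np.clip(transmission, eps, None))          # Beer-Lambert inversion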

  9. ScipionCloud: An integrative and interactive gateway for large scale cryo electron microscopy image processing on commercial and academic clouds.

    PubMed

    Cuenca-Alba, Jesús; Del Cano, Laura; Gómez Blanco, Josué; de la Rosa Trevín, José Miguel; Conesa Mingo, Pablo; Marabini, Roberto; S Sorzano, Carlos Oscar; Carazo, Jose María

    2017-10-01

    New instrumentation for cryo electron microscopy (cryoEM) has significantly increased the data collection rate as well as data quality, creating bottlenecks at the image processing level. The current image processing model of moving the acquired images from the data source (electron microscope) to desktops or local clusters for processing is encountering many practical limitations. However, computing may also take place in distributed and decentralized environments. In this context, the cloud is a new form of accessing computing and storage resources on demand. Here, we evaluate how this new computational paradigm can be effectively used by extending our current integrative framework for image processing, creating ScipionCloud. This new development has resulted in a full installation of Scipion both in public and private clouds, accessible as public "images", with all the required cryoEM software preinstalled, requiring just a Web browser to access all graphical user interfaces. We have profiled the performance of different configurations on Amazon Web Services and the European Federated Cloud, always on architectures incorporating GPUs, and compared them with a local facility. We have also analyzed the economic convenience of different scenarios, so cryoEM scientists have a clearer picture of the setup that is best suited for their needs and budgets. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Generation of Digital Surface Models from satellite photogrammetry: the DSM-OPT service of the ESA Geohazards Exploitation Platform (GEP)

    NASA Astrophysics Data System (ADS)

    Stumpf, André; Michéa, David; Malet, Jean-Philippe

    2017-04-01

    The continuously increasing fleet of agile stereo-capable very-high-resolution (VHR) optical satellites has facilitated the acquisition of multi-view images of the earth surface. Theoretical revisit times have been reduced to less than one day, and the highest spatial resolution which is commercially available now amounts to 30 cm/pixel. Digital Surface Models (DSM) and point clouds computed from such satellite stereo-acquisitions can provide valuable input for studies in geomorphology, tectonics, glaciology, hydrology and urban remote sensing. The photogrammetric processing, however, still requires significant expertise, computational resources and costly commercial software. To enable a large Earth Science community (researchers and end-users) to easily and rapidly process VHR multi-view images, this work targets the implementation of a fully automatic satellite-photogrammetry pipeline (i.e., DSM-OPT) on the ESA Geohazards Exploitation Platform (GEP). The implemented pipeline is based on the open-source photogrammetry library MicMac [1] and is designed for distributed processing on a cloud-based infrastructure. The service can be employed in pre-defined processing modes (i.e., urban, plain, hilly, and mountainous environments) or in an advanced processing mode (i.e., in which expert users have the possibility to adapt the processing parameters to their specific applications). Four representative use cases are presented to illustrate the accuracy of the resulting surface models and ortho-images as well as the overall processing time. These use cases consisted of the construction of surface models from series of Pléiades images for four applications: urban analysis (Strasbourg, France), landslide detection in mountainous environments (South French Alps), co-seismic deformation in mountain environments (Central Italy earthquake sequence of 2016) and fault recognition for paleo-tectonic analysis (North-East India). Comparisons of the satellite-derived topography to airborne LiDAR topography are discussed. [1] Rupnik, E., Pierrot Deseilligny, M., Delorme, A., and Klinger, Y.: Refined satellite image orientation in the free open-source photogrammetric tools APERO/MICMAC, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-1, 83-90, doi:10.5194/isprs-annals-III-1-83-2016, 2016.

  11. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    PubMed Central

    Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576

  12. PlantCV v2: Image analysis software for high-throughput plant phenotyping.

    PubMed

    Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  13. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  14. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE PAGES

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...

    2017-12-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  15. Smartphone Cortex Controlled Real-Time Image Processing and Reprocessing for Concentration Independent LED Induced Fluorescence Detection in Capillary Electrophoresis.

    PubMed

    Szarka, Mate; Guttman, Andras

    2017-10-17

    We present the application of a smartphone-anatomy-based technology in the field of liquid-phase bioseparations, particularly in capillary electrophoresis. A simple capillary electrophoresis system was built with LED-induced fluorescence detection and a credit-card-sized minicomputer to prove the concept of a real-time fluorescence imager (zone-adjustable time-lapse fluorescence image processor) and separation controller. The system was evaluated by analyzing under- and overloaded aminopyrenetrisulfonate (APTS)-labeled oligosaccharide samples. The open-source-software-based image processing tool allowed undistorted signal modulation (reprocessing) if the signal was inappropriate for the actual detection system settings (too low or too high). The novel smart detection tool for fluorescently labeled biomolecules greatly expands the dynamic range and enables retrospective correction for injections with unsuitable signal levels without the necessity of repeating the analysis.
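
    A hedged sketch of the zone-based, time-lapse idea described above: average the pixel intensity inside a detection window on each frame to build a signal trace, then rescale (reprocess) it with a software gain. The frames, window bounds and gain below are synthetic placeholders, not the authors' implementation.

        import numpy as np

        def zone_trace(frames, row_slice, col_slice, gain=1.0):
            """Mean intensity of a detection window on each frame, times a software gain."""
            return gain * np.array([frame[row_slice, col_slice].mean() for frame in frames])

        rng = np.random.default_rng(0)
        frames = rng.random((100, 64, 64))                       # 100 synthetic frames
        trace = zone_trace(frames, slice(28, 36), slice(30, 34), gain=2.5)
        print(trace.shape)                                       # (100,) intensity-vs-time trace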

  16. Image analysis for quantification of bacterial rock weathering.

    PubMed

    Puente, M Esther; Rodriguez-Jaramillo, M Carmen; Li, Ching Y; Bashan, Yoav

    2006-02-01

    A fast, quantitative image analysis technique was developed to assess potential rock weathering by bacteria. The technique is based on the reduction in the surface area of rock particles and counting the relative increase in the number of small particles in ground rock slurries. This was done by recording changes in ground rock samples with an electronic image analyzing process. The slurries were previously amended with three carbon sources, ground to a uniform particle size and incubated with rock-weathering bacteria for 28 days. The technique was developed and tested using two rock-weathering bacteria, Pseudomonas putida R-20 and Azospirillum brasilense Cd, on marble, granite, apatite, quartz, limestone, and volcanic rock as substrates. The image analyzer processed a large number of particles (10^7-10^8 per sample), so that the weathering capacity of bacteria could be detected.
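
    The kind of measurement described above can be approximated with standard open-source tools; the sketch below counts particles and their pixel areas in a thresholded image using scipy, with an illustrative random image and threshold rather than the commercial image analyser used in the study.

        import numpy as np
        from scipy import ndimage

        def particle_stats(gray_image, threshold):
            """Count bright particles and their pixel areas in a binarised image."""
            particles = gray_image > threshold
            labels, n = ndimage.label(particles)                 # connected components
            areas = np.bincount(labels.ravel())[1:]              # pixel area per particle
            return n, areas

        rng = np.random.default_rng(0)
        n, areas = particle_stats(rng.random((512, 512)), 0.995) # illustrative image/threshold
        print(n, areas.mean())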

  17. Spectrally and Radiometrically Stable Wide-Band on Board Calibration Source for In-Flight Data Validation in Imaging Spectroscopy Applications

    NASA Technical Reports Server (NTRS)

    Coles, J. B.; Richardson, Brandon S.; Eastwood, Michael L.; Sarture, Charles M.; Quetin, Gregory R.; Hernandez, Marco A.; Kroll, Linley A.; Nolte, Scott H.; Porter, Michael D.; Green, Robert O.

    2011-01-01

    The quality of the quantitative spectral data collected by an imaging spectrometer instrument is critically dependent upon the accuracy of the spectral and radiometric calibration of the system. In order for the collected spectra to be scientifically useful, the calibration of the instrument must be precisely known not only prior to but during data collection. Thus, in addition to a rigorous in-lab calibration procedure, the airborne instruments designed and built by the NASA/JPL Imaging Spectroscopy Group incorporate an on board calibrator (OBC) system with the instrument to provide auxiliary in-use system calibration data. The output of the OBC source illuminates a target panel on the backside of the foreoptics shutter both before and after data collection. The OBC and in-lab calibration data sets are then used to validate and post-process the collected spectral image data. The resulting accuracy of the spectrometer output data is therefore integrally dependent upon the stability of the OBC source. In this paper we describe the design and application of the latest iteration of this novel device developed at NASA/JPL which integrates a halogen-cycle source with a precisely designed fiber coupling system and a fiber-based intensity monitoring feedback loop. The OBC source in this Airborne Testbed Spectrometer was run over a period of 15 hours while both the radiometric and spectral stabilities of the output were measured and demonstrated stability to within 1% of nominal.

  18. Imaging an Event Horizon: Mitigation of Source Variability of Sagittarius A*

    NASA Astrophysics Data System (ADS)

    Lu, Ru-Sen; Roelofs, Freek; Fish, Vincent L.; Shiokawa, Hotaka; Doeleman, Sheperd S.; Gammie, Charles F.; Falcke, Heino; Krichbaum, Thomas P.; Zensus, J. Anton

    2016-02-01

    The black hole in the center of the Galaxy, associated with the compact source Sagittarius A* (Sgr A*), is predicted to cast a shadow upon the emission of the surrounding plasma flow, which encodes the influence of general relativity (GR) in the strong-field regime. The Event Horizon Telescope (EHT) is a Very Long Baseline Interferometry (VLBI) network with a goal of imaging nearby supermassive black holes (in particular Sgr A* and M87) with angular resolution sufficient to observe strong gravity effects near the event horizon. General relativistic magnetohydrodynamic (GRMHD) simulations show that radio emission from Sgr A* exhibits variability on timescales of minutes, much shorter than the duration of a typical VLBI imaging experiment, which usually takes several hours. A changing source structure during the observations, however, violates one of the basic assumptions needed for aperture synthesis in radio interferometry imaging to work. By simulating realistic EHT observations of a model movie of Sgr A*, we demonstrate that an image of the average quiescent emission, featuring the characteristic black hole shadow and photon ring predicted by GR, can nonetheless be obtained by observing over multiple days and subsequent processing of the visibilities (scaling, averaging, and smoothing) before imaging. Moreover, it is shown that this procedure can be combined with an existing method to mitigate the effects of interstellar scattering. Taken together, these techniques allow the black hole shadow in the Galactic center to be recovered on the reconstructed image.

  19. Technical Note: DIRART – A software suite for deformable image registration and adaptive radiotherapy research

    PubMed Central

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-01

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR result visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176

  20. X-Ray Processing of ChaMPlane Fields: Methods and Initial Results for Selected Anti-Galactic Center Fields

    NASA Astrophysics Data System (ADS)

    Hong, JaeSub; van den Berg, Maureen; Schlegel, Eric M.; Grindlay, Jonathan E.; Koenig, Xavier; Laycock, Silas; Zhao, Ping

    2005-12-01

    We describe the X-ray analysis procedure of the ongoing Chandra Multiwavelength Plane (ChaMPlane) Survey and report the initial results from the analysis of 15 selected anti-Galactic center observations (90deg

  1. Inselect: Automating the Digitization of Natural History Collections

    PubMed Central

    Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.

    2015-01-01

    The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208

  2. Inselect: Automating the Digitization of Natural History Collections.

    PubMed

    Hudson, Lawrence N; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W; van der Walt, Stéfan; Smith, Vincent S

    2015-01-01

    The world's natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect, a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization.

  3. Real time diffuse reflectance polarisation spectroscopy imaging to evaluate skin microcirculation

    NASA Astrophysics Data System (ADS)

    O'Doherty, Jim; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Leahy, Martin J.

    2007-07-01

    This article describes the theoretical development and design of a real-time microcirculation imaging system, an extension of a technology previously developed by our group. The technology utilises polarisation spectroscopy, a technique used to selectively gate photons returning from various compartments of human skin tissue, namely the superficial layers of the epidermis and the deeper backscattered light from the dermal matrix. A consumer-end digital camcorder captures colour data with three individual CCDs, and a custom-designed light source consisting of a 24-LED ring light provides broadband illumination over the 400 nm - 700 nm wavelength region. The theory developed leads to an image processing algorithm, the output of which scales linearly with increasing red blood cell (RBC) concentration. Processed images are displayed online in real time at a rate of 25 frames s^-1 with a frame size of 256 x 256 pixels, limited only by computer RAM and processing speed. General demonstrations of the technique in vivo display several advantages over similar technology.

  4. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
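
    A hedged, simplified sketch of the high-frequency fusion rule described above (selection only, without the weighted-average branch): at each pixel, keep the IMF coefficient whose local windowed energy is larger. The IMFs would come from the proposed BEMD, and the window size is illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_high_frequency(imf_a, imf_b, window=7):
            """Keep, per pixel, the IMF coefficient with the larger local area energy."""
            energy_a = uniform_filter(imf_a ** 2, size=window)
            energy_b = uniform_filter(imf_b ** 2, size=window)
            return np.where(energy_a >= energy_b, imf_a, imf_b)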

  5. Selection and quality assessment of Landsat data for the North American forest dynamics forest history maps of the US

    USGS Publications Warehouse

    Schleeweis, Karen; Goward, Samuel N.; Huang, Chengquan; Dwyer, John L.; Dungan, Jennifer L.; Lindsey, Mary A.; Michaelis, Andrew; Rishmawi, Khaldoun; Masek, Jeffery G.

    2016-01-01

    Using the NASA Earth Exchange platform, the North American Forest Dynamics (NAFD) project mapped forest history wall-to-wall, annually, for the contiguous US (1986–2010) using the Vegetation Change Tracker algorithm. As with any effort to identify real changes in remotely sensed time-series, data gaps, shifts in seasonality, misregistration, inconsistent radiometry and cloud contamination can be sources of error. We discuss the NAFD image selection and processing stream (NISPS) that was designed to minimize these sources of error. The NISPS image quality assessments highlighted issues with the Landsat archive and metadata, including inadequate georegistration, unreliability of the pre-2009 L5 cloud cover assessment algorithm, missing growing-season imagery and a paucity of clear views. Assessment maps of Landsat 5–7 image quantities and qualities are presented that offer novel perspectives on the growing-season archive considered for this study. More than 150,000 Landsat images were considered for the NAFD project. Optimally, one high-quality cloud-free image in each year, or a total of 12,152 images, would be used. However, to accommodate data gaps and cloud/shadow contamination, 23,338 images were needed. In 220 specific path-row image years no acceptable images were found, resulting in data gaps in the annual national map products.

  6. Patient-controlled sharing of medical imaging data across unaffiliated healthcare organizations

    PubMed Central

    Ahn, David K; Unde, Bhagyashree; Gage, H Donald; Carr, J Jeffrey

    2013-01-01

    Background Current image sharing is carried out by manual transportation of CDs by patients or organization-coordinated sharing networks. The former places a significant burden on patients and providers. The latter faces challenges to patient privacy. Objective To allow healthcare providers efficient access to medical imaging data acquired at other unaffiliated healthcare facilities while ensuring strong protection of patient privacy and minimizing burden on patients, providers, and the information technology infrastructure. Methods An image sharing framework is described that involves patients as an integral part of, and with full control of, the image sharing process. Central to this framework is the Patient Controlled Access-key REgistry (PCARE) which manages the access keys issued by image source facilities. When digitally signed by patients, the access keys are used by any requesting facility to retrieve the associated imaging data from the source facility. A centralized patient portal, called a PCARE patient control portal, allows patients to manage all the access keys in PCARE. Results A prototype of the PCARE framework has been developed by extending open-source technology. The results for feasibility, performance, and user assessments are encouraging and demonstrate the benefits of patient-controlled image sharing. Discussion The PCARE framework is effective in many important clinical cases of image sharing and can be used to integrate organization-coordinated sharing networks. The same framework can also be used to realize a longitudinal virtual electronic health record. Conclusion The PCARE framework allows prior imaging data to be shared among unaffiliated healthcare facilities while protecting patient privacy with minimal burden on patients, providers, and infrastructure. A prototype has been implemented to demonstrate the feasibility and benefits of this approach. PMID:22886546

  7. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units.

  8. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units.
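
    For orientation, the sketch below runs an ordinary parallel-beam filtered backprojection with scikit-image on the Shepp-Logan phantom, to show the class of algorithm being extended; the paper's actual contribution is a fan-beam/cone-beam FBP for a truncated triple-source geometry, which is not reproduced here.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        phantom = shepp_logan_phantom()
        theta = np.linspace(0.0, 180.0, max(phantom.shape), endpoint=False)
        sinogram = radon(phantom, theta=theta)           # simulated parallel-beam projections
        reconstruction = iradon(sinogram, theta=theta)   # ramp-filtered backprojection
        print(np.abs(reconstruction - phantom).mean())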

  9. Image processing in biodosimetry: A proposal of a generic free software platform.

    PubMed

    Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir

    2015-08-01

    The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, the microscopic analysis of human metaphase chromosomes, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. This method is time consuming, and its application in biological dosimetry would be almost impossible in the case of large-scale radiation incidents. In this project, generic software was enhanced for automatic chromosome image processing, building on a framework originally developed for the European Union Framework V project Simbio for applications in the area of source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic segmentation strategies for chromosomes in microscopic images.
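
    A generic, hedged starting point for the automatic segmentation mentioned above (not the Simbio-derived platform itself) is Otsu thresholding plus connected-component labelling, assuming dark chromosomes on a bright background; the function and parameter names below are illustrative.

        from skimage.filters import threshold_otsu
        from skimage.measure import label
        from skimage.morphology import remove_small_objects

        def segment_chromosomes(gray_image, min_area=50):
            """Label candidate chromosomes (dark objects) in a metaphase image."""
            mask = gray_image < threshold_otsu(gray_image)       # dark objects on bright field
            mask = remove_small_objects(mask, min_size=min_area) # drop small debris
            labels = label(mask)
            return labels, labels.max()                          # label image and object count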

  10. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution. 14 figs.

  11. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution.

  12. Method of fabricating an imaging X-ray spectrometer

    NASA Technical Reports Server (NTRS)

    Alcorn, G. E. (Inventor); Burgess, A. S. (Inventor)

    1986-01-01

    A process for fabricating an X-ray spectrometer having imaging and energy resolution of X-ray sources is discussed. The spectrometer has an array of adjoining rectangularly shaped detector cells formed in a silicon body. The walls of the cells are created by laser drilling holes completely through the silicon body and diffusing n+ phosphorus doping material therethrough. A thermally migrated aluminum electrode is formed centrally through each of the cells.

  13. Viewing Welds By Computer Tomography

    NASA Technical Reports Server (NTRS)

    Pascua, Antonio G.; Roy, Jagatjit

    1990-01-01

    Computer tomography system used to inspect welds for root penetration. Source illuminates rotating welded part with fan-shaped beam of x rays or gamma rays. Detectors in circular array on opposite side of part intercept beam and convert it into electrical signals. Computer processes signals into image of cross section of weld. Image displayed on video monitor. System offers only nondestructive way to check penetration from outside when inner surfaces inaccessible.

  14. Enhancing the Utility of the Rapid Environmental Assessment Database Through the Use of In Situ and Modelled Data Sets During Q316

    DTIC Science & Technology

    2008-11-01

    17ºC; red: 17-18ºC. Although the image produced in Figure 9 is useful, the image itself is not the most important aspect of the process . Two...climatology for the Scotian Shelf. The database is intended for use while ashore and also while at-sea. Trial Q316 was the maiden voyage of the database...to the process of data transfer from external sources to the database, and also how the database can be restructured to be more accommodating of

  15. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    NASA Astrophysics Data System (ADS)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will help produce atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for Eumetsat data that aims to be completely open source and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.

  16. SET: a pupil detection method using sinusoidal approximation

    PubMed Central

    Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili

    2015-01-01

    Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process their eye-image output. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
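
    One hedged reading of the sinusoidal-approximation idea (the SET toolkit itself is MATLAB, and the sketch below is not its code): threshold the dark pupil, take the outermost pupil pixel in each angular bin as the boundary, and fit the boundary radius with a low-order sum of sinusoids by least squares. The threshold, bin count and number of harmonics are illustrative assumptions.

      # Illustrative sketch of sinusoidal approximation of a pupil boundary (not the SET toolkit).
      import numpy as np

      def pupil_by_sinusoidal_fit(eye, thresh=40, n_bins=72, n_harmonics=2):
          ys, xs = np.nonzero(eye < thresh)            # dark pixels assumed to belong to the pupil
          cy, cx = ys.mean(), xs.mean()                # rough pupil centre
          ang = np.arctan2(ys - cy, xs - cx)           # polar angle of each pupil pixel
          rad = np.hypot(ys - cy, xs - cx)             # radius of each pupil pixel
          # Approximate the boundary radius in each angular bin by the outermost pupil pixel.
          bins = np.linspace(-np.pi, np.pi, n_bins + 1)
          idx = np.digitize(ang, bins) - 1
          centres, radii = [], []
          for b in range(n_bins):
              sel = idx == b
              if sel.any():
                  centres.append(0.5 * (bins[b] + bins[b + 1]))
                  radii.append(rad[sel].max())
          centres, radii = np.array(centres), np.array(radii)
          # Least-squares fit of radius(angle) with a constant plus low-order sinusoids.
          cols = [np.ones_like(centres)]
          for k in range(1, n_harmonics + 1):
              cols += [np.cos(k * centres), np.sin(k * centres)]
          coef, *_ = np.linalg.lstsq(np.column_stack(cols), radii, rcond=None)
          return (cx, cy), coef

      # Usage with a synthetic dark disc standing in for a pupil:
      eye = np.full((120, 160), 200, dtype=np.uint8)
      yy, xx = np.ogrid[:120, :160]
      eye[(yy - 60) ** 2 + (xx - 80) ** 2 < 30 ** 2] = 10
      centre, coef = pupil_by_sinusoidal_fit(eye)
      print("centre:", centre, "mean radius:", round(coef[0], 1))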

  17. Electrical Neuroimaging of Music Processing Reveals Mid-Latency Changes with Level of Musical Expertise

    PubMed Central

    James, Clara E.; Oechslin, Mathias S.; Michel, Christoph M.; De Pretto, Michael

    2017-01-01

    This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300–500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior. PMID:29163017

  18. Electrical Neuroimaging of Music Processing Reveals Mid-Latency Changes with Level of Musical Expertise.

    PubMed

    James, Clara E; Oechslin, Mathias S; Michel, Christoph M; De Pretto, Michael

    2017-01-01

    This original research focused on the effect of musical training intensity on cerebral and behavioral processing of complex music using high-density event-related potential (ERP) approaches. Recently we have been able to show progressive changes with training in gray and white matter, and higher order brain functioning using (f)MRI [(functional) Magnetic Resonance Imaging], as well as changes in musical and general cognitive functioning. The current study investigated the same population of non-musicians, amateur pianists and expert pianists using spatio-temporal ERP analysis, by means of microstate analysis, and ERP source imaging. The stimuli consisted of complex musical compositions containing three levels of transgression of musical syntax at closure that participants appraised. ERP waveforms, microstates and underlying brain sources revealed gradual differences according to musical expertise in a 300-500 ms window after the onset of the terminal chords of the pieces. Within this time-window, processing seemed to concern context-based memory updating, indicated by a P3b-like component or microstate for which underlying sources were localized in the right middle temporal gyrus, anterior cingulate and right parahippocampal areas. Given that the 3 expertise groups were carefully matched for demographic factors, these results provide evidence of the progressive impact of training on brain and behavior.

  19. The connectome mapper: an open-source processing pipeline to map connectomes with MRI.

    PubMed

    Daducci, Alessandro; Gerhard, Stephan; Griffa, Alessandra; Lemkaddem, Alia; Cammoun, Leila; Gigandet, Xavier; Meuli, Reto; Hagmann, Patric; Thiran, Jean-Philippe

    2012-01-01

    Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.

  20. Research for Key Techniques of Geophysical Recognition System of Hydrocarbon-induced Magnetic Anomalies Based on Hydrocarbon Seepage Theory

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Hao, T.; Zhao, B.

    2009-12-01

    Hydrocarbon seepage effects can cause magnetic alteration zones near the surface, and the magnetic anomalies induced by the alteration zones can thus be used to locate oil-gas potential regions. In order to reduce the inaccuracy and ambiguity of hydrocarbon anomalies recognized from magnetic data alone, and to meet the requirement of integrated management and synthetic analysis of multi-source geoscientific data, it is necessary to construct a recognition system that integrates the functions of data management, real-time processing, synthetic evaluation, and geologic mapping. In this paper, research on the key techniques of the system is discussed. Image processing methods can be applied to potential-field images to facilitate visual interpretation and geological understanding. For gravity or magnetic images, anomalies with identical frequency-domain characteristics but different spatial distributions will differ in texture and the relevant textural statistics. Texture is a description of the structural arrangement and spatial variation of a dataset or an image, and has been applied in many research fields. Textural analysis is a procedure that extracts textural features by image processing methods and thus obtains a quantitative or qualitative description of texture. When two kinds of anomalies have no distinct difference in amplitude or overlap in frequency spectrum, they may still be distinguishable by their texture, which can be considered as textural contrast. Therefore, for the recognition system we propose a new “magnetic spots” recognition method based on image processing techniques. The method can be divided into three major steps: first, separate local anomalies caused by shallow, relatively small sources from the total magnetic field, and pre-process the local magnetic anomaly data by image processing methods such that magnetic anomalies can be expressed as points, lines and polygons with spatial correlation, which includes histogram-equalization based image display, object recognition and extraction; second, mine the spatial characteristics and correlations of the magnetic anomalies using textural statistics and analysis, and study the features of known anomalous objects (closures, hydrocarbon-bearing structures, igneous rocks, etc.) in the same research area; finally, classify the anomalies, cluster them according to their similarity, and predict hydrocarbon-induced “magnetic spots” in combination with geologic, drilling and rock core data. The system uses ArcGIS as the secondary development platform, inherits the basic functions of ArcGIS, and develops two main special functional modules: one for conventional potential-field data processing methods and one for feature extraction and enhancement based on image processing and analysis techniques. The system can be applied to realize the geophysical detection and recognition of near-surface hydrocarbon seepage anomalies, provide technical support for locating oil-gas potential regions, and help geophysical data processing and interpretation advance more efficiently.
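
    The abstract does not specify which textural statistics are used; as a hedged sketch, grey-level co-occurrence matrix (GLCM) statistics computed with scikit-image on a gridded local magnetic anomaly (rescaled to 8-bit grey levels) illustrate the kind of features such an analysis could rely on. The random grid and the chosen properties below are assumptions, not the paper's settings.

      # Hedged sketch: GLCM texture statistics of a gridded magnetic-anomaly map.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # spelled 'greycomatrix' in older scikit-image

      anomaly = np.random.default_rng(0).normal(size=(256, 256))           # stand-in for a local magnetic grid
      img8 = np.uint8(255 * (anomaly - anomaly.min()) / np.ptp(anomaly))   # rescale to 8-bit grey levels

      glcm = graycomatrix(img8, distances=[1, 2], angles=[0, np.pi / 2],
                          levels=256, symmetric=True, normed=True)
      for prop in ("contrast", "homogeneity", "energy"):
          # illustrative texture descriptors; averaged over the chosen distances and angles
          print(prop, graycoprops(glcm, prop).mean())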

  1. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping

    PubMed Central

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-01-01

    High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input data and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment that contained 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy, up to 0.98 and 0.95, respectively. In summary, IAP provides a comprehensive set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818

  2. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  3. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.

  4. On the use of a laser ablation as a laboratory seismic source

    NASA Astrophysics Data System (ADS)

    Shen, Chengyi; Brito, Daniel; Diaz, Julien; Zhang, Deyuan; Poydenot, Valier; Bordes, Clarisse; Garambois, Stéphane

    2017-04-01

    Mimicking near-surface seismic imaging under well-controlled laboratory conditions is potentially a powerful tool for studying large-scale wave propagation in geological media by means of upscaling. Laboratory measurements are indeed particularly suited for testing theoretical models and for comparisons with numerical approaches. We have developed an automated Laser Doppler Vibrometer (LDV) platform, which is able to detect and register broadband nano-scale displacements on the surface of various materials. This laboratory equipment has already been validated in experiments where piezoelectric transducers were used as seismic sources. We are currently exploring a new seismic source in our experiments, laser ablation, in order to compensate for some drawbacks encountered with piezoelectric sources. The laser ablation source has been considered an interesting ultrasound wave generator since the 1960s, with numerous potential applications such as Non-Destructive Testing (NDT) and the measurement of velocities and attenuations in solid samples. We aim at adapting and developing this technique for geophysical experimental investigations in order to produce and explore complete micro-seismic data sets in the laboratory. We will first present the laser characteristics, including its mechanism, stability and reproducibility, and will evaluate in particular the directivity patterns of such a seismic source. We have started by applying the laser ablation source to the surfaces of multi-scale homogeneous aluminum samples and are now testing it on heterogeneous and fractured limestone cores. Further data processing results will also be shown, especially the 2D-slice VP and VS tomographic images obtained in limestone samples. Apart from the experimental records, numerical simulations will be carried out for both the laser source modelling and the wave propagation in different media. First attempts will be made to compare the experimental data quantitatively with simulations. Meanwhile, CT-scan X-ray images of these limestone cores will be used to check the relative pertinence of the velocity tomography images produced by this newly developed laser ablation seismic source.

  5. Instantaneous Shoreline Extraction Utilizing Integrated Spectrum and Shadow Analysis From LiDAR Data and High-resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Lee, I.-Chieh

    Shoreline delineation and shoreline change detection are expensive processes in terms of data source acquisition and manual shoreline delineation. These costs confine the frequency and interval of shoreline mapping. In this dissertation, a new shoreline delineation approach was developed aimed at lowering the data source cost and reducing human labor. To lower the cost of data sources, we used public domain LiDAR data sets and satellite images to delineate shorelines without requiring the data sets to be acquired simultaneously, which is a new concept in this field. To reduce the labor cost, we made improvements in classifying LiDAR points and satellite images. Analyzing shadow relations with topography to improve satellite image classification performance is also a brand-new concept. The extracted shoreline of the proposed approach achieved an accuracy of 1.495 m RMSE, or 4.452 m at the 95% confidence level. Consequently, the proposed approach could successfully lower the cost and shorten the processing time, in other words, increase the shoreline mapping frequency with a reasonable accuracy. However, the extracted shoreline may not compete with shorelines extracted by aerial photogrammetric procedures in terms of accuracy; hence, this is a trade-off between cost and accuracy. This approach consists of three phases: first, a shoreline extraction procedure based mainly on LiDAR point cloud data with multispectral information from satellite images; second, an object-oriented shoreline extraction procedure to delineate the shoreline solely from satellite images, in this case WorldView-2 images; third, a shoreline integration procedure combining these two shorelines based on actual shoreline changes and physical terrain properties. The actual data source cost comes only from the acquisition of satellite images. On the other hand, only two processes needed human attention. First, the shoreline within harbor areas needed to be manually connected, as its length was less than 3% of the total shoreline length in our dataset. Second, the parameters for satellite image classification needed to be manually determined. The need for manpower was significantly lower than for ground surveying or aerial photogrammetry. The first phase of shoreline extraction utilized the Normalized Difference Vegetation Index (NDVI) and Mean-Shift segmentation on the coordinates (X, Y, Z) and attributes (multispectral bands from satellite images) of the LiDAR points to classify each LiDAR point as land or water surface. The boundary of the land points was then traced to create the shoreline. The second phase of shoreline extraction, solely from satellite images, utilized spectrum, NDVI, and shadow analysis to classify the satellite images into classes. These classes were then refined by mean-shift segmentation on the panchromatic band. By tracing the boundary of the water surface, the shoreline can be created. Since these two shorelines may represent different instances of the shoreline in time, evaluating the changes of the shoreline was done first. Then an independent scenario analysis and procedure were performed for the shoreline under each of three conditions: in the process of erosion, in the process of accretion, and remaining unchanged. With these three conditions, we could analyze the actual terrain type and correct the classification errors to obtain a more accurate shoreline. Meanwhile, methods of evaluating the quality of shorelines were also discussed. The experiment showed that three indicators best represent the quality of the shoreline: (1) shoreline accuracy, (2) the land area difference between the extracted shoreline and the ground truth shoreline, and (3) the bias factor from shoreline quality metrics.
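
    As a hedged sketch of the NDVI step used in the first-phase land/water classification (the reflectance values and the threshold below are illustrative, not the dissertation's calibrated settings):

      # NDVI sketch: high NDVI suggests vegetated land, low or negative NDVI suggests water or bare surface.
      import numpy as np

      def ndvi(nir, red, eps=1e-9):
          return (nir - red) / (nir + red + eps)

      red = np.array([0.10, 0.08, 0.30, 0.25])   # toy per-point red reflectance
      nir = np.array([0.12, 0.07, 0.60, 0.55])   # toy per-point near-infrared reflectance
      v = ndvi(nir, red)
      is_land = v > 0.1                          # illustrative threshold, not the calibrated value
      print(np.round(v, 2), is_land)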

  6. Self-calibration for lensless color microscopy.

    PubMed

    Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry

    2017-05-01

    Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.

  7. A Comparison of Earthquake Back-Projection Imaging Methods for Dense Local Arrays, and Application to the 2011 Virginia Aftershock Sequence

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.

    2016-12-01

    Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal-to-noise ratios, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
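
    As a hedged sketch of one of the compared pre-processing options, a windowed STA/LTA characteristic function can be computed from the cumulative signal energy; the window lengths and the synthetic trace below are illustrative, not the settings used for the AIDA data.

      # Hedged sketch of a short-term/long-term average (STA/LTA) characteristic function.
      import numpy as np

      def sta_lta(trace, n_sta, n_lta, eps=1e-12):
          energy = trace.astype(float) ** 2
          csum = np.concatenate(([0.0], np.cumsum(energy)))
          sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta      # short-term average of energy
          lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta      # long-term average of energy
          n = min(len(sta), len(lta))
          return sta[-n:] / (lta[:n] + eps)                 # both windows end at the same sample; ratio spikes at impulsive arrivals

      rng = np.random.default_rng(1)
      trace = rng.normal(scale=0.1, size=2000)
      trace[1200:1260] += np.hanning(60) * 2.0              # synthetic P arrival
      ratio = sta_lta(trace, n_sta=20, n_lta=400)           # illustrative window lengths
      print("max STA/LTA:", round(ratio.max(), 1))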

  8. Method and apparatus for acoustic imaging of objects in water

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2005-01-25

    A method, system and underwater camera for acoustic imaging of objects in water or other liquids includes an acoustic source for generating an acoustic wavefront for reflecting from a target object as a reflected wavefront. The reflected acoustic wavefront deforms a screen on an acoustic side and correspondingly deforms the opposing optical side of the screen. An optical processing system is optically coupled to the optical side of the screen and converts the deformations on the optical side of the screen into an optical intensity image of the target object.

  9. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems including imaging, tracking, and point-target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors, which can be used for either imaging or point-source detection.

  10. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G. Patrick; Browne, Jolyon

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  11. Point spread functions for earthquake source imaging: An interpretation based on seismic interferometry

    USGS Publications Warehouse

    Nakahara, Hisashi; Haney, Matt

    2015-01-01

    Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.

  12. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization.

    PubMed

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
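
    As a hedged CPU sketch of the generic delay-and-sum back-projection idea behind PA image reconstruction (not the paper's GPU-optimized code; the ring geometry, sound speed, sampling rate and random signals below are assumptions):

      # Hedged sketch of delay-and-sum back-projection for photoacoustic tomography.
      import numpy as np

      def pat_backproject(sinogram, sensor_xy, grid_xy, fs, c=1500.0):
          """sinogram: (n_sensors, n_samples); sensor_xy, grid_xy: (n, 2) positions in metres."""
          n_sensors, n_samples = sinogram.shape
          image = np.zeros(len(grid_xy))
          for s in range(n_sensors):
              dist = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)          # pixel-to-sensor distance
              idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
              image += sinogram[s, idx]                                      # delay and sum
          return image / n_sensors

      # Tiny usage example: 64 sensors on a 10 mm-radius ring, 50x50 pixel grid, stand-in signals.
      fs = 40e6
      angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
      sensors = 0.01 * np.column_stack([np.cos(angles), np.sin(angles)])
      gx, gy = np.meshgrid(np.linspace(-5e-3, 5e-3, 50), np.linspace(-5e-3, 5e-3, 50))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      sino = np.random.default_rng(2).normal(size=(64, 1024))
      img = pat_backproject(sino, sensors, grid, fs).reshape(50, 50)
      print(img.shape)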

  13. Streamlined, Inexpensive 3D Printing of the Brain and Skull

    PubMed Central

    Cash, Sydney S.

    2015-01-01

    Neuroimaging technologies such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) collect three-dimensional data (3D) that is typically viewed on two-dimensional (2D) screens. Actual 3D models, however, allow interaction with real objects such as implantable electrode grids, potentially improving patient specific neurosurgical planning and personalized clinical education. Desktop 3D printers can now produce relatively inexpensive, good quality prints. We describe our process for reliably generating life-sized 3D brain prints from MRIs and 3D skull prints from CTs. We have integrated a standardized, primarily open-source process for 3D printing brains and skulls. We describe how to convert clinical neuroimaging Digital Imaging and Communications in Medicine (DICOM) images to stereolithography (STL) files, a common 3D object file format that can be sent to 3D printing services. We additionally share how to convert these STL files to machine instruction gcode files, for reliable in-house printing on desktop, open-source 3D printers. We have successfully printed over 19 patient brain hemispheres from 7 patients on two different open-source desktop 3D printers. Each brain hemisphere costs approximately $3–4 in consumable plastic filament as described, and the total process takes 14–17 hours, almost all of which is unsupervised (preprocessing = 4–6 hr; printing = 9–11 hr, post-processing = <30 min). Printing a matching portion of a skull costs $1–5 in consumable plastic filament and takes less than 14 hr, in total. We have developed a streamlined, cost-effective process for 3D printing brain and skull models. We surveyed healthcare providers and patients who confirmed that rapid-prototype patient specific 3D models may help interdisciplinary surgical planning and patient education. The methods we describe can be applied for other clinical, research, and educational purposes. PMID:26295459
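
    As a hedged sketch of one possible DICOM-to-STL route using pydicom, scikit-image marching cubes and numpy-stl (the published workflow's brain-surface extraction and STL-to-gcode slicing steps are not reproduced; the directory name and the intensity threshold are assumptions):

      # Hedged sketch: CT DICOM series -> surface mesh -> STL file for a desktop 3D printer's slicer.
      import glob
      import numpy as np
      import pydicom
      from skimage import measure
      from stl import mesh  # numpy-stl

      slices = [pydicom.dcmread(p) for p in sorted(glob.glob("ct_series/*.dcm"))]   # hypothetical directory
      slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))                   # order by slice position
      volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

      # Threshold chosen for bone-like intensities in CT; brain extraction from MRI would
      # require dedicated tools rather than a single threshold.
      verts, faces, _, _ = measure.marching_cubes(volume, level=300)

      surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
      surface.vectors[:] = verts[faces]          # (n_faces, 3, 3) triangle vertices
      surface.save("skull.stl")                  # file name is illustrative; send the STL to a slicer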

  14. Method for Assessment of Changes in the Width of Cracks in Cement Composites with Use of Computer Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław

    2017-06-01

    Crack width measurement is an important element of research on the progress of self-healing cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subjected to initial image processing in the ImageJ development environment and to further processing and analysis of the results. After registering a series of images of the cracks taken at different times using the SIFT (Scale-Invariant Feature Transform) method, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for determining crack width. The distribution and rotation of the intersection lines in a regular layout, the automation of transformations, the management of images and brightness profiles, and the data analysis to determine the width of cracks and their changes over time are carried out automatically by our own code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the ability to reduce a sample crack's width and achieve its full closure within 28 days of the self-healing process.
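
    As a hedged Python sketch of a width estimate from a single brightness profile (the authors' own implementation is ImageJ/VBA code; the half-depth threshold rule and the synthetic profile below are illustrative assumptions):

      # Estimate crack width from one brightness profile extracted across a crack.
      import numpy as np

      def crack_width_from_profile(profile, px_size_mm):
          profile = np.asarray(profile, dtype=float)
          background = np.median(profile)                 # local background brightness
          depth = background - profile.min()              # the crack appears as a dark dip
          threshold = background - 0.5 * depth            # count pixels darker than half the dip depth (illustrative rule)
          return np.count_nonzero(profile < threshold) * px_size_mm

      px_size_mm = 25.4 / 6400                            # pixel size at the scanner's 6400 dpi optical resolution
      profile = [200, 201, 199, 150, 60, 55, 70, 160, 198, 200]   # synthetic profile across a crack
      print(round(crack_width_from_profile(profile, px_size_mm), 4), "mm")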

  15. Brainstorm: A User-Friendly Application for MEG/EEG Analysis

    PubMed Central

    Tadel, François; Baillet, Sylvain; Mosher, John C.; Pantazis, Dimitrios; Leahy, Richard M.

    2011-01-01

    Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI). PMID:21584256

  16. Development of high definition OCT system for clinical therapy of skin diseases

    NASA Astrophysics Data System (ADS)

    Baek, Daeyul; Seo, Young-Seok; Kim, Jung-Hyun

    2018-02-01

    OCT is a non-invasive imaging technique that can be applied to diagnose various skin diseases. Since its introduction in 1997, OCT technology has been used in dermatology to obtain high-quality images of human skin. To diagnose skin diseases accurately, it is essential to develop OCT equipment that can acquire high-quality images. We therefore developed a system that obtains high-quality, high-resolution images by using a 1300 nm light source with a wide bandwidth and deep penetration depth, together with a camera capable of high-sensitivity, high-speed processing. We introduce the performance of the developed system and clinical application data.

  17. Resolving human object recognition in space and time

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2014-01-01

    A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated with single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044

  18. The 'PhenoBox', a flexible, automated, open-source plant phenotyping solution.

    PubMed

    Czedik-Eysenberg, Angelika; Seitner, Sebastian; Güldener, Ulrich; Koemeda, Stefanie; Jez, Jakub; Colombini, Martin; Djamei, Armin

    2018-04-05

    There is a need for flexible and affordable plant phenotyping solutions for basic research and plant breeding. We demonstrate our open source plant imaging and processing solution ('PhenoBox'/'PhenoPipe') and provide construction plans, source code and documentation to rebuild the system. Use of the PhenoBox is exemplified by studying infection of the model grass Brachypodium distachyon by the head smut fungus Ustilago bromivora, comparing phenotypic responses of maize to infection with a solopathogenic Ustilago maydis (corn smut) strain and effector deletion strains, and studying the salt stress response in Nicotiana benthamiana. In U. bromivora-infected grass, phenotypic differences between infected and uninfected plants were detectable weeks before qualitative head smut symptoms appeared. Based on this, we could predict the infection outcome for individual plants with high accuracy. Using a PhenoPipe module for the calculation of multi-dimensional distances from phenotyping data, we observe a time-after-infection-dependent impact of U. maydis effector deletion strains on the phenotypic response in maize. The PhenoBox/PhenoPipe system is able to detect established salt stress responses in N. benthamiana. We have developed an affordable, automated, open source imaging and data processing solution that can be adapted to various phenotyping applications in plant biology and beyond. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.

  19. Intravascular atherosclerotic imaging with combined fluorescence and optical coherence tomography probe based on a double-clad fiber combiner

    NASA Astrophysics Data System (ADS)

    Liang, Shanshan; Saidi, Arya; Jing, Joe; Liu, Gangjun; Li, Jiawen; Zhang, Jun; Sun, Changsen; Narula, Jagat; Chen, Zhongping

    2012-07-01

    We developed a multimodality fluorescence and optical coherence tomography probe based on a double-clad fiber (DCF) combiner. The probe is composed of a DCF combiner, a GRIN lens, and a micromotor at the distal end. An integrated swept-source optical coherence tomography and fluorescence intensity imaging system was developed based on the combined probe for the early diagnosis of atherosclerosis. This system is capable of real-time data acquisition and processing as well as image display. For fluorescence imaging, the inflammation of atherosclerosis and the necrotic core were imaged using annexin V-conjugated Cy5.5. Ex vivo imaging of New Zealand white rabbit arteries demonstrated the capability of the combined system.

  20. Earth mapping - aerial or satellite imagery comparative analysis

    NASA Astrophysics Data System (ADS)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

    Nowadays, revising existing map products and creating new maps require a choice of the land cover image source. The issue of the effectiveness and cost of using aerial mapping systems versus the efficiency and cost of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is to make a comparative analysis of the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the map information source - airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area that equals approximately one satellite scene and an area that equals approximately the territory of Bulgaria.

  1. MITK global tractography

    NASA Astrophysics Data System (ADS)

    Neher, Peter F.; Stieltjes, Bram; Reisert, Marco; Reicht, Ignaz; Meinzer, Hans-Peter; Fritzsche, Klaus H.

    2012-02-01

    Fiber tracking algorithms yield valuable information for neurosurgery as well as automated diagnostic approaches. However, they have not yet arrived in daily clinical practice. In this paper we present an open source integration of the global tractography algorithm proposed by Reisert et al. [1] into the open source Medical Imaging Interaction Toolkit (MITK) developed and maintained by the Division of Medical and Biological Informatics at the German Cancer Research Center (DKFZ). The integration of this algorithm into a standardized and open development environment like MITK improves the accessibility of tractography algorithms for the scientific community and is an important step towards bringing neuronal tractography closer to clinical application. The MITK diffusion imaging application, downloadable from www.mitk.org, combines all the steps necessary for a successful tractography: preprocessing, reconstruction of the images, the actual tracking, live monitoring of intermediate results, postprocessing and visualization of the final tracking results. This paper presents typical tracking results and demonstrates the steps for pre- and post-processing of the images.

  2. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    NASA Astrophysics Data System (ADS)

    Henrique Silva, Sérgio; Zanchetta do Nascimento, Marcelo; Alves Neves, Leandro; Ramos Batista, Valério

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common cancers worldwide, accounting for three to four percent of all malignant neoplasms diagnosed in the world. Our work presents a study of some filters devoted to enhancing images of lymphoma at the pre-processing step. Here the enhancement is useful for removing noise from the digital images. We have analysed noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. We have therefore concluded that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because in this case the resulting image receives the best enhancement.
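
    As a hedged sketch of the filter comparison in the HSV colour model using OpenCV and scikit-image (filter parameters and the input file name are illustrative, not the study's settings):

      # Apply Gaussian, median and mean-shift filters in HSV and score each with SSIM.
      import cv2
      from skimage.metrics import structural_similarity  # older scikit-image exposes compare_ssim / multichannel=True

      bgr = cv2.imread("lymphoma_tile.png")                      # hypothetical histology tile
      hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

      filtered = {
          "gaussian": cv2.GaussianBlur(hsv, (5, 5), 0),
          "median": cv2.medianBlur(hsv, 5),
          "mean_shift": cv2.pyrMeanShiftFiltering(hsv, sp=10, sr=20),
      }
      for name, out in filtered.items():
          score = structural_similarity(hsv, out, channel_axis=2)  # similarity to the unfiltered image
          print(name, round(score, 3))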

  3. New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline

    NASA Astrophysics Data System (ADS)

    Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.

    2013-02-01

    We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
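
    As a hedged sketch of the trimmed-mean combination step described for ACORNS-ADI (not the package's own implementation; the 10% trim fraction and the synthetic frames are illustrative):

      # Combine a registered image sequence with a trimmed mean to suppress outliers.
      import numpy as np
      from scipy.stats import trim_mean

      rng = np.random.default_rng(3)
      frames = rng.normal(loc=100.0, scale=5.0, size=(50, 64, 64))   # stand-in registered frames
      frames[7, 20:30, 20:30] += 200.0                                # a cosmic-ray-like outlier

      plain = frames.mean(axis=0)
      trimmed = trim_mean(frames, proportiontocut=0.1, axis=0)        # drop brightest/faintest 10% per pixel
      print("outlier residual, plain mean:  ", round(plain[25, 25] - 100, 2))
      print("outlier residual, trimmed mean:", round(trimmed[25, 25] - 100, 2))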

  4. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender and background are discussed in this paper. The analysis of rice seed reflectance curves showed that the light-source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. The machine vision system with a 20mm to 25mm lens extender produces close-up images, which makes it easier to recognize characteristics in hybrid rice seeds. A white background was shown to be better than a black background for inspecting rice seeds infected by disease and for using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under the optimized conditions for quality inspection of rice seed. Specifically, the image processing can capture details such as fine fissures with the machine vision system.

  5. Using hyperspectral imaging technology to identify diseased tomato leaves

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Zhao, Xueguan; Meng, Zhijun; Zou, Wei

    2016-11-01

    During tomato plant growth, genetic factors, poor environmental conditions, or damage by parasites can generate a series of abnormal symptoms in the plants' physiology, tissue structure and external form; as a result, the plants cannot grow normally, which in turn reduces tomato yield and economic benefit. A hyperspectral image usually has high spectral resolution and contains not only spectral information but also image information, so this study adopted hyperspectral imaging technology to identify diseased tomato leaves, and developed a simple hyperspectral imaging system including a halogen-lamp light source unit, a hyperspectral image acquisition unit and a data processing unit. The spectrometer detection wavelength ranged from 400nm to 1000nm. After the hyperspectral images of tomato leaves were captured, they needed to be calibrated. This research used a spectral angle matching method and a spectral red-edge parameter discriminant method, respectively, to identify diseased tomato leaves. The spectral red-edge parameter discriminant method produced higher recognition accuracy, above 90%. The results show that using hyperspectral imaging technology to identify diseased tomato leaves is feasible and provides a discriminant basis for subsequent disease control of tomato plants.
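
    As a hedged sketch of the spectral angle matching step (the toy spectra and the decision threshold are illustrative, not the study's calibrated values):

      # Spectral angle between a pixel spectrum and a healthy-leaf reference spectrum.
      import numpy as np

      def spectral_angle(pixel, reference):
          cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos, -1.0, 1.0))        # radians; a small angle means similar spectra

      healthy_ref = np.array([0.05, 0.08, 0.12, 0.45, 0.50, 0.52])   # toy 6-band reference
      pixel = np.array([0.06, 0.10, 0.20, 0.30, 0.33, 0.35])          # toy candidate pixel
      angle = spectral_angle(pixel, healthy_ref)
      print(round(angle, 3), "rad ->", "diseased" if angle > 0.15 else "healthy")   # illustrative threshold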

  6. Watermarking and copyright labeling of printed images

    NASA Astrophysics Data System (ADS)

    Hel-Or, Hagit Z.

    2001-07-01

    Digital watermarking is a labeling technique for digital images which embeds a code into the digital data so the data are marked. Watermarking techniques previously developed deal with on-line digital data. These techniques have been developed to withstand digital attacks such as image processing, image compression and geometric transformations. However, one must also consider the readily available attack of printing and scanning. The available watermarking techniques are not reliable under printing and scanning. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes, bank checks, etc. and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of the date and type of transaction or the date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and robust under reconstruction errors. The decoding algorithm is automatic given the watermarked image.

  7. Image counter-forensics based on feature injection

    NASA Astrophysics Data System (ADS)

    Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.

    2014-02-01

    Starting from the concept that many image forensic tools are based on the detection of some features revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history as an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain producing an attacked image x̃, perceptually similar to x, whose feature f(x̃) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) = ‖f(z) − f(y)‖ through iterative methods based on gradient descent. To overcome the intrinsic limit due to the numerical estimation of the gradient on large images, we propose a feature decomposition process that allows the problem to be reduced to many subproblems on the blocks into which the image is partitioned. The proposed strategy has been tested by attacking three different features, and its performance has been compared to state-of-the-art counter-forensic methods.
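
    As a hedged sketch of the attack's core loop with a deliberately simple, analytically differentiable feature (mean and standard deviation); the paper targets richer forensic features, works block-wise, and estimates gradients numerically, so everything below is illustrative rather than the authors' method.

      # Gradient descent on ||f(z) - f(y)||^2 + lam * ||z - x||^2 for a toy feature f = (mean, std).
      import numpy as np

      def f(z):
          return np.array([z.mean(), z.std()])

      def grad_feature_distance(z, fy):
          m, s = z.mean(), z.std()
          d = np.array([m, s]) - fy                              # feature residual
          g_mean = np.full_like(z, 1.0 / z.size)                 # d(mean)/dz
          g_std = (z - m) / (z.size * (s + 1e-9))                # d(std)/dz
          return 2.0 * (d[0] * g_mean + d[1] * g_std)

      rng = np.random.default_rng(4)
      x = rng.uniform(60, 90, size=(32, 32))                     # source image (toy)
      y = rng.uniform(100, 160, size=(32, 32))                   # target "authentic" image (toy)
      fy = f(y)

      z, lam, step = x.copy(), 1e-4, 200.0                       # illustrative weights and step size
      for _ in range(200):
          g = grad_feature_distance(z, fy) + 2.0 * lam * (z - x) # feature term plus closeness to x
          z -= step * g
      print("feature distance before:", round(np.linalg.norm(f(x) - fy), 2),
            "after:", round(np.linalg.norm(f(z) - fy), 2))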

  8. SU-E-I-11: A New Cone-Beam CT System for Bedside Head Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, H; Zeng, W; Xu, P

    Purpose: To design and develop a new mobile cone-beam CT (CBCT) system for head imaging with good soft-tissue visibility, to be used bedside in the ICU and neurosurgery department to monitor treatment and operation outcomes in brain patients. Methods: The imaging chain consists of a 30cmx25cm amorphous silicon flat panel detector and a pulsed, stationary-anode monoblock x-ray source of 100kVp at a maximal tube current of 10mA. The detector and source are supported on motorized mechanisms to provide detector lateral shift and source angular tilt, enabling a centered digital radiographic imaging mode and half-fan CBCT, while maximizing the use of the x-ray field and keeping the source-to-detector distance short. A focused linear anti-scatter grid is mounted on the detector, and commercial software with scatter and other corrective algorithms is used for data processing and image reconstruction. The gantry rotates around a horizontal axis and is able to adjust its height for different patient table positions. Cables are routed through a custom protective sleeve over a large bore with an in-plane twister band, facilitating a single 360-degree rotation without a slip-ring at a speed of up to 5 seconds per rotation. A UPS provides about 10 minutes of operation off the battery when unplugged. The gantry is on locking casters, whose brakes are controlled by two push handles on both sides for easy repositioning. The entire system is designed to have a light weight and a compact size for excellent maneuverability. Results: The system design is complete and the main imaging components have been tested. Initial results will be presented and discussed later in the presentation. Conclusion: A new mobile CBCT system for head imaging is being developed. With its compact size, large bore, and quality design, it is expected to be a useful imaging tool for bedside use. The work is supported by a grant from the Chinese Academy of Sciences.

  9. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach that combines a novel sparse electromagnetic source imaging (ESI) method, i.e., variation-based cortical current density (VB-SCCD), with the joint use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach; the reconstructions were consistent with other research findings and were validated by independent functional magnetic resonance imaging data acquired with the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of the localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstruction using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data can accurately probe the spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
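
    The normalization step is described only at a high level. A minimal sketch, assuming the noise level of each channel is estimated from pre-stimulus baseline segments, whitens EEG and MEG separately and stacks the unit-free data and lead fields into a single joint inverse problem:

        import numpy as np

        def whiten(data, baseline):
            """Scale each channel by its baseline noise standard deviation (unit-free measurements)."""
            noise_std = baseline.std(axis=1, keepdims=True) + 1e-12
            return data / noise_std, noise_std

        def stack_modalities(eeg, eeg_base, meg, meg_base, L_eeg, L_meg):
            """Whiten EEG and MEG separately, then stack data and lead-field matrices for a joint inverse."""
            eeg_w, eeg_std = whiten(eeg, eeg_base)
            meg_w, meg_std = whiten(meg, meg_base)
            data = np.vstack([eeg_w, meg_w])                         # (n_eeg + n_meg) x n_times
            leadfield = np.vstack([L_eeg / eeg_std, L_meg / meg_std])
            return data, leadfield

    A sparse solver (such as the VB-SCCD method used in the paper) can then operate on the stacked, noise-normalized system.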

  10. Three-dimensional displacement measurement of image point by point-diffraction interferometry

    NASA Astrophysics Data System (ADS)

    He, Xiao; Chen, Lingfeng; Meng, Xiaojie; Yu, Lei

    2018-01-01

    This paper presents a method for measuring the three-dimensional (3-D) displacement of an image point based on point-diffraction interferometry. An object point-light-source (PLS) interferes with a fixed reference PLS, and the interferograms are captured at the exit pupil. When the image point of the object PLS is slightly shifted to a new position, the wavefront of the image PLS changes, and so do its interferograms. By processing the interferograms captured before and after the movement, the wavefront difference of the image PLS can be obtained; it contains the information of the 3-D displacement of the image PLS. However, the 3-D displacement cannot be calculated until the distance between the image PLS and the exit pupil is calibrated. Therefore, we use a plane-parallel plate with known refractive index and thickness to determine this distance, based on Snell's law for small angles of incidence. Once the distance between the exit pupil and the image PLS is known, the 3-D displacement of the image PLS can be calculated from two interference measurements. Preliminary experimental results indicate that the relative error is below 0.3%. With the ability to accurately locate an image point (whether real or virtual), a fiber point-light-source can act as the reticle by itself in optical measurement.
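
    The calibration presumably rests on the standard paraxial result that a plane-parallel plate of thickness t and refractive index n shifts an image along the axis by t(1 - 1/n) for small angles of incidence; the exact calibration geometry is not given in the abstract. The snippet below simply evaluates that relation:

        def plate_axial_shift(thickness_mm, n):
            """Paraxial axial image shift introduced by a plane-parallel plate: t * (1 - 1/n)."""
            return thickness_mm * (1.0 - 1.0 / n)

        # Example: a 5 mm plate with n = 1.5168 (BK7-like glass) shifts the image by about 1.70 mm.
        print(plate_axial_shift(5.0, 1.5168))

    The known shift produced by the plate, together with the measured wavefront change, is what allows the unknown pupil-to-image distance to be determined.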

  11. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a sharper fused image, while a smaller target GS makes the fused image smoother and thus suppresses noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are corrupted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. The results also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy than a state-of-the-art algorithm.
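
    The exact likelihood term and GS definition are not given in the abstract. A minimal sketch, assuming a least-squares data term over two registered source images, gradient strength measured as the mean gradient magnitude, and a squared penalty pulling the fused image's GS toward the chosen target value:

        import numpy as np

        def gradient_strength(img):
            """Mean gradient magnitude, one plausible 'gradient strength' measure."""
            gy, gx = np.gradient(img.astype(float))
            return np.mean(np.hypot(gx, gy))

        def fusion_cost(fused, src1, src2_warped, target_gs, lam=10.0):
            """Data fidelity to both registered sources plus a GS regularizer toward target_gs."""
            data_term = np.mean((fused - src1) ** 2) + np.mean((fused - src2_warped) ** 2)
            gs_term = (gradient_strength(fused) - target_gs) ** 2
            return data_term + lam * gs_term

    In the joint scheme, the warp applied to the second source and the fused image itself are both updated while iteratively minimizing a cost of this general form.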

  12. Scan Line Difference Compression Algorithm Simulation Study.

    DTIC Science & Technology

    1985-08-01

    (Excerpt from a block diagram, Figure A-1, "Overall Data Compression Process": the SLDC encoder/decoder chain comprises an image source, conditioning, error-control encoder and decoder, and reconstruction stages; the surrounding text notes that errors introduced during signal transmission call for either low channel noise or an effective error-control channel coding subsystem.)

  13. Retinal Information Processing for Minimum Laser Lesion Detection and Cumulative Damage

    DTIC Science & Technology

    1992-09-17

    (Fragmentary excerpt associated with Myron L. Wolbarsht. Recoverable topics: the possible beneficial visual function of small retinal image movements; prior models of visual system information processing ("Visual System Models"); calibration against standard secondary sources traceable to the National Bureau of Standards; extracellular electrophysiological techniques.)

  14. Shaking the Trees: The Psychology of Collecting in U.S. Newspaper Coverage of the College Admissions Process

    ERIC Educational Resources Information Center

    Bishop, Ronald

    2009-01-01

    A frame analysis was conducted to explore themes in recent coverage by print journalists of the college application process, with special attention paid to the use by reporters of "keywords, stock phrases, stereotyped images, sources of information, and sentences that provide reinforcing clusters of facts or judgments" (Entman, p. 52) about this…

  15. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.

    PubMed

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-10-27

    Indoor positioning of mobile devices plays a key role in many aspects of our daily life, including real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still the subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by means of receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging problem by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
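
    The inverse-source processing is not detailed in the abstract. As a rough illustration of imaging-style localization from array data, the sketch below back-propagates narrowband measurements from the receiver positions onto a grid of candidate source points (a delay-and-sum style estimator, not the authors' exact algorithm); the grid point where the coherent sum peaks is taken as the source location.

        import numpy as np

        def backpropagation_image(measurements, rx_positions, grid_points, wavelength):
            """Coherently back-propagate narrowband array measurements onto candidate source points."""
            k = 2 * np.pi / wavelength
            image = np.zeros(len(grid_points))
            for i, p in enumerate(grid_points):
                dists = np.linalg.norm(rx_positions - p, axis=1)         # receiver-to-candidate distances
                steering = np.exp(1j * k * dists)                        # undo the free-space propagation phase
                image[i] = np.abs(np.sum(measurements * steering)) ** 2  # coherent focusing metric
            return image                                                 # argmax gives the estimated location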

  16. Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments

    PubMed Central

    Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco

    2017-01-01

    Indoor positioning of mobile devices plays a key role in many aspects of our daily life, including real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still the subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer by means of receiving sensor arrays deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging problem by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis. PMID:29077071

  17. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with multimodal photoacoustic ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangyang; Zhang, Hao F.; Zhou, Lixiang; Jiao, Shuliang

    2012-02-01

    We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments believed to play opposite roles (protective vs. exacerbating) in the RPE during the aging process. We successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

  18. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory, through an algorithm, to reduce the multiplicative computational complexity of creating digital holograms. An object is considered as a set of point sources, and mathematical symmetry properties of both the kernel of the Fresnel integral and the image are exploited, where the image is modeled using group theory. The algorithm has a multiplicative complexity of zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
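
    One way to see why no multiplications are needed for binary images (a simplified shift-and-add sketch, not the paper's group-theoretic construction) is to precompute the Fresnel point-source kernel once and add a shifted copy of it for every nonzero pixel, leaving only (k - 1) × N complex additions per hologram:

        import numpy as np

        def binary_hologram(binary_img, kernel):
            """Accumulate a precomputed point-source kernel at every nonzero pixel (additions only)."""
            holo = np.zeros_like(kernel, dtype=complex)
            for y, x in zip(*np.nonzero(binary_img)):
                holo += np.roll(kernel, shift=(y, x), axis=(0, 1))       # circular shift as a simplification
            return holo

    Precomputing the kernel costs multiplications once, but the per-image accumulation is purely additive, matching the stated complexity.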

  19. Image-based tracking of the suturing needle during laparoscopic interventions

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kroehnert, A.; Bodenstedt, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2015-03-01

    One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite for assisting the suturing process is tracking the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. The method is evaluated on in vivo images from da Vinci interventions.
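
    The paper's specific color model and geometric criteria are not given in the abstract. A minimal sketch using OpenCV 4 (the HSV thresholds, minimum area and elongation criterion below are illustrative assumptions: the metallic needle is assumed to appear as bright, low-saturation pixels forming a long, thin contour):

        import cv2
        import numpy as np

        def detect_needle(bgr_frame):
            """Color-based candidate mask followed by a simple geometric (elongation) filter."""
            hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 0, 150), (180, 60, 255))         # bright, low-saturation pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            best = None
            for c in contours:
                if cv2.contourArea(c) < 100:                             # discard small speckles
                    continue
                (cx, cy), (w, h), angle = cv2.minAreaRect(c)
                elongation = max(w, h) / (min(w, h) + 1e-6)
                if elongation > 3.0 and (best is None or cv2.contourArea(c) > cv2.contourArea(best)):
                    best = c                                             # keep the largest elongated contour
            return best

    The bounding region of the returned contour would then serve as the region of interest handed to the reduced segmentation step.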

  20. A new approach for reducing beam hardening artifacts in polychromatic X-ray computed tomography using more accurate prior image.

    PubMed

    Wang, Hui; Xu, Yanan; Shi, Hongli

    2018-03-15

    Metal artifacts severely degrade CT image quality in clinical diagnosis and are difficult to remove, especially the beam hardening artifacts. Metal artifact reduction (MAR) methods based on prior images are the most frequently used. However, most prior images contain considerable misclassification caused by the absence of prior information such as the spectrum distribution of the X-ray beam source, especially when multiple or large metal objects are included. This work aims to obtain a more accurate prior image to improve image quality. The proposed method includes four steps. First, the metal image is segmented by thresholding an initial image, and the metal traces are identified in the initial projection data using the forward projection of the metal image. Second, the accurate absorbent model of the metal image is calculated according to the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients of the metal. Third, a new metal image is reconstructed by a general analytical reconstruction algorithm such as filtered back projection (FBP). The prior image is obtained by segmenting the difference image between the initial image and the new metal image into air, tissue and bone. Fourth, the initial projection data are normalized by dividing them, pixel by pixel, by the projection data of the prior image. The final corrected image is obtained by interpolation, denormalization and reconstruction. Several clinical images with dental fillings and knee prostheses were used to compare the proposed algorithm with the normalized metal artifact reduction (NMAR) and linear interpolation (LI) methods. The results demonstrate that the artifacts are reduced efficiently by the proposed method. The proposed method can obtain an exact prior image using the prior information about the X-ray beam source and the energy-dependent attenuation coefficients of the metal. As a result, better performance in reducing beam hardening artifacts can be achieved. Moreover, the process of the proposed method is rather simple and adds little extra computational burden. It has advantages over other algorithms when multiple and/or large implants are included.
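
    The normalization in step four is in the spirit of NMAR. A minimal sketch of the generic normalize / interpolate / denormalize / reconstruct loop (not the paper's spectral prior-image construction), assuming a parallel-beam geometry, a boolean metal-trace mask over the sinogram, and a recent scikit-image version:

        import numpy as np
        from skimage.transform import radon, iradon

        def prior_normalized_mar(sinogram, prior_image, metal_trace, theta):
            """Normalize projections by the prior's forward projection, bridge the metal trace,
            then denormalize and reconstruct."""
            prior_sino = np.maximum(radon(prior_image, theta=theta, circle=True), 1e-6)
            corrected = sinogram / prior_sino                            # flatten anatomy-driven variation
            for j in range(corrected.shape[1]):                          # interpolate across the metal trace
                col, bad = corrected[:, j], metal_trace[:, j]
                if bad.any() and not bad.all():
                    col[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), col[~bad])
            corrected *= prior_sino                                      # denormalize
            return iradon(corrected, theta=theta, circle=True, filter_name='ramp')

    The quality of the result hinges on the prior image: the fewer misclassified pixels it contains, the less anatomical detail is lost when the metal trace is bridged.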
