Sample records for image processing unit

  1. Maritime Domain Awareness: C4I for the 1000 Ship Navy

    DTIC Science & Technology

    2009-12-04

    unit action, provide unit sensed contacts, coordinate unit operations, process unit information, release image, and release contact report (Figure 33) ... Intelligence Tasking Request, Intelligence Summary, Release Unit Person Incident, Release Unit Vessel Incident, Process Intelligence Tasking, Release Image ... Figure 1: Functional Problem Sequence Process Flow.

  2. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. Integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  3. Model for mapping settlements

    DOEpatents

    Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.

    2016-07-05

    A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.

  4. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
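
    As a rough CPU-side illustration of the processing described above, the sketch below (function and parameter names are assumptions, not the authors' code) searches a 5 x 5 grid of axial/lateral pixel shifts, which yields the 25 candidate angiographic images mentioned in the abstract, and keeps the shift that minimizes the summed difference image before forming the squared-difference angiogram:

        # Hypothetical NumPy sketch of squared-difference OCT angiography with
        # bulk-tissue-motion (BTM) correction via an axial/lateral shift search.
        import numpy as np

        def btm_corrected_angiogram(frame_a, frame_b, max_shift=2):
            """Compensate bulk motion between two sequential structural frames,
            then return the squared-difference (intensity-based) angiogram."""
            frame_a = np.asarray(frame_a, dtype=float)
            frame_b = np.asarray(frame_b, dtype=float)
            best_cost, best_shift = np.inf, (0, 0)
            for dz in range(-max_shift, max_shift + 1):      # axial candidates
                for dx in range(-max_shift, max_shift + 1):  # lateral candidates
                    shifted = np.roll(frame_b, (dz, dx), axis=(0, 1))
                    cost = np.sum((frame_a - shifted) ** 2)  # bulk-motion criterion
                    if cost < best_cost:
                        best_cost, best_shift = cost, (dz, dx)
            aligned = np.roll(frame_b, best_shift, axis=(0, 1))
            return (frame_a - aligned) ** 2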

  5. Image forming apparatus

    DOEpatents

    Satoh, Hisao; Haneda, Satoshi; Ikeda, Tadayoshi; Morita, Shizuo; Fukuchi, Masakazu

    1996-01-01

    An image forming apparatus has a detachable process cartridge in which an image carrier, on which an electrostatic latent image is formed, and a developing unit, which develops the electrostatic latent image so that a toner image can be formed, are integrally formed into one unit. There is provided a developer container including a discharge section which can be inserted into a supply opening of the developing unit, and a container in which a predetermined amount of developer is contained, wherein the developer container is fitted to the toner supply opening of the developing unit and the developer is supplied into the developing unit housing when a toner stirring screw of the developing unit is rotated.

  6. Evaluation of the effects of the seasonal variation of solar elevation angle and azimuth on the processes of digital filtering and thematic classification of relief units

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.

    1983-01-01

    The effects of the seasonal variation of illumination on digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data covering orbit 150 and row 28 were selected, with illumination parameters varying from 43 deg to 64 deg for azimuth and from 30 deg to 36 deg for solar elevation, respectively. The IMAGE-100 system was used for the digital processing of the LANDSAT data. Original images were transformed by means of digital filtering so as to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is highly affected by illumination geometry, and there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.

  7. Compact hybrid optoelectrical unit for image processing and recognition

    NASA Astrophysics Data System (ADS)

    Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu

    1998-07-01

    In this paper a compact hybrid opto-electrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4 inch active matrix TFT liquid crystal display panel used as two real-time spatial light modulators, one for the input image and one for the reference template. CHOEU performs two main processing tasks: digital filtering and object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit to identify the important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive and negative cycle-encoding method is introduced to realize absolute-difference pixel matching simply on a correlator structure. The system has good fault tolerance against rotation distortion, Gaussian noise disturbance and information loss. Experiments are given at the end of this paper.

  8. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
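
    Because these operations are pixel independent, each output pixel can be computed by its own GPU thread; the NumPy sketch below (a simplified stand-in, not the CAPIDS code, and the filter weight is an assumption) shows two representative steps, flat-field correction and recursive temporal filtering:

        # Simplified stand-in for two pixel-independent CAPIDS-style steps.
        import numpy as np

        def flat_field_correct(raw, dark, flat):
            """Offset/gain correction applied identically to every pixel."""
            raw, dark, flat = (np.asarray(a, dtype=float) for a in (raw, dark, flat))
            denom = np.maximum(flat - dark, 1e-6)        # avoid division by zero
            gain = np.mean(flat - dark) / denom
            return (raw - dark) * gain

        def temporal_filter(current, previous_filtered, alpha=0.25):
            """First-order recursive temporal filter; on a GPU each pixel
            would be handled by an independent thread."""
            current = np.asarray(current, dtype=float)
            previous_filtered = np.asarray(previous_filtered, dtype=float)
            return alpha * current + (1.0 - alpha) * previous_filtered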

  9. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is not avoidable in CT images. The aim of this study was to develop a method for artefact reduction using the statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstructed images with weak artefacts were attempted using projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were continuously processed in sequence by successive iterative restoration where the projection data was generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, the general purpose graphic processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriments. A general-purpose graphic processing unit realized the high performance. A statistical reconstruction method was applied for the streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, small region of interest and general-purpose graphic processing unit achieved fast artefact correction.
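
    For reference, the maximum likelihood-expectation maximization update that this kind of statistical reconstruction is built on can be written as follows (the notation is assumed here, not taken from the paper: y_i are the measured projection data, a_ij the system matrix elements, and x_j^(k) the image estimate at iteration k); the ordered subset variant applies the same update cyclically over subsets of the projections:

        x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij} \, \frac{y_i}{\sum_{j'} a_{ij'} \, x_{j'}^{(k)}}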

  10. Evaluation of solar angle variation over digital processing of LANDSAT imagery. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.

    1984-01-01

    The effects of the seasonal variation of illumination over digital processing of LANDSAT images are evaluated. Original images are transformed by means of digital filtering to enhance their spatial features. The resulting images are used to obtain an unsupervised classification of relief units. After defining relief classes, which are supposed to be spectrally different, topographic variables (declivity, altitude, relief range and slope length) are used to identify the true relief units existing on the ground. The samples are also clustered by means of an unsupervised classification option. The results obtained for each LANDSAT overpass are compared. Digital processing is highly affected by illumination geometry. There is no correspondence between relief units as defined by spectral features and those resulting from topographic features.

  11. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With the increasing demand for extracting information from remote sensing images, it is urgent to enhance the imaging quality and imaging capability of the whole system through an integrated design that achieves a compact structure, low mass and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed in different devices. The volume, weight and power consumption of these two units are relatively large and cannot meet the requirements of a highly maneuverable remote sensing camera. Based on the technical requirements of such a camera, this paper designs a space-borne integrated signal processing and compression circuit by combining several technologies, including high-speed, high-density analog-digital mixed PCB design, embedded DSP technology and image compression based on special-purpose chips. This circuit lays a solid foundation for research on highly maneuverable remote sensing cameras.

  12. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    PubMed

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk-tissue-motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
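
    For context, a common interframe speckle variance definition used in SV processing (notation assumed here: I_ijk is the structural intensity at transverse pixel (i, j) in frame k of an N-frame gate, with N = 4 matching the n = 4 frames quoted above) is

        SV_{ij} = \frac{1}{N} \sum_{k=1}^{N} \left( I_{ijk} - \frac{1}{N} \sum_{k=1}^{N} I_{ijk} \right)^{2}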

  13. A dual-channel fusion system of visual and infrared images based on color transfer

    NASA Astrophysics Data System (ADS)

    Pei, Chuang; Jiang, Xiao-yu; Zhang, Peng-wei; Liang, Hao-cong

    2013-09-01

    The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a day-time reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on a TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit and an image output unit. Image registration of the dual-channel images is realized by combining hardware and software methods. A false-color fusion algorithm in RGB color space is used to obtain an R-G fused image, and the system then chooses a reference image to transfer color to the fusion result. A color lookup table based on statistical properties of the images is proposed to solve the computational complexity of color transfer. The mapping between the standard lookup table and the improved color lookup table is simple and needs to be calculated only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color appearance to human eyes and highlight targets effectively with clear background details. Human observers using this system can interpret the image better and faster, thereby improving situational awareness and reducing target detection time.

  14. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  15. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
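
    A minimal sketch of the kind of work partitioning described above, under the assumption of a toy per-worker kernel and thread-based dispatch (none of these names come from the paper): the block of A-scans is split into one chunk per GPU and the chunks are processed concurrently:

        # Illustrative multi-worker A-scan partitioning (not the authors' code).
        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def process_chunk(worker_id, ascans):
            # Placeholder for the per-GPU work (resampling, FFT, log scaling, ...);
            # in a real system worker_id would select the CUDA device.
            return np.abs(np.fft.fft(ascans, axis=1))

        def process_ascans(ascans, n_workers=2):
            chunks = np.array_split(ascans, n_workers)          # one chunk per GPU
            with ThreadPoolExecutor(max_workers=n_workers) as pool:
                results = pool.map(process_chunk, range(n_workers), chunks)
            return np.concatenate(list(results))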

  16. The Design, Implementation, and Evaluation of a Digital Interactive Globe System Integrated into an Earth Science Course

    ERIC Educational Resources Information Center

    Liou, Wei-Kai; Bhagat, Kaushal Kumar; Chang, Chun-Yen

    2018-01-01

    The aim of this study is to design and implement a digital interactive globe system (DIGS), by integrating low-cost equipment to make DIGS cost-effective. DIGS includes a data processing unit, a wireless control unit, an image-capturing unit, a laser emission unit, and a three-dimensional hemispheric body-imaging screen. A quasi-experimental study…

  17. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability.

  18. Usability of small impact craters on small surface areas in crater count dating: Analysing examples from the Harmakhis Vallis outflow channel, Mars

    NASA Astrophysics Data System (ADS)

    Kukkonen, S.; Kostama, V.-P.

    2018-05-01

    The availability of very high-resolution images has made it possible to extend crater size-frequency distribution studies to small, deca/hectometer-scale craters. This has enabled the dating of small and young surface units, as well as of recent, short-duration and small-scale geologic processes that have occurred on the units. Usually, however, the higher the spatial resolution of space images, the smaller the area covered by the images. Thus the use of single, very high-resolution images in crater count age determination may be debatable if the images do not cover the studied region entirely. Here we compare the crater count results for the floor of the Harmakhis Vallis outflow channel obtained from images of the Context Camera (CTX) and the High Resolution Imaging Science Experiment (HiRISE) aboard the Mars Reconnaissance Orbiter (MRO). The CTX images enable crater counts for entire units on the Harmakhis Vallis main valley, whereas the coverage of the higher-resolution HiRISE images is limited, so they can only be used to date small parts of the units. Our case study shows that the crater count data based on small impact craters and small surface areas mainly correspond with the crater count data based on larger craters and more extensive counting areas on the same unit. Where differences between the results were found, they could usually be explained by the regional geology; typically, these differences appeared when at least one cratering model age was missing from either of the crater datasets. On the other hand, we found only a few cases in which the cratering model ages were completely different. We conclude that crater counts using small impact craters on small counting areas provide useful information about the geological processes which have modified the surface. However, it is important to remember that crater count results obtained from a specific counting area always primarily represent that counting area, not the whole unit. Nevertheless, together with crater count results from extensive counting areas and lower-resolution images, crater counting on small areas using very high-resolution images is a valuable tool for obtaining unique additional information about the local processes on the surface units.

  19. Real-time blood flow visualization using the graphics processing unit

    NASA Astrophysics Data System (ADS)

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were performed also at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
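
    The SFI computation itself is a local-statistics operation; the sketch below uses one common convention (local speckle contrast K = sigma/mean over a sliding window, with 1/K^2 as a relative flow index) and is an illustrative assumption rather than the exact definition used in the paper:

        # Illustrative relative speckle flow index from a raw speckle image.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_flow_index(raw_speckle, window=7):
            img = np.asarray(raw_speckle, dtype=float)
            mean = uniform_filter(img, window)               # local mean
            mean_sq = uniform_filter(img ** 2, window)       # local mean of squares
            var = np.maximum(mean_sq - mean ** 2, 0.0)
            k = np.sqrt(var) / np.maximum(mean, 1e-12)       # local speckle contrast
            return 1.0 / np.maximum(k ** 2, 1e-12)           # relative flow index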

  20. Real-time blood flow visualization using the graphics processing unit

    PubMed Central

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were performed also at ∼10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark. PMID:21280915

  1. Computer Vision for Artificially Intelligent Robotic Systems

    NASA Astrophysics Data System (ADS)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up-table method, which saves a large amount of calculation time and is practicable. AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles and intensity of the target we can determine the characteristics of the target, and all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of short ultrasonic bursts, because the correlation system can transmit large time-bandwidth signals and obtain their resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by a mu-law coding method, and this data, together with the delay time T and the angle information θH and θV, is sent to the main control unit for further analysis. For the recognition process we use a dynamic look-up-table method: first, several recognition pattern tables are set up, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, we use a narrow-beam transducer with an input voltage of 50 V peak-to-peak. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image lab of the robot's memory. Index terms: acoustic system, ultrasonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree, quad approach.

  2. Improved Performance Characteristics For Indium Antimonide Photovoltaic Detector Arrays Using A FET-Switched Multiplexing Technique

    NASA Astrophysics Data System (ADS)

    Ma, Yung-Lung; Ma, Chialo

    1987-03-01


  3. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain an enormous amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To work around this, many users crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI depends strongly on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and varied automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data, with scaling, translation, and other operations controlled using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to provide information for biologists, which requires quantitative data about the images. We therefore label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select the object to be analyzed, and our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
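
    As one concrete example of the automatic intensity thresholding such a tool can offer (the paper does not name its algorithms, so Otsu's method is used here purely as an illustration):

        # Otsu's threshold as a representative automatic thresholding method.
        import numpy as np

        def otsu_threshold(volume, bins=256):
            hist, edges = np.histogram(np.asarray(volume).ravel(), bins=bins)
            p = hist.astype(float) / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)                                 # class-0 probability
            w1 = 1.0 - w0                                     # class-1 probability
            cum_mean = np.cumsum(p * centers)
            m0 = cum_mean / np.maximum(w0, 1e-12)
            m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
            between_var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
            return centers[np.argmax(between_var)]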

  4. Using hyperspectral imaging technology to identify diseased tomato leaves

    NASA Astrophysics Data System (ADS)

    Li, Cuiling; Wang, Xiu; Zhao, Xueguan; Meng, Zhijun; Zou, Wei

    2016-11-01

    During tomato plant growth, genetic factors, poor environmental conditions, or parasite infestation can generate a series of unusual symptoms in the plants' physiology, tissue structure and external form; as a result, the plants cannot grow normally, which in turn reduces tomato yield and economic benefit. A hyperspectral image usually has high spectral resolution and contains not only spectral information but also image information, so this study adopted hyperspectral imaging technology to identify diseased tomato leaves and developed a simple hyperspectral imaging system, including a halogen lamp light source unit, a hyperspectral image acquisition unit and a data processing unit. The spectrometer detection wavelength ranged from 400 nm to 1000 nm. After hyperspectral images of tomato leaves were captured, they were calibrated. This research used a spectral angle matching method and a spectral red edge parameters discriminant method, respectively, to identify diseased tomato leaves. The spectral red edge parameters discriminant method produced higher recognition accuracy, above 90%. The results show that using hyperspectral imaging technology to identify diseased tomato leaves is feasible and provides a discriminant basis for subsequent disease control of tomato plants.

  5. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, real-time image processing capability, low memory consumption and broad applicability.
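
    For readers unfamiliar with the underlying problem, the sketch below shows a plain Jacobi iteration for the discrete Poisson equation used in seamless cloning; it is only a CPU reference for the equation being solved, not the authors' matrix-decomposition GPU solver:

        # Jacobi iteration for the discrete Poisson equation (seamless cloning).
        import numpy as np

        def poisson_blend_jacobi(target, source, mask, iters=2000):
            """Solve lap(f) = lap(source) inside mask, with f = target outside."""
            f = np.asarray(target, dtype=float).copy()
            s = np.asarray(source, dtype=float)
            guidance = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                        np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4.0 * s)  # lap(source)
            inside = np.asarray(mask, dtype=bool)
            for _ in range(iters):
                neighbors = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                             np.roll(f, 1, 1) + np.roll(f, -1, 1))
                f[inside] = (neighbors[inside] - guidance[inside]) / 4.0
            return f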

  6. [Quality control of laser imagers].

    PubMed

    Winkelbauer, F; Ammann, M; Gerstner, N; Imhof, H

    1992-11-01

    Multiformat imagers based on laser systems are used for documentation in an increasing number of investigations. The specific problems of quality control are explained, and the constancy of film processing is investigated in imager systems of different configuration, with (Machine 1: 3M-Laser-Imager-Plus M952 with connected 3M Film-Processor, 3M-Film IRB, X-Ray Chemical Mixer 3M-XPM, 3M-Developer and Fixer) or without (Machine 2: 3M-Laser-Imager-Plus M952 with separate DuPont-Cronex Film-Processor, Kodak IR-Film, Kodak Automixer, Kodak-Developer and Fixer) a directly connected film processing unit. In our checks based on DIN 6868 and ONORM S 5240, film processing in the equipment with the directly adapted film processing unit remained constant in accordance with DIN and ONORM; the constancy checks demanded by DIN 6868 could therefore be performed at longer intervals for this equipment. Systems with conventional darkroom processing show comparatively increased fluctuation, and hence the demanded daily control is essential to guarantee appropriate reaction and constant quality of documentation.

  7. Global Pressure- and Temperature-Measurements in 1.27-m JAXA Hypersonic Wind Tunnel

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Miyazaki, T.; Nakagawa, M.; Tsuda, S.; Sakaue, H.

    The pressure-sensitive paint (PSP) technique has been widely used in aerodynamic measurements. A PSP is a global optical sensor consisting of a luminophore and a binding material. The luminophore emits luminescence related to the oxygen concentration through oxygen quenching. In an aerodynamic measurement, the oxygen concentration is related to the partial pressure of oxygen and thus to the static pressure, so the luminescent signal can be related to a static pressure [1]. The PSP measurement system consists of a PSP-coated model, an image acquisition unit, and an image processing unit (Fig. 1). For the image acquisition, an illumination source and a photo-detector are required. To separate the illumination from the PSP emission detected by the photo-detector, appropriate band-pass filters are placed in front of the illumination source and the photo-detector. The image processing unit includes the calibration and the computation. The calibration relates the luminescent signal to pressure and temperature; based on these calibrations, luminescent images are converted to a pressure map.
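
    The calibration step mentioned above is commonly expressed in a Stern-Volmer form; the relation below is a generic textbook version, not necessarily the exact calibration used by the authors (I is the luminescent intensity, P the static pressure, the subscript "ref" a wind-off reference condition, and A(T), B(T) temperature-dependent coefficients):

        \frac{I_{\mathrm{ref}}}{I} = A(T) + B(T) \, \frac{P}{P_{\mathrm{ref}}}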

  8. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
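
    A minimal sketch of the final compositing stage, under the assumption that each processing unit returns an RGBA subimage with premultiplied color and that the back-to-front visibility order is already known (as the abstract states, it can be determined a priori):

        # Back-to-front "over" compositing of per-node subimages.
        import numpy as np

        def composite_over(subimages_back_to_front):
            """Each subimage is an (H, W, 4) float array with premultiplied RGB."""
            out = np.zeros_like(np.asarray(subimages_back_to_front[0], dtype=float))
            for img in subimages_back_to_front:
                img = np.asarray(img, dtype=float)
                alpha = img[..., 3:4]
                out = img + (1.0 - alpha) * out      # front-over-back accumulation
            return out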

  9. GPU Accelerated Ultrasonic Tomography Using Propagation and Back Propagation Method

    DTIC Science & Technology

    2015-09-28

    the medical imaging field using GPUs has been done for many years. In [1], Copeland et al. used 2D images, obtained by X-ray projections, to ... Index Terms: Medical Imaging, Ultrasonic Tomography, GPU, CUDA, Parallel Computing. I. INTRODUCTION: Graphics Processing Units (GPUs) are computation ... Imaging Algorithm: The process of reconstructing images from ultrasonic information starts with the following acoustical wave equation: ∂²/∂t² u(x ...

  10. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and support the treatment process; medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, the Sobel and Prewitt filters, and the Roberts’ Cross technique are examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831

  11. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms.

    PubMed

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and support the treatment process; medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one way of achieving real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, the Sobel and Prewitt filters, and the Roberts' Cross technique are examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2-100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms.
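
    For reference, a plain CPU/NumPy version of one of the edge detectors named above (the Sobel operator); the paper's contribution is the CUDA-parallel implementation, which is not reproduced here:

        # Reference Sobel gradient-magnitude edge detector (CPU version).
        import numpy as np
        from scipy.ndimage import convolve

        def sobel_edges(image):
            img = np.asarray(image, dtype=float)
            kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
            ky = kx.T                                # vertical-gradient kernel
            gx = convolve(img, kx)
            gy = convolve(img, ky)
            return np.hypot(gx, gy)                  # gradient magnitude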

  12. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  13. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  14. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  15. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  16. 39 CFR 3050.24 - Documentation supporting estimates of costs avoided by worksharing and other mail characteristics...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Computer Reader finalization costs, cost per image, and Remote Bar Code Sorter leakage; (8) Percentage of... processing units costs for Carrier Route, High Density, and Saturation mail; (j) Mail processing unit costs...

  17. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  18. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  19. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  20. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  1. 21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...

  2. A Closed-Loop Proportional-Integral (PI) Control Software for Fully Mechanically Controlled Automated Electron Microscopic Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    REN, GANG; LIU, JINXIN; LI, HONGCHANG

    A closed-loop proportional-integral (PI) control software is provided for fully mechanically controlled automated electron microscopic tomography. The software is developed based on Gatan DigitalMicrograph and is compatible with the Zeiss LIBRA 120 transmission electron microscope; it can be expanded to other TEM instruments with modification. The software consists of a graphical user interface, a digital PI controller, an image analyzing unit, and other drive units (i.e., an image acquisition unit and a goniometer drive unit). During a tomography data collection process, the image analyzing unit analyzes both the accumulated shift and the defocus value of the latest acquired image and provides the results to the digital PI controller. The digital PI controller compares the results with the preset values and determines the optimum adjustments of the goniometer. The goniometer drive unit adjusts the spatial position of the specimen according to the instructions given by the digital PI controller for the next tilt angle and image acquisition, achieving high-precision positioning by using a backlash elimination method. The major benefits of the software are: 1) the goniometer drive unit keeps pre-aligned/optimized beam conditions unchanged and achieves position tracking solely through mechanical control; 2) the image analyzing unit relies only on historical data and therefore does not require additional images/exposures; 3) the PI controller enables the system to dynamically track the imaging target with extremely low system error.
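
    A minimal sketch of the digital PI control law described above; the gains, variable names, and units are illustrative assumptions rather than values from the released software:

        # Discrete proportional-integral (PI) controller sketch.
        class PIController:
            def __init__(self, kp, ki, setpoint=0.0):
                self.kp, self.ki = kp, ki
                self.setpoint = setpoint
                self.integral = 0.0

            def update(self, measurement, dt=1.0):
                error = self.setpoint - measurement        # e.g., accumulated image shift
                self.integral += error * dt
                return self.kp * error + self.ki * self.integral  # goniometer correction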

  3. High efficient optical remote sensing images acquisition for nano-satellite-framework

    NASA Astrophysics Data System (ADS)

    Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi

    2017-09-01

    Implementing optical Earth observation missions on a nano-satellite (NanoSat) is more difficult and challenging than on a conventional satellite because of limitations on volume, weight and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and storage space; it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire process of image acquisition and compression can be integrated in the photodetector array chip, so that the output data of the chip is already compressed. No extra image compression unit is needed, and the power, volume, and weight consumed by a conventional onboard image compression unit can be largely saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can be easily built in a CMOS architecture; a quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of compressive sensing (CS) based methods. The framework holds promise for wide use in the future.

  4. (abstract) Topographic Signatures in Geology

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.; Evans, Diane L.

    1996-01-01

    Topographic information is required for many Earth Science investigations. For example, topography is an important element in regional and global geomorphic studies because it reflects the interplay between the climate-driven processes of erosion and the tectonic processes of uplift. A number of techniques have been developed to analyze digital topographic data, including Fourier texture analysis. A Fourier transform of the topography of an area allows the spatial frequency content of the topography to be analyzed. Band-pass filtering of the transform produces images representing the amplitude of different spatial wavelengths. These are then used in a multi-band classification to map units based on their spatial frequency content. The results using a radar image instead of digital topography showed good correspondence to a geologic map; however, brightness variations in the image unrelated to topography caused errors. An additional benefit of using Fourier band-pass images for the classification is that the textural signatures of the units are quantitative measures of the spatial characteristics of the units that may be used to map similar units in similar environments.
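
    A short sketch of how such Fourier band-pass texture images can be produced from a digital elevation model (an assumed implementation for illustration, not the authors' code): transform the topography, keep an annulus of spatial frequencies, and take the amplitude of the inverse transform as one classification band:

        # Fourier band-pass texture image from a digital elevation model.
        import numpy as np

        def bandpass_texture(dem, f_low, f_high):
            dem = np.asarray(dem, dtype=float)
            ny, nx = dem.shape
            fy = np.fft.fftfreq(ny)[:, None]
            fx = np.fft.fftfreq(nx)[None, :]
            radius = np.sqrt(fx ** 2 + fy ** 2)
            band = (radius >= f_low) & (radius < f_high)    # annular band-pass mask
            spectrum = np.fft.fft2(dem) * band
            return np.abs(np.fft.ifft2(spectrum))           # band-limited amplitude image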

  5. Analysis of very-high-resolution Galileo images of Europa: Implications for small-scale structure and surface evolution

    NASA Astrophysics Data System (ADS)

    Leonard, E. J.; Pappalardo, R. T.; Yin, A.; Prockter, L. M.; Patthoff, D. A.

    2014-12-01

    The Galileo Solid State Imager (SSI) recorded nine very high-resolution frames (8 at 12 m/pixel and 1 at 6 m/pixel) during the E12 flyby of Europa in Dec. 1997. To understand the implications for the small-scale structure and evolution of Europa, we mosaicked these frames (observations 12ESMOTTLE01 and 02, incidence ≈18°, emission ≈77°) into their regional context (part of observation 11ESREGMAP01, 220 m/pixel, incidence ≈74°, emission ≈23°), despite their very different viewing and lighting conditions. We created a map of geological units based on morphology, structure, and albedo along with stereoscopic images where the frames overlapped. The highly diverse units range from: high albedo sub-parallel ridge and grooved terrain; to variegated-albedo hummocky terrain; to low albedo and relatively smooth terrain. We classified and analyzed the diverse units solely based on the high-resolution image mosaic, prior to comparison to the context image, to obtain an in-depth look at possible surface evolution and underlying formational processes. We infer that some of these units represent different stages and forms of resurfacing, including cryovolcanic and tectonic resurfacing. However, significant morphological variation among units in the region indicates that there are different degrees of resurfacing at work. We have created candidate morphological sequences that provide insight into the conversion of ridged plains to chaotic terrain—generally, a process of subduing formerly sharp features through tectonic modification and/or cryovolcanism. When the map of the high-resolution area is compared to the regional context, features that appear to be one unit at regional resolution are comprised of several distinct units at high resolution, and features that appear to be smooth in the context image are found to show distinct textures. Moreover, in the context image, transitions from ridged units to disrupted units appear to be gradual; however the high-resolution image reveals them to be abrupt, suggesting tectonic control of these boundaries. These discrepancies could have important implications for a future landed exploration.

  6. Obstacle penetrating dynamic radar imaging system

    DOEpatents

    Romero, Carlos E [Livermore, CA; Zumstein, James E [Livermore, CA; Chang, John T [Danville, CA; Leach, Jr Richard R. [Castro Valley, CA

    2006-12-12

    An obstacle penetrating dynamic radar imaging system for the detection, tracking, and imaging of an individual, animal, or object comprising a multiplicity of low power ultra wideband radar units that produce a set of return radar signals from the individual, animal, or object, and a processing system for said set of return radar signals for detection, tracking, and imaging of the individual, animal, or object. The system provides a radar video system for detecting and tracking an individual, animal, or object by producing a set of return radar signals from the individual, animal, or object with a multiplicity of low power ultra wideband radar units, and processing said set of return radar signals for detecting and tracking of the individual, animal, or object.

  7. Interactive brain shift compensation using GPU based programming

    NASA Astrophysics Data System (ADS)

    van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf

    2009-02-01

    Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can be used for image processing 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery, using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively raytraced 3D brain image. GPU based programming enables real-time processing of high definition image datasets, and various applications can be developed in medicine, optics and image sciences.

  8. Radial line method for rear-view mirror distortion detection

    NASA Astrophysics Data System (ADS)

    Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah, .

    2015-01-01

    An image of an object can be distorted due to a defect in a mirror. A rear-view mirror is an important component for vehicle safety, and one of its standard parameters is the distortion factor. This paper presents a radial line method for distortion detection of the rear-view mirror. The rear-view mirror was tested for distortion by using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured images from the webcam were pre-processed using smoothing and sharpening techniques, and then a radial line method was used to define the distortion factor. It was demonstrated successfully that the radial line method could be used to define the distortion factor. This detection system is useful for implementation in, for example, the Indonesian automotive component industry, where manual inspection is still used.

  9. A mobile unit for memory retrieval in daily life based on image and sensor processing

    NASA Astrophysics Data System (ADS)

    Takesumi, Ryuji; Ueda, Yasuhiro; Nakanishi, Hidenobu; Nakamura, Atsuyoshi; Kakimori, Nobuaki

    2003-10-01

    We developed a mobile unit whose purpose is to support memory retrieval in daily life. In this paper, we describe the two characteristic features of this unit: (1) behavior classification with an acceleration sensor, and (2) extraction of environmental changes with image processing technology. For (1), by analyzing the power and frequency of an acceleration sensor oriented along the direction of gravity, the user's activities can be classified into walking, staying, and so on. For (2), by extracting the difference between the beginning scene and the ending scene of a stay period with image processing, what the user has done is recognized as a change of the environment. Using these two techniques, specific scenes of daily life can be extracted, and important information at scene changes can be recorded. In particular, we describe the effectiveness of the unit in supporting the retrieval of important things, such as an item left behind or the state of half-finished work.
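
    The environment-difference step could in principle be approximated by simple frame differencing between the beginning and ending scenes; the snippet below is only an illustrative guess at that idea, and the grayscale conversion and threshold value are assumptions, not the unit's actual algorithm.

    ```python
    import numpy as np

    def scene_difference(begin_rgb, end_rgb, threshold=30):
        """Return a binary mask of pixels that changed between two scenes.

        begin_rgb, end_rgb: HxWx3 uint8 frames captured at the start and end
        of a 'stay' period. The threshold (in gray levels) is an assumption.
        """
        gray_a = begin_rgb.astype(np.float32).mean(axis=2)
        gray_b = end_rgb.astype(np.float32).mean(axis=2)
        diff = np.abs(gray_b - gray_a)
        return diff > threshold  # True where the environment appears to have changed
    ```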

  10. The whole mesh deformation model: a fast image segmentation method suitable for effective parallelization

    NASA Astrophysics Data System (ADS)

    Lenkiewicz, Przemyslaw; Pereira, Manuela; Freire, Mário M.; Fernandes, José

    2013-12-01

    In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have arisen from the combination of several factors: (1) significant growth of medical image volume sizes due to the increasing capabilities of medical acquisition devices; (2) the desire to increase the complexity of image processing algorithms in order to explore new functionality; (3) a change in processor development towards multiple processing units instead of growing bus speeds and the number of operations per second of a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times, independently of image contents. The model also offers a good ability for topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.

  11. Statistical normalization techniques for magnetic resonance imaging.

    PubMed

    Shinohara, Russell T; Sweeney, Elizabeth M; Goldsmith, Jeff; Shiee, Navid; Mateen, Farrah J; Calabresi, Peter A; Jarso, Samson; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2014-01-01

    While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
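
    As one deliberately simple example of a normalization with a biologically interpretable unit, the sketch below z-scores an MR volume against a reference tissue mask so that intensities read as standard deviations from that tissue's mean; this illustrates the general idea only, not the specific techniques proposed in the paper, and the reference mask is assumed to be available.

    ```python
    import numpy as np

    def zscore_normalize(volume, tissue_mask):
        """Express MR intensities as standard deviations from a reference tissue.

        volume: 3D array of arbitrary-unit MR intensities.
        tissue_mask: boolean 3D array selecting a reference tissue
                     (e.g. normal-appearing white matter) -- an assumption here.
        """
        ref = volume[tissue_mask]
        mu, sigma = ref.mean(), ref.std()
        return (volume - mu) / sigma   # now comparable across scans and subjects
    ```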

  12. Bio-inspired multi-mode optic flow sensors for micro air vehicles

    NASA Astrophysics Data System (ADS)

    Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik

    2013-06-01

    Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro-air-vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm, modified from the conventional EMD (Elementary Motion Detector) algorithm, to give an optimum partitioning of hardware blocks in the analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and timestamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external timestamp processing, respectively.

  13. Analysis of urban regions using AVHRR thermal infrared data

    USGS Publications Warehouse

    Wright, Bruce

    1993-01-01

    Using 1-km AVHRR satellite data, relative temperature differences caused by conductivity and inertia were used to distinguish urban and non-urban land covers. AVHRR data that were composited on a biweekly basis and distributed by the EROS Data Center in Sioux Falls, South Dakota, were used for the classification process. These composited images are based on the maximum normalized difference vegetation index (NDVI) of each pixel during the 2-week period using channels 1 and 2. The resultant images are nearly cloud-free and reduce the need for extensive reclassification processing. Because of the physiographic differences between the Eastern and Western United States, the initial study was limited to the eastern half of the United States. In the East, the time of maximum difference between the urban surfaces and the vegetated non-urban areas is the peak greenness period in late summer. A composite image of the Eastern United States for the 2-week period from August 30 to September 16, 1991, was used for the extraction of the urban areas. Two channels of thermal data (channels 3 and 4), normalized for regional temperature differences, and a composited NDVI image were classified using conventional image processing techniques. The results compare favorably with other large-scale urban area delineations.
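
    For reference, per-pixel NDVI and maximum-NDVI compositing of the kind described can be sketched as follows; the stacked-array input format is an assumption, and this is not the EROS Data Center's production code.

    ```python
    import numpy as np

    def max_ndvi_composite(red_stack, nir_stack):
        """Composite a stack of daily AVHRR scenes by maximum NDVI per pixel.

        red_stack, nir_stack: arrays of shape (days, rows, cols) holding AVHRR
        channel 1 (visible red) and channel 2 (near-infrared) values.
        Returns the per-pixel maximum NDVI and the index of the day chosen.
        """
        ndvi = (nir_stack - red_stack) / (nir_stack + red_stack + 1e-6)
        best_day = ndvi.argmax(axis=0)                 # day of greenest view per pixel
        rows, cols = np.indices(best_day.shape)
        return ndvi[best_day, rows, cols], best_day
    ```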

  14. Geometric correction of synchronous scanned Operational Modular Imaging Spectrometer II hyperspectral remote sensing images using spatial positioning data of an inertial navigation system

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao

    2015-01-01

    High-precision geometric correction of airborne hyperspectral remote sensing images is a difficult problem, and conventional correction methods based on selecting ground control points are not suitable for airborne hyperspectral images. An inertial measurement unit combined with a differential global positioning system (IMU/DGPS) is introduced to correct the synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude parameters, which were synchronized with OMIS II, were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were carried out. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Better image processing results were thus achieved.

  15. Space imaging infrared optical guidance for autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu

    2008-08-01

    We have developed the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle based on an uncooled infrared camera and a focusing technique to detect objects to be evaded and to set the drive path. For this purpose we built a servomotor drive system to control the focus function of the infrared camera lens. To determine the best focus position we use auto-focus image processing based on the Daubechies wavelet transform with 4 terms. The determined best focus position is then converted into the distance to the object. We built an aluminum-frame ground vehicle, 900 mm long and 800 mm wide, to mount the auto-focus infrared unit; the vehicle carries an Ackerman front steering system and a rear motor drive system. To confirm the guidance ability of the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle, we conducted experiments on the ability of the infrared auto-focus unit to detect an actual car on the road and the roadside wall. As a result, the auto-focus image processing based on the Daubechies wavelet transform clearly detects the best-focus image and gives the depth of the object from the infrared camera unit.
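
    A wavelet-based focus measure of the kind described can be sketched with the PyWavelets package: the energy of the detail sub-bands of a single-level Daubechies decomposition peaks when the image is in best focus. Reading "Daubechies ... with 4 terms" as the four-tap 'db2' filter is our assumption.

    ```python
    import numpy as np
    import pywt

    def wavelet_focus_measure(gray_image):
        """Return a scalar focus score from the detail energy of a db2 DWT."""
        _, (cH, cV, cD) = pywt.dwt2(gray_image.astype(np.float64), 'db2')
        return float(np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2))

    def best_focus_index(image_stack):
        """Pick the sharpest frame from a focus sweep (list of grayscale frames)."""
        scores = [wavelet_focus_measure(img) for img in image_stack]
        return int(np.argmax(scores))
    ```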

  16. Real time 3D structural and Doppler OCT imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming to real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of the flows in capillary vessels, is presented. Generally, the time required to process FdOCT data on the computer's main processor (CPU) is the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem: exploiting them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT performs the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging of volume data built of 220 × 100 A-scans is performed in the same mode at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads, and the optimizations applied are described. For illustration, screenshots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.
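
    At its core, the FdOCT processing being moved to the GPU is an inverse Fourier transform of each resampled spectrum, followed (for Doppler) by a phase comparison of consecutive A-scans. The NumPy sketch below shows that pipeline on the CPU purely to illustrate the arithmetic; spectral resampling and dispersion compensation are omitted, and the input format is assumed.

    ```python
    import numpy as np

    def structural_and_doppler(spectra):
        """Minimal FdOCT processing for one B-scan.

        spectra: array (n_ascans, n_pixels) of background-subtracted,
                 k-linearized spectra (an assumption about the input format).
        Returns a log-intensity structural image and a Doppler phase-shift image.
        """
        ascans = np.fft.ifft(spectra, axis=1)[:, : spectra.shape[1] // 2]
        structural = 20.0 * np.log10(np.abs(ascans) + 1e-12)
        # Doppler: phase difference between adjacent A-scans, proportional to axial flow.
        doppler = np.angle(ascans[1:] * np.conj(ascans[:-1]))
        return structural, doppler
    ```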

  17. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA) (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, USA, (behind the stand) and NASA’s Richard Parker (seated) watch the images on a monitor to inspect for corrosion.

    NASA Image and Video Library

    2003-09-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA) (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, USA, (behind the stand) and NASA’s Richard Parker (seated) watch the images on a monitor to inspect for corrosion.

  18. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, NASA’s Richard Parker (below left) and Peggy Ritchie, with USA, (at right) watch the images on a monitor to inspect for corrosion.

    NASA Image and Video Library

    2003-09-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, NASA’s Richard Parker (below left) and Peggy Ritchie, with USA, (at right) watch the images on a monitor to inspect for corrosion.

  19. KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, with USA, (behind the stand) and NASA’s Richard Parker watch the images on a monitor to inspect for corrosion.

    NASA Image and Video Library

    2003-09-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, while Greg Harlow, with United Space Alliance (USA), (above) threads a camera under the tiles of the orbiter Endeavour, Peggy Ritchie, with USA, (behind the stand) and NASA’s Richard Parker watch the images on a monitor to inspect for corrosion.

  20. The universal toolbox thermal imager

    NASA Astrophysics Data System (ADS)

    Hollock, Steve; Jones, Graham; Usowicz, Paul

    2003-09-01

    The introduction of Microsoft Pocket PC 2000/2002 has seen a standardisation of the operating systems used by the majority of PDA manufacturers. This, coupled with the recent price reductions associated with these devices, has led to a rapid increase in the sales of such units; their use is now common in industrial, commercial and domestic applications throughout the world. This paper describes the results of a programme to develop a thermal imager that will interface directly to all of these units so as to take advantage of the existing and future installed base of such devices. The imager currently interfaces with virtually any Pocket PC that provides the necessary processing, display and storage capability; as an alternative, the output of the unit can be visualised and processed in real time using a PC or laptop computer. In future, the open architecture employed by this imager will allow it to support all mobile computing devices, including phones and PDAs. The imager has been designed for one-handed or two-handed operation so that it may be pointed at awkward angles or used in confined spaces; this flexibility of use, coupled with the extensive feature range and exceedingly low cost of the imager, is extending the marketplace for thermal imaging from military and professional, through industrial, to the commercial and domestic marketplaces.

  1. IOTA: integration optimization, triage and analysis tool for the processing of XFEL diffraction images.

    PubMed

    Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T

    2016-06-01

    Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.

  2. A cost analysis comparing xeroradiography to film technics for intraoral radiography.

    PubMed

    Gratt, B M; Sickles, E A

    1986-01-01

    In the United States during 1978, $730 million was spent on dental radiographic services. Currently there are three alternatives for the processing of intraoral radiographs: manual wet-tanks, automatic film units, or xeroradiography. It was the intent of this study to determine which processing system is the most economical. Cost estimates were based on a usage rate of 750 patient images per month and included a calculation of the average cost per radiograph over a five-year period. Capital costs included initial processing equipment and site preparation. Operational costs included labor, supplies, utilities, darkroom rental, and breakdown costs. Clinical time trials were employed to measure examination times. Maintenance logs were employed to assess labor costs. Indirect costs of training were estimated. Results indicated that xeroradiography was the most cost effective ($0.81 per image) compared to either automatic film processing ($1.14 per image) or manual processing ($1.35 per image). Variations in projected costs indicated that if a dental practice performs primarily complete-mouth surveys, exposes less than 120 radiographs per month, and pays less than $6.50 per hour in wages, then manual (wet-tank) processing is the most economical method for producing intraoral radiographs.

  3. Accelerating image recognition on mobile devices using GPGPU

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2011-01-01

    The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics processing units are very well suited for parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement complete, energy-efficient algorithms. At the moment the first mobile graphics accelerators with programmable pipelines are available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phases. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit, and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved, and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
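
    For orientation, the basic 8-neighbour local binary pattern referred to above can be computed as in the plain NumPy sketch below; this is a CPU reference of the standard LBP operator, not the OpenGL ES shader implementation described in the paper.

    ```python
    import numpy as np

    def lbp_8neighbour(gray):
        """Standard 8-neighbour local binary pattern codes for the image interior."""
        g = gray.astype(np.int32)
        c = g[1:-1, 1:-1]                      # centre pixels
        # Neighbours in a fixed clockwise order, each compared against the centre.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
            codes |= ((neighbour >= c).astype(np.int32) << bit)
        return codes                           # values in [0, 255]
    ```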

  4. Medical image processing on the GPU - past, present and future.

    PubMed

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. SU-E-J-129: A Strategy to Consolidate the Image Database of a VERO Unit Into a Radiotherapy Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Y; Medin, P; Yordy, J

    2014-06-01

    Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data to facilitate clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall. It has been a challenge to transfer images from this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect VERO and the TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via a DICOM connection to the TMS. The residual image registration errors are overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense newly registered images using negligible computing resources of the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images over a DICOM connection is the easiest way to consolidate images from various sources in a TMS. Technically, the attending does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit.

  6. Design and implementation of the monitoring system for underground coal fires in Xinjiang region, China

    NASA Astrophysics Data System (ADS)

    Li-bo, Dang; Jia-chun, Wu; Yue-xing, Liu; Yuan, Chang; Bin, Peng

    2017-04-01

    Underground coal fire (UCF) is a serious problem in the Xinjiang region of China. In order to deal with this problem efficiently, a UCF monitoring system based on wireless communication technology and remote sensing images was designed and implemented by the Xinjiang Coal Fire Fighting Bureau. This system consists of three parts: the data collecting unit, the data processing unit and the data output unit. For the data collecting unit, temperature sensors and gas sensors were installed together at sites 1.5 meters below the surface of the coal fire zone. Information on the temperature and gas at these sites was transferred immediately to the data processing unit. The processing unit was developed by coding based on GIS software. The processed data are saved in the computer in table format and can be displayed on the screen as curves. A remote sensing image of each coal fire is saved in the system as the background for each monitoring site. From the monitoring data, the changes in the coal fires are displayed directly, providing a solid basis for analyzing the combustion status of each coal fire, the gas emission, and the probable dominant direction of coal fire propagation, which is helpful for decision-making on coal fire extinction.

  7. Motor unit action potential conduction velocity estimated from surface electromyographic signals using image processing techniques.

    PubMed

    Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira

    2015-09-17

    In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and firing rate of the motor units (MUs). The CV can be used in the evaluation of contractile properties of MUs, and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals, using digital image processing techniques. The proposed approach is demonstrated and evaluated, using both simulated and experimentally-acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV for individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.

  8. A portable liquid crystal-based polarized light system for the detection of organophosphorus nerve gas.

    PubMed

    He, Feng Jie; Liu, Hui Long; Chen, Long Cong; Xiong, Xing Liang

    2018-03-01

    Liquid crystal (LC)-based sensors have the advantageous properties of being fast, sensitive, and label-free, and their results can be read directly with the naked eye alone. However, the inherent disadvantages of LC sensors, such as their heavy reliance on polarizing microscopes and the difficulty of quantification, have limited the possibility of field applications. Herein, we have addressed these issues by constructing a portable polarized detection system with constant temperature control. This system is mainly composed of four parts: the LC cell, the optics unit, the automatic temperature control unit, and the image processing unit. The LC cell was based on the ordering transitions of LCs in the presence of analytes. The optics unit, based on the imaging principle of LCs, was designed to substitute for the polarizing microscope for real-time observation. The image processing unit is expected to quantify the concentration of analytes. The results have shown that the presented system can detect dimethyl methyl phosphonate (a simulant for organophosphorus nerve gas) within 25 s, and the limit of detection is about 10 ppb. In all, our portable system has potential in field applications.

  9. A portable liquid crystal-based polarized light system for the detection of organophosphorus nerve gas

    NASA Astrophysics Data System (ADS)

    He, Feng Jie; Liu, Hui Long; Chen, Long Cong; Xiong, Xing Liang

    2018-03-01

    Liquid crystal (LC)-based sensors have the advantageous properties of being fast, sensitive, and label-free, and their results can be read directly with the naked eye alone. However, the inherent disadvantages of LC sensors, such as their heavy reliance on polarizing microscopes and the difficulty of quantification, have limited the possibility of field applications. Herein, we have addressed these issues by constructing a portable polarized detection system with constant temperature control. This system is mainly composed of four parts: the LC cell, the optics unit, the automatic temperature control unit, and the image processing unit. The LC cell was based on the ordering transitions of LCs in the presence of analytes. The optics unit, based on the imaging principle of LCs, was designed to substitute for the polarizing microscope for real-time observation. The image processing unit is expected to quantify the concentration of analytes. The results have shown that the presented system can detect dimethyl methyl phosphonate (a simulant for organophosphorus nerve gas) within 25 s, and the limit of detection is about 10 ppb. In all, our portable system has potential in field applications.

  10. A novel shape-changing haptic table-top display

    NASA Astrophysics Data System (ADS)

    Wang, Jiabin; Zhao, Lu; Liu, Yue; Wang, Yongtian; Cai, Yi

    2018-01-01

    A shape-changing table-top display with haptic feedback allows its users to perceive 3D visual and texture displays interactively. Since few existing devices have been developed as accurate displays with regulated haptic feedback, a novel attentive and immersive shape-changing mechanical interface (SCMI) consisting of an image processing unit and a transformation unit is proposed in this paper. In order to support a precise 3D table-top display with an offset of less than 2 mm, a custom-made mechanism was developed to form a precise surface and regulate the feedback force. The proposed image processing unit is capable of extracting texture data from a 2D picture for rendering the shape-changing surface and realizing 3D modeling. The preliminary evaluation results proved the feasibility of the proposed system.

  11. Improved Imaging With Laser-Induced Eddy Currents

    NASA Technical Reports Server (NTRS)

    Chern, Engmin J.

    1993-01-01

    System tests specimen of material nondestructively by laser-induced eddy-current imaging improved by changing method of processing of eddy-current signal. Changes in impedance of eddy-current coil measured in absolute instead of relative units.

  12. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization.

    PubMed

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
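
    In its simplest delay-and-sum form, the back-projection (BP) reconstruction being ported to the GPU sums each transducer's signal at the acoustic time of flight to every pixel. The sketch below is a naive CPU version of that idea with assumed array names, geometry, and speed of sound; it is not the optimized GPU kernel from the paper.

    ```python
    import numpy as np

    def delay_and_sum_pat(rf_data, sensor_xy, pixel_xy, fs, c=1540.0):
        """Naive delay-and-sum photoacoustic back-projection.

        rf_data:   (n_sensors, n_samples) received PA signals
        sensor_xy: (n_sensors, 2) sensor positions in metres
        pixel_xy:  (n_pixels, 2) image pixel positions in metres
        fs:        sampling rate in Hz; c: assumed speed of sound in m/s
        """
        n_sensors, n_samples = rf_data.shape
        image = np.zeros(len(pixel_xy))
        for s in range(n_sensors):
            # Time of flight from every pixel to this sensor, as a sample index.
            dist = np.linalg.norm(pixel_xy - sensor_xy[s], axis=1)
            idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
            image += rf_data[s, idx]
        return image   # reshape to (ny, nx) outside if pixel_xy was a flattened grid
    ```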

  13. Unified Digital Image Display And Processing System

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.

    1981-11-01

    Our institution, like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g., digital radiography, computerized tomography (CT), nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.

  14. Scalable software architecture for on-line multi-camera video processing

    NASA Astrophysics Data System (ADS)

    Camplani, Massimo; Salgado, Luis

    2011-03-01

    In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and the image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.

  15. Digital image processing using parallel computing based on CUDA technology

    NASA Astrophysics Data System (ADS)

    Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.

    2017-01-01

    This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that a noise removal algorithm should meet when applied to computed tomography projections. It provides a comparison of the performance with and without the GPU, as well as with different proportions of the work assigned to the CPU and GPU.

  16. Real-time digital holographic microscopy using the graphic processing unit.

    PubMed

    Shimobaba, Tomoyoshi; Sato, Yoshikuni; Miura, Junya; Takenouchi, Mai; Ito, Tomoyoshi

    2008-08-04

    Digital holographic microscopy (DHM) is a well-known powerful method allowing both the amplitude and phase of a specimen to be simultaneously observed. In order to obtain a reconstructed image from a hologram, numerous calculations for the Fresnel diffraction are required. The Fresnel diffraction can be accelerated by the FFT (Fast Fourier Transform) algorithm. However, real-time reconstruction from a hologram is difficult even if we use a recent central processing unit (CPU) to calculate the Fresnel diffraction by the FFT algorithm. In this paper, we describe a real-time DHM system using a graphic processing unit (GPU) with many stream processors, which allows its use as a highly parallel processor. The computational speed of the Fresnel diffraction using the GPU is faster than that of recent CPUs. The real-time DHM system can obtain reconstructed images from holograms of 512 × 512 grid points at 24 frames per second.
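
    The Fresnel diffraction in question is typically evaluated with FFTs; one common formulation is the transfer-function (convolution) implementation sketched below in NumPy. This is a generic CPU illustration with assumed wavelength, distance, and pixel pitch, not the GPU code of the paper.

    ```python
    import numpy as np

    def fresnel_propagate(field, wavelength, z, pixel_pitch):
        """Propagate a complex field by distance z using the Fresnel
        transfer-function (FFT pair) method.

        field:       2D complex array sampled at the hologram plane
        wavelength:  in metres; z: propagation distance in metres
        pixel_pitch: sample spacing in metres (assumed equal in x and y)
        """
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_pitch)
        fy = np.fft.fftfreq(ny, d=pixel_pitch)
        fxx, fyy = np.meshgrid(fx, fy)
        # Fresnel transfer function H(fx, fy) = exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2))
        k = 2.0 * np.pi / wavelength
        H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Hypothetical usage: reconstruct amplitude 10 cm from a 512 x 512 hologram.
    hologram = np.random.rand(512, 512).astype(np.complex128)  # placeholder data
    recon = np.abs(fresnel_propagate(hologram, 633e-9, 0.10, 10e-6))
    ```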

  17. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    NASA Astrophysics Data System (ADS)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
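
    As a reference for what is being parallelized, classical Laplacian sharpening is a convolution with a Laplacian kernel whose high-frequency response is added back to the original image; a plain SciPy version is sketched below. The 4-neighbour kernel and the strength factor are common choices, not necessarily those of the paper.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def laplacian_sharpen(gray, strength=1.0):
        """Sharpen a grayscale image with a 4-neighbour Laplacian kernel."""
        kernel = np.array([[0, -1, 0],
                           [-1, 4, -1],
                           [0, -1, 0]], dtype=np.float64)
        lap = convolve(gray.astype(np.float64), kernel, mode='nearest')
        sharpened = gray + strength * lap       # add high-frequency detail back
        return np.clip(sharpened, 0, 255).astype(np.uint8)
    ```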

  18. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. As the algorithms become more complex and their computational requirements increase, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascading and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
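
    For context, the non-blind Richardson-Lucy update at the heart of the method is compact enough to sketch in NumPy; the blind variant alternates this update between the image and PSF estimates. The FFT-based CPU sketch below is illustrative only and is not the FPGA/DSP implementation.

    ```python
    import numpy as np

    def _fftconv(a, b):
        """Circular convolution via FFT (same shape as the inputs)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    def richardson_lucy(blurred, psf, n_iter=20):
        """Non-blind Richardson-Lucy deconvolution.

        blurred: observed image; psf: point-spread function padded to the
        same shape and centred at index (0, 0); n_iter: iteration count.
        """
        estimate = np.full_like(blurred, blurred.mean(), dtype=np.float64)
        psf_flipped = np.roll(np.flip(psf), 1, axis=(0, 1))   # mirrored PSF (adjoint)
        for _ in range(n_iter):
            ratio = blurred / (_fftconv(estimate, psf) + 1e-12)
            estimate *= _fftconv(ratio, psf_flipped)
        return estimate
    ```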

  19. Diagnostic report acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Brooks, Everett G.; Rothman, Melvyn L.

    1991-07-01

    The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle on/off the report on the display screen. This report describes the process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.

  20. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on graphics processing unit for high performance scalable parallel model fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for the applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
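
    To make the per-pixel fitting task concrete, the sketch below fits a single-exponential decay model to one pixel's data with SciPy's Levenberg-Marquardt solver; GPU-LMFit performs essentially this kind of minimization for every pixel in parallel. The model, starting values, and synthetic data are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_pixel_decay(t, y):
        """Fit y ~ A*exp(-t/tau) + c for one pixel with Levenberg-Marquardt."""
        def residuals(p):
            amplitude, tau, offset = p
            return amplitude * np.exp(-t / tau) + offset - y
        p0 = np.array([y.max() - y.min(), t.mean(), y.min()])   # crude starting guess
        result = least_squares(residuals, p0, method='lm')
        return result.x                                         # (A, tau, c)

    # Hypothetical single-pixel example (e.g. a fluorescence lifetime decay).
    t = np.linspace(0, 10, 64)
    y = 5.0 * np.exp(-t / 2.0) + 0.5 + 0.05 * np.random.randn(t.size)
    A, tau, c = fit_pixel_decay(t, y)
    ```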

  1. Visualization and recommendation of large image collections toward effective sensemaking

    NASA Astrophysics Data System (ADS)

    Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis

    2016-03-01

    In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.

  2. Multispectral mapping of the lunar surface using groundbased telescopes

    NASA Technical Reports Server (NTRS)

    Mccord, T. B.; Pieters, C.; Feirberg, M. A.

    1976-01-01

    Images of the lunar surface were obtained at several wavelengths using a silicon vidicon imaging system and groundbased telescopes. These images were recorded and processed in digital form so that quantitative information is preserved. The photometric precision of the images is shown to be better than 1 percent. Ratio images calculated by dividing images obtained at two wavelengths (0.40/0.56 micrometer and 0.95/0.56 micrometer) are presented for about 50 percent of the lunar frontside. Spatial resolution is about 2 km at the sub-earth point. A complex of distinct units is evident in the images. Earlier work with the reflectance spectrum of lunar materials indicates that for the most part these units are compositionally distinct. Digital images of this precision are extremely useful to lunar geologists in disentangling the history of the lunar surface.

  3. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jalan Ganesha 10, Bandung, 40132

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andrea Basal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that simulations on the GPU were significantly accelerated compared to the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with a number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed with a statistical approach, the quality of the GPU and CPU images is essentially the same.

  4. Correlations between Cassini VIMS spectra and RADAR SAR images: Implications for Titan's surface composition and the character of the Huygens Probe Landing Site

    USGS Publications Warehouse

    Soderblom, L.A.; Kirk, R.L.; Lunine, J.I.; Anderson, J.A.; Baines, K.H.; Barnes, J.W.; Barrett, J.M.; Brown, R.H.; Buratti, B.J.; Clark, R.N.; Cruikshank, D.P.; Elachi, C.; Janssen, M.A.; Jaumann, R.; Karkoschka, E.; Le Mouélic, Stéphane; Lopes, R.M.; Lorenz, R.D.; McCord, T.B.; Nicholson, P.D.; Radebaugh, J.; Rizk, B.; Sotin, Christophe; Stofan, E.R.; Sucharski, T.L.; Tomasko, M.G.; Wall, S.D.

    2007-01-01

    Titan's vast equatorial fields of RADAR-dark longitudinal dunes seen in Cassini RADAR synthetic aperture images correlate with one of two dark surface units discriminated as "brown" and "blue" in Visible and Infrared Mapping Spectrometer (VIMS) color composites of short-wavelength infrared spectral cubes (RGB as 2.0, 1.6, 1.3 μm). In such composites bluer materials exhibit higher reflectance at 1.3 μm and lower at 1.6 and 2.0 μm. The dark brown unit is highly correlated with the RADAR-dark dunes. The dark brown unit shows less evidence of water ice suggesting that the saltating grains of the dunes are largely composed of hydrocarbons and/or nitriles. In general, the bright units also show less evidence of absorption due to water ice and are inferred to consist of deposits of bright fine precipitating tholin aerosol dust. Some set of chemical/mechanical processes may be converting the bright fine-grained aerosol deposits into the dark saltating hydrocarbon and/or nitrile grains. Alternatively the dark dune materials may be derived from a different type of air aerosol photochemical product than are the bright materials. In our model, both the bright aerosol and dark hydrocarbon dune deposits mantle the VIMS dark blue water ice-rich substrate. We postulate that the bright mantles are effectively invisible (transparent) in RADAR synthetic aperture radar (SAR) images leading to lack of correlation in the RADAR images with optically bright mantling units. RADAR images mostly show only dark dunes and the water ice substrate that varies in roughness, fracturing, and porosity. If the rate of deposition of bright aerosol is 0.001-0.01 μm/yr, the surface would be coated (to optical instruments) in hundreds-to-thousands of years unless cleansing processes are active. The dark dunes must be mobile on this very short timescale to prevent the accumulation of bright coatings. Huygens landed in a region of the VIMS bright and dark blue materials and about 30 km south of the nearest occurrence of dunes visible in the RADAR SAR images. Fluvial/pluvial processes, every few centuries or millennia, must be cleansing the dark floors of the incised channels and scouring the dark plains at the Huygens landing site both imaged by Descent Imager/Spectral Radiometer (DISR). © 2007 Elsevier Ltd. All rights reserved.

  5. Geomorphic Processes and Remote Sensing Signatures of Alluvial Fans in the Kun Lun Mountains, China

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.; Chadwick, Oliver A.

    1996-01-01

    The timing of alluvial deposition in arid and semiarid areas is tied to land-surface instability caused by regional climate changes. The distribution pattern of dated deposits provides maps of regional land-surface response to past climate change. Sensitivity to differences in surface roughness and composition makes remote sensing techniques useful for regional mapping of alluvial deposits. Radar images from the Spaceborne Radar Laboratory and visible wavelength images from the French SPOT satellite were used to determine remote sensing signatures of alluvial fan units for an area in the Kun Lun Mountains of northwestern China. These data were combined with field observations to compare surface processes and their effects on remote sensing signatures in northwestern China and the southwestern United States. Geomorphic processes affecting alluvial fans in the two areas include aeolian deposition, desert varnish, and fluvial dissection. However, salt weathering is a much more important process in the Kun Lun than in the southwestern United States. This slows the formation of desert varnish and prevents desert pavement from forming. Thus the Kun Lun signatures are characteristic of the dominance of salt weathering, while signatures from the southwestern United States are characteristic of the dominance of desert varnish and pavement processes. Remote sensing signatures are consistent enough in these two regions to be used for mapping fan units over large areas.

  6. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    PubMed

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si(46) clathrates developed for hydrogen storage.

  7. Design of an MR image processing module on an FPGA chip

    NASA Astrophysics Data System (ADS)

    Li, Limin; Wyrwicz, Alice M.

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.

  8. Design of an MR image processing module on an FPGA chip

    PubMed Central

    Li, Limin; Wyrwicz, Alice M.

    2015-01-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. PMID:25909646

  9. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    NASA Astrophysics Data System (ADS)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography and film-processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the film processors, because almost all processors fall outside the acceptable variation limits and can therefore affect mammography image quality and the dose to the breast. Only four mammography units meet the minimum score established by the ACR and FDA for the phantom image.

  10. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted focusing on pairwise comparisons between original and enhanced images. Final-enhanced images generally have the best diagnostic quality and give more details about the visibility of vessels and structures in capsule endoscopy images.
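
    One possible reading of the procedure described above is sketched in Python/NumPy below (an interpretation for illustration only, not the author's code): 2x2 group means with half-unit bilinear weights are formed, shifted copies of the group means are overlapped, and the preliminary result is averaged with the original channel.

        import numpy as np

        def half_unit_bilinear_enhance(channel):
            """One reading of the half-unit weighted bilinear enhancement for a single
            RGB component (2-D array): form 2x2 group averages, overlap shifted copies
            into a preliminary image, then average it with the original channel."""
            padded = np.pad(channel.astype(float), ((0, 1), (0, 1)), mode="edge")
            # Half-unit bilinear weights over a 2x2 neighbourhood reduce to a plain mean.
            group_mean = (padded[:-1, :-1] + padded[:-1, 1:] +
                          padded[1:, :-1] + padded[1:, 1:]) / 4.0
            # Overlap horizontally and vertically shifted copies of the group means.
            preliminary = (group_mean +
                           np.roll(group_mean, 1, axis=1) +
                           np.roll(group_mean, 1, axis=0) +
                           np.roll(group_mean, (1, 1), axis=(0, 1))) / 4.0
            enhanced = (channel.astype(float) + preliminary) / 2.0
            return np.clip(enhanced, 0, 255)

        # Apply independently to the R, G and B components of a capsule-endoscopy frame.
        rgb = np.random.randint(0, 256, (256, 256, 3))
        out = np.stack([half_unit_bilinear_enhance(rgb[..., c]) for c in range(3)], axis=-1)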

  11. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
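
    The patent's bitstream layout is not given here, but the flavor of the work that gets parallelized can be shown with a serial Python sketch of one common Rice/Golomb variant (unary quotient terminated by a 0 bit, followed by k remainder bits); in the patented scheme many such decoders would run concurrently on the GPU, one per compressed packet:

        def rice_decode(bits, k, count):
            """Decode `count` non-negative integers from an iterable of bits (0/1)
            encoded with Rice parameter k: unary quotient (1s ended by a 0),
            followed by a k-bit remainder, most significant bit first."""
            bits = iter(bits)
            values = []
            for _ in range(count):
                q = 0
                while next(bits) == 1:          # unary quotient
                    q += 1
                r = 0
                for _ in range(k):              # k-bit remainder
                    r = (r << 1) | next(bits)
                values.append((q << k) | r)
            return values

        # Example with k = 2: the value 3 encodes as "0 11" and 9 as "110 01".
        print(rice_decode([0, 1, 1, 1, 1, 0, 0, 1], k=2, count=2))   # [3, 9]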

  12. Image processing applications: From particle physics to society

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and a full-custom associative memory chip. The PU has been developed for real-time tracking in particle physics experiments, but offers flexible features for potential application in a wide range of fields. It has been proposed for accelerating pattern matching in Magnetic Resonance Fingerprinting (biomedical applications), for real-time detection of space debris trails in astronomical images (space applications) and for brain emulation in image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.

  13. System Integration of FastSPECT III, a Dedicated SPECT Rodent-Brain Imager Based on BazookaSPECT Detector Technology

    PubMed Central

    Miller, Brian W.; Furenlid, Lars R.; Moore, Stephen K.; Barber, H. Bradford; Nagarkar, Vivek V.; Barrett, Harrison H.

    2010-01-01

    FastSPECT III is a stationary, single-photon emission computed tomography (SPECT) imager designed specifically for imaging and studying neurological pathologies in rodent brain, including Alzheimer's and Parkinson's disease. Twenty independent BazookaSPECT [1] gamma-ray detectors acquire projections of a spherical field of view with pinholes selected for desired resolution and sensitivity. Each BazookaSPECT detector comprises a columnar CsI(Tl) scintillator, image intensifier, optical lens, and fast-frame-rate CCD camera. Data stream back to processing computers via FireWire interfaces, and heavy use of graphics processing units (GPUs) ensures that each frame of data is processed in real time to extract the images of individual gamma-ray events. Details of the system design, imaging aperture fabrication methods, and preliminary projection images are presented. PMID:21218137

  14. Practical Implementation of Prestack Kirchhoff Time Migration on a General Purpose Graphics Processing Unit

    NASA Astrophysics Data System (ADS)

    Liu, Guofeng; Li, Chun

    2016-08-01

    In this study, we present a practical implementation of prestack Kirchhoff time migration (PSTM) on a general-purpose graphics processing unit. First, we consider the three main optimizations of the PSTM GPU code, i.e., designing a configuration based on a reasonable execution, using the texture memory for velocity interpolation, and the application of an intrinsic function in device code. This approach can achieve a speedup of nearly 45 times on an NVIDIA GTX 680 GPU compared with CPU code when a larger imaging space is used, where the PSTM output is a common reflection point gather stored as I[nx][ny][nh][nt] in matrix format. However, this method requires more memory, so a limited imaging space cannot fully exploit the GPU resources. To overcome this problem, we designed a PSTM scheme with multiple GPUs that images different seismic data on different GPUs according to an offset value. This scheme achieves the peak speedup of the GPU PSTM code and greatly increases the efficiency of the calculations without changing the imaging result.

  15. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  16. Determining the Molecular Growth Mechanisms of Protein Crystal Faces by Atomic Force Microscopy

    NASA Technical Reports Server (NTRS)

    Nadarajah, Arunan; Li, Huayu; Pusey, Marc L.

    1999-01-01

    A high resolution atomic force microscopy (AFM) study had shown that the molecular packing on the tetragonal lysozyme (110) face corresponded to only one of two possible packing arrangements, suggesting that growth layers on this face were of bimolecular height. Theoretical analyses of the packing also indicated that growth of this face should proceed by the addition of growth units of at least tetramer size, corresponding to the 4₃ helices in the crystal. In this study an AFM linescan technique was devised to measure the dimensions of individual growth units on protein crystal faces as they were being incorporated into the lattice. Images of individual growth events on the (110) face of tetragonal lysozyme crystals were observed, shown by jump discontinuities in the growth step in the linescan images. The growth unit dimension in the scanned direction was obtained from these images. A large number of scans in two directions on the (110) face were performed and the distribution of lysozyme growth unit sizes was obtained. A variety of unit sizes corresponding to 4₃ helices were shown to participate in the growth process, with the 4₃ tetramer being the minimum observed size. This technique represents a new application for AFM, allowing time resolved studies of molecular processes to be carried out.

  17. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    PubMed

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
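
    The per-pixel statistic behind svOCT is straightforward; a CPU-side Python/NumPy sketch is shown below (the frame-stack shape and names are assumptions), and a GPU implementation parallelizes the same computation across the B-scan:

        import numpy as np

        def speckle_variance(frames):
            """Inter-frame speckle variance for a stack of N B-scans acquired at the
            same location; `frames` has shape (N, depth, width)."""
            return frames.var(axis=0)   # per-pixel variance across repeated frames

        # Synthetic example: 4 repeated B-scans of 512 (depth) x 1000 (lateral) pixels.
        frames = np.random.rand(4, 512, 1000).astype(np.float32)
        sv = speckle_variance(frames)   # high values indicate flow (decorrelating speckle)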

  18. Determining the Molecular Growth Mechanisms of Protein Crystal faces by Atomic Force Microscopy

    NASA Technical Reports Server (NTRS)

    Li, Huayu; Nadarajah, Arunan; Pusey, Marc L.

    1998-01-01

    A high resolution atomic force microscopy (AFM) study had shown that the molecular packing on the tetragonal lysozyme (110) face corresponded to only one of two possible packing arrangements, suggesting that growth layers on this face were of bimolecular height (Li et al., 1998). Theoretical analyses of the packing had also indicated that growth of this face should proceed by the addition of growth units of at least tetramer size, corresponding to the 4₃ helices in the crystal. In this study an AFM linescan technique was devised to measure the dimensions of individual growth units on protein crystal faces. The growth process of tetragonal lysozyme crystals was slowed down by employing very low supersaturations. As a result, images of individual growth events on the (110) face were observed, shown by jump discontinuities in the growth step in the linescan images. The growth unit dimension in the scanned direction was obtained by suitably averaging these images. A large number of scans in two directions on the (110) face were performed and the distribution of lysozyme aggregate sizes was obtained. A variety of growth units, all of which were 4₃ helical lysozyme aggregates, were shown to participate in the growth process, with a 4₃ tetramer being the minimum observed size. This technique represents a new application for AFM, allowing time resolved studies of molecular processes to be carried out.

  19. GPU-based prompt gamma ray imaging from boron neutron capture therapy.

    PubMed

    Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae

    2015-01-01

    The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
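
    The abstract does not reproduce the modified OSEM algorithm itself; as a rough illustration of the iteration being accelerated, a plain ordered-subset EM update with a dense system matrix can be written in Python/NumPy as follows (the matrix A, projections y and subset partition are toy assumptions):

        import numpy as np

        def osem(A, y, subsets, n_iter=10):
            """Ordered-subset EM reconstruction with a dense system matrix A (rays x voxels),
            measured projections y (rays,) and `subsets` given as a list of row-index arrays."""
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                for rows in subsets:
                    As = A[rows]
                    sens = As.sum(axis=0)                        # subset sensitivity image
                    ratio = y[rows] / np.maximum(As @ x, 1e-12)  # measured / forward-projected
                    x *= (As.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        # Toy example: 60 rays, 100 voxels, 4 subsets of rays.
        rng = np.random.default_rng(0)
        A = rng.random((60, 100))
        y = A @ rng.random(100)
        x_rec = osem(A, y, np.array_split(np.arange(60), 4))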

  20. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
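
    The NASA program itself is not listed here; the Python sketch below only illustrates the stated strategy of segmenting the scene into subimages and assigning each to its own processor, using the multiprocessing module in place of flight software and a deliberately crude single-pixel block-matching routine as a stand-in for the real correlator:

        import numpy as np
        from multiprocessing import Pool

        def disparity_strip(args):
            """Very simple block matching for one horizontal strip of a rectified stereo pair."""
            left, right, max_disp = args
            h, w = left.shape
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(h):
                for x in range(max_disp, w):
                    costs = [abs(int(left[y, x]) - int(right[y, x - d])) for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))
            return disp

        def parallel_disparity(left, right, n_workers=4, max_disp=16):
            """Split the stereo pair into horizontal strips and correlate them in parallel."""
            strips = np.array_split(np.arange(left.shape[0]), n_workers)
            jobs = [(left[s], right[s], max_disp) for s in strips]
            with Pool(n_workers) as pool:
                return np.vstack(pool.map(disparity_strip, jobs))

        if __name__ == "__main__":
            l = np.random.randint(0, 256, (128, 160), dtype=np.uint8)
            r = np.roll(l, -5, axis=1)          # synthetic 5-pixel horizontal shift
            d = parallel_disparity(l, r)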

  1. GPU computing in medical physics: a review.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-05-01

    The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.

  2. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
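
    A CPU-side Python/NumPy sketch of the two steps described above (shift-and-average reconstruction from a lookup table of shifts, then variance-based focus classification) is given below; the array shapes, shift table and variance threshold are illustrative assumptions, and a GPU version performs the same work for many depth planes in parallel:

        import numpy as np

        def reconstruct_depth(elemental, shifts):
            """Back-project a K x K grid of elemental images at one depth plane.
            elemental: array (K, K, h, w); shifts: integer array (K, K, 2) holding the
            (dy, dx) shift of each elemental image for this depth (the lookup-table values)."""
            K = elemental.shape[0]
            stack = []
            for i in range(K):
                for j in range(K):
                    dy, dx = shifts[i, j]
                    stack.append(np.roll(elemental[i, j], (dy, dx), axis=(0, 1)))
            stack = np.stack(stack)              # samples of each 3D point from all views
            return stack.mean(axis=0), stack.var(axis=0)   # depth image, inter-view variance

        def remove_off_focus(depth_image, variance, threshold):
            """Zero out pixels whose inter-view variance marks them as off-focus points."""
            return np.where(variance <= threshold, depth_image, 0.0)

        # Toy example: a 4 x 4 grid of 64 x 64 elemental images with all-zero shifts.
        elemental = np.random.rand(4, 4, 64, 64)
        shifts = np.zeros((4, 4, 2), dtype=int)
        depth_img, var = reconstruct_depth(elemental, shifts)
        focused = remove_off_focus(depth_img, var, threshold=0.05)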

  3. Tools for a Document Image Utility.

    ERIC Educational Resources Information Center

    Krishnamoorthy, M.; And Others

    1993-01-01

    Describes a project conducted at Rensselaer Polytechnic Institute (New York) that developed methods for automatically subdividing pages from technical journals into smaller semantic units for transmission, display, and further processing in an electronic environment. Topics discussed include optical scanning and image compression, digital image…

  4. TU-F-CAMPUS-J-04: Evaluation of Metal Artifact Reduction Technique for the Radiation Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, K; Kuo, H; Ritter, J

    Purpose: To evaluate the feasibility of using a metal artifact reduction technique to deplete metal artifacts and its application in improving dose calculation in external radiation therapy planning. Methods: A CIRS electron density phantom was scanned with and without steel drill bits placed in some plug holes. Metal artifact reduction software based on the Metal Deletion Technique (MDT) was used to remove metal artifacts from the scanned image with metal. Hounsfield units of the electron density plugs from the artifact-free reference image and the MDT-processed images were compared. To test the dose calculation improvement with the MDT-processed images, a clinically approved head and neck plan with manual dental artifact correction was tested. Patient images were exported and processed with MDT, and the plan was recalculated on the new MDT image without manual correction. Dose profiles near the metal artifacts were compared. Results: The MDT used in this study effectively reduced the metal artifacts caused by beam hardening and scatter. The windmill artifact around the metal drill was greatly improved, with a smooth, rounded appearance. The difference in mean HU between the reference and MDT images was less than 10 HU in most of the plugs. Dose differences between the original plan and the MDT images were minimal. Conclusion: Most metal artifact reduction methods were developed for diagnostic improvement purposes, so Hounsfield unit accuracy had not been rigorously tested before. In our test, MDT effectively eliminated metal artifacts with good HU reproducibility. However, it can introduce new mild artifacts, so the MDT images should be checked against the original images.

  5. A survey of GPU-based medical image computing techniques

    PubMed Central

    Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming

    2012-01-01

    Medical imaging currently plays a crucial role throughout the entire clinical applications from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets to process in practical clinical applications. With the rapidly enhancing performances of graphics processors, improved programming support, and excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical image applications. The major purpose of this survey is to provide a comprehensive reference source for the starters or researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely, segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080

  6. A History of the Chemical Innovations in Silver-Halide Materials for Color Photography III. Dye Transfer Process — Instant Color Photography

    NASA Astrophysics Data System (ADS)

    Oishi, Yasushi

    A historical review of the technological developments of the instant color photographic process is presented, with emphasis on the innovation processes at the following main turning points: 1) the creation of instant photography by E. H. Land in 1948 (one-step processing by transfer of image-forming materials), 2) the advent of instant color photography based on dye developer, by Polaroid Corp., in 1963 (departing from dye-forming development, forming a direct positive preformed-dye image with a negative emulsion, but constraining the sensitive-material designs), 3) the introduction of a color instant product containing a redox dye releaser with an improved auto-positive emulsion, by Eastman Kodak Co., in 1976 (producing much improved color image quality, freed from the design constraints), and 4) the realization of absolute one-step photography by the integral film-unit system, by Polaroid in 1972. The patent litigation (1976-86) brought by Polaroid against Kodak for allegedly infringing the integral film-unit patents had a vast impact on the industry.

  7. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers the spectral information of the reflectance at that location. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
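
    A compact Python analogue of the three stages, using scikit-learn and SciPy rather than RVC-CAL, is sketched below; the cube dimensions, integer training labels and neighbourhood size are assumptions made purely for illustration:

        import numpy as np
        from scipy.spatial import cKDTree
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC

        def spatial_spectral_classify(cube, train_idx, train_labels, k=8, alpha=1.0):
            """cube: (rows, cols, bands) hyperspectral image; train_idx: flat pixel indices
            with known integer labels. Returns a KNN-refined label map."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands)
            one_band = PCA(n_components=1).fit_transform(X).ravel()                    # stage 1: PCA
            svm_labels = SVC(kernel="rbf").fit(X[train_idx], train_labels).predict(X)  # stage 2: SVM
            # Stage 3: KNN filtering in a joint (spatial coordinates, one-band value) space.
            yy, xx = np.mgrid[0:rows, 0:cols]
            feats = np.column_stack([yy.ravel(), xx.ravel(), alpha * one_band])
            _, nn = cKDTree(feats).query(feats, k=k + 1)      # k neighbours plus the pixel itself
            refined = np.array([np.bincount(svm_labels[idx[1:]]).argmax() for idx in nn])
            return refined.reshape(rows, cols)

        # Toy usage: a 40 x 40 x 50 cube with two classes.
        rng = np.random.default_rng(0)
        cube = rng.random((40, 40, 50))
        train_idx = rng.choice(40 * 40, size=100, replace=False)
        train_labels = rng.integers(0, 2, size=100)
        label_map = spatial_spectral_classify(cube, train_idx, train_labels)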

  8. Radiologists' preferences for digital mammographic display. The International Digital Mammography Development Group.

    PubMed

    Pisano, E D; Cole, E B; Major, S; Zong, S; Hemminger, B M; Muller, K E; Johnston, R E; Walsh, R; Conant, E; Fajardo, L L; Feig, S A; Nishikawa, R M; Yaffe, M J; Williams, M B; Aylward, S R

    2000-09-01

    To determine the preferences of radiologists among eight different image processing algorithms applied to digital mammograms obtained for screening and diagnostic imaging tasks. Twenty-eight images representing histologically proved masses or calcifications were obtained by using three clinically available digital mammographic units. Images were processed and printed on film by using manual intensity windowing, histogram-based intensity windowing, mixture model intensity windowing, peripheral equalization, multiscale image contrast amplification (MUSICA), contrast-limited adaptive histogram equalization, Trex processing, and unsharp masking. Twelve radiologists compared the processed digital images with screen-film mammograms obtained in the same patient for breast cancer screening and breast lesion diagnosis. For the screening task, screen-film mammograms were preferred to all digital presentations, although the acceptability of images processed with the Trex and MUSICA algorithms was not significantly different. All printed digital images were preferred to screen-film radiographs in the diagnosis of masses; mammograms processed with unsharp masking were significantly preferred. For the diagnosis of calcifications, no processed digital mammogram was preferred to screen-film mammograms. When digital mammograms were preferred to screen-film mammograms, radiologists selected different digital processing algorithms for each of the three mammographic reading tasks and for different lesion types. Soft-copy display will eventually allow radiologists to select among these options more easily.

  9. Martian Surface Compositions and Spectral Unit Mapping From the Thermal Emission Imaging System

    NASA Astrophysics Data System (ADS)

    Bandfield, J. L.; Christensen, P. R.; Rogers, D.

    2005-12-01

    The Thermal Emission Imaging System (THEMIS) on board the Mars Odyssey spacecraft observes Mars in nine spectral intervals between 6 and 15 microns at 100 meter spatial sampling. This spectral and spatial resolution allows for mapping of local spectral units and coarse compositional determination of a variety of rock-forming materials such as carbonates, sulfates, and silicates. A number of data processing and atmospheric correction techniques have been developed to ease and speed the interpretation of multispectral THEMIS infrared images. These products and techniques are in the process of being made publicly available via the THEMIS website and were used to produce the results presented here. Spectral variability at kilometer scales in THEMIS data is more common in the southern highlands than in the northern lowlands. Many of the spectral units are associated with a mobile surface layer such as dune fields and mantled dust. However, a number of spectral units appear to be directly tied to the local geologic rock units. These spectral units are commonly associated with crater walls, floors, and ejecta blankets. Other surface compositions are correlated with layered volcanic materials and knobby remnant terrains. Most of the spectral variability observed to date appears to be tied to variation in silicate mineralogy. Olivine-rich units previously reported in Nili Fossae, Ares Valles, and the Valles Marineris region appear to be sparse but common in a number of regions in the southern highlands. Variations in silica content consistent with previously reported global surface units also appear to be present in THEMIS images, allowing for an examination of their local geologic context. Quartz- and feldspar-rich exposures in northern Syrtis Major appear more extensive than previously reported. A coherent global and local picture of the mineralogy of the Martian surface is emerging from THEMIS measurements along with other orbital thermal and near-infrared spectroscopy measurements from the Mars Express and Mars Global Surveyor spacecraft.

  10. Fast optically sectioned fluorescence HiLo endomicroscopy.

    PubMed

    Ford, Tim N; Lim, Daryl; Mertz, Jerome

    2012-02-01

    We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.
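
    A much-simplified Python/NumPy sketch of the HiLo fusion step is given below (the cutoff, weighting and image pair are assumptions, and the contrast-weighting refinements of the published method are omitted): low spatial frequencies are taken from the uniform/structured difference image, in which out-of-focus background largely cancels, and high frequencies from the uniform-illumination image.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hilo(uniform, structured, sigma=8.0, eta=1.0):
            """Fuse uniform- and structured-illumination images into a sectioned image."""
            lo = gaussian_filter(np.abs(uniform - structured), sigma)   # low-pass of difference image
            hi = uniform - gaussian_filter(uniform, sigma)              # high-pass of uniform image
            return eta * lo + hi

        uniform = np.random.rand(256, 256)
        structured = uniform * (0.5 + 0.5 * np.random.rand(256, 256))
        sectioned = hilo(uniform, structured)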

  11. Fast optically sectioned fluorescence HiLo endomicroscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Lim, Daryl; Mertz, Jerome

    2012-02-01

    We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.

  12. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images, and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required to complete the full algorithm on the CPU and GPU were benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU is performing the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
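
    FT-based recovery of a translation amounts to phase correlation; a CPU reference in Python/NumPy (integer-pixel only, without the image enlargement used for sub-pixel resolution) is:

        import numpy as np

        def phase_correlation_shift(a, b):
            """Estimate the integer (dy, dx) translation of image a relative to image b
            from the peak of the inverse-transformed, normalized cross-power spectrum."""
            Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
            cross = Fa * np.conj(Fb)
            corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12))
            dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
            # Map peak coordinates to signed shifts.
            if dy > a.shape[0] // 2:
                dy -= a.shape[0]
            if dx > a.shape[1] // 2:
                dx -= a.shape[1]
            return dy, dx

        img = np.random.rand(256, 256)
        shifted = np.roll(img, (7, -3), axis=(0, 1))
        print(phase_correlation_shift(shifted, img))   # (7, -3)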

  13. Geomorphology and Geology of the Southwestern Margaritifer Sinus and Argyre Regions of Mars. Part 1: Geological and Geomorphological Overview

    NASA Technical Reports Server (NTRS)

    Parker, T. J.; Pieri, D. C.

    1985-01-01

    Based upon Viking Orbiter 1 images of the southwestern portion of the Margaritifer Sinus Quadrangle, the northwestern portion of the Argyre Quadrangle, and a small portion of the southeastern Coprates Quadrangle, three major mountainous or plateau units, seven plains units, and six units related to valley-forming processes were identified. The photomosaic is oriented such that it provides good areal coverage of the upper Chryse Trough from Argyre Planitia to just above Margaritifer Chaos as well as of plains units on either side of the Trough. The photomosaic was compiled from Viking Orbiter 1 images ranging in resolution from approximately 150 to 300 meters per pixel, printed at a scale of about 1:2,000,000. The characteristics of each geomorphic unit are outlined.

  14. GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing

    NASA Astrophysics Data System (ADS)

    Johl, John T.; Baker, Nick C.

    1988-10-01

    The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.

  15. Design of an MR image processing module on an FPGA chip.

    PubMed

    Li, Limin; Wyrwicz, Alice M

    2015-06-01

    We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. These are general-purpose parallel processors with support for a variety of programming interfaces, including industry standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant improvements in speed for imagery orthorectification, atmospheric correction, target detection and image transformations like Independent Component Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x-100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Testing performed by Exelis VIS shows that orthorectification can take as long as two hours for a WorldView-1 35,000 x 35,000 pixel image. With GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can be used successfully by first responders and by scientists making rapid discoveries with near real-time data, and it provides an operational component to data centers needing to quickly process and disseminate data.

  17. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    PubMed

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions - such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing - were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  18. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    PubMed

    Liang, Yicheng; Peng, Hao

    2015-02-07

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.

  19. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphics processing unit (GPU) technology in conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd-order spatially varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of its heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
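
    The pipeline's source is not reproduced here; the Python/NumPy sketch below conveys the core OIS idea with a constant (non-spatially-varying) delta-function-basis kernel: solve a linear least-squares problem for the convolution kernel that best maps the reference image onto the science image, then subtract.

        import numpy as np

        def ois_subtract(ref, sci, half=2):
            """Fit a (2*half+1)^2 delta-function-basis kernel K minimizing ||ref * K - sci||
            by linear least squares and return the difference image sci - ref * K.
            (Constant kernel only; the cited method lets K vary spatially to 2nd order.)"""
            size = 2 * half + 1
            # Each kernel pixel corresponds to a shifted copy of the reference image.
            cols = [np.roll(ref, (dy, dx), axis=(0, 1)).ravel()
                    for dy in range(-half, half + 1) for dx in range(-half, half + 1)]
            A = np.stack(cols, axis=1)                     # (npix, size*size) design matrix
            k, *_ = np.linalg.lstsq(A, sci.ravel(), rcond=None)
            model = (A @ k).reshape(ref.shape)             # reference convolved with fitted kernel
            return sci - model, k.reshape(size, size)

        rng = np.random.default_rng(1)
        ref = rng.random((64, 64))
        sci = 1.3 * np.roll(ref, (0, 1), axis=(0, 1)) + 0.01 * rng.random((64, 64))
        diff, kernel = ois_subtract(ref, sci)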

  1. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for the training. Networks with more than 6 convolutional layers and convolutional kernels larger than 5×5 improved image quality; however, they did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with more than 3 convolutions each and rCNNs with more than 12 convolutions combined with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
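
    The exact rCNN/rCAE configurations are not given in the abstract; as a generic illustration of a residual denoising network of this kind, a small PyTorch model (layer count, channel width and kernel size chosen arbitrarily) might look as follows:

        import torch
        import torch.nn as nn

        class ResidualDenoiser(nn.Module):
            """Small residual CNN: the network predicts a correction that is added back to
            the noisy input, in the spirit of the rCNN described above."""
            def __init__(self, channels=32, n_layers=6, kernel=3):
                super().__init__()
                pad = kernel // 2
                layers = [nn.Conv2d(1, channels, kernel, padding=pad), nn.ReLU(inplace=True)]
                for _ in range(n_layers - 2):
                    layers += [nn.Conv2d(channels, channels, kernel, padding=pad),
                               nn.ReLU(inplace=True)]
                layers += [nn.Conv2d(channels, 1, kernel, padding=pad)]
                self.body = nn.Sequential(*layers)

            def forward(self, x):
                return x + self.body(x)      # residual connection

        # One training step toward CLAHE-style targets (tensors here are random placeholders).
        model = ResidualDenoiser()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        noisy = torch.rand(8, 1, 128, 128)    # batch of noisy fluoroscopy patches
        target = torch.rand(8, 1, 128, 128)   # contrast-enhanced reference patches
        loss = nn.functional.mse_loss(model(noisy), target)
        loss.backward()
        optimizer.step()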

  2. Techno-Economic analysis of solar photovoltaic power plant for small scale fish processing in Kota Langsa - a case study

    NASA Astrophysics Data System (ADS)

    Widodo, S. B.; Hamdani; Rizal, T. A.; Pambudi, N. A.

    2018-02-01

    In Langsa, fisheries are the leading sector, with capture fisheries supplying about 6,050 tons per year and fish aquaculture about 1,200 tons per year on average. Fish processing covers both catches and aquaculture, and the facilities on which it takes place are divided into an ice factory unit, a gutting and cutting unit, a drying unit and a curing unit. However, energy and electricity costs during production have become a major constraint on increasing the fishermen's production and income. In this study, the potential and cost-effectiveness of a photovoltaic solar power plant to meet the energy demands of the fish processing units have been analysed. The energy requirement of the fish processing units is estimated at 130 kW, while the proposed solar photovoltaic plant is designed for 200 kW over an area of 0.75 hectares. Given the closeness between the location of the processing units and the fish supply auctions, the analysis assumes that the photovoltaic plant (OTR) is installed on the roof of the building, as compared to a solar power plant (OTL) installed outside the location. The results show that the levelized cost of the OTR installation is IDR 1,115 per kWh, assuming a 25-year plant lifespan and a 10% discount rate, with a simple payback period of 13.2 years. The OTL levelized cost of energy, on the other hand, is IDR 997.5 per kWh with a simple payback period of 9.6 years.
    Blood is an essential component of living creatures in the vascular space. Possible diseases can be identified through a blood test, for example from the shape of the red blood cells. The normal or abnormal morphology of a patient's red blood cells is very helpful to doctors in detecting a disease. Advances in digital image processing technology can be used to identify normal and abnormal blood cells of a patient. This research used the self-organizing map neural network method to classify normal and abnormal red blood cells in digital images, achieving a testing accuracy of 93.78%.

  3. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.

  4. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing.

    PubMed

    Koprowski, Robert

    2014-07-04

    Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200'000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (7) measuring the anterior eye chamber - there is an error of 20%; (8) measuring the tooth enamel thickness - error of 15%; (9) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of acquisition of images on the problems arising in their analysis has been shown on selected examples. It has also been indicated to which elements of image analysis and processing special attention should be paid in their design.

  5. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a multidimensional data set rich in information, consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source version of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.

  6. GPU-based prompt gamma ray imaging from boron neutron capture therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.

  7. Matrix Sublimation/Recrystallization for Imaging Proteins by Mass Spectrometry at High Spatial Resolution

    PubMed Central

    Yang, Junhai; Caprioli, Richard M.

    2011-01-01

    We have employed matrix deposition by sublimation for protein image analysis on tissue sections using a hydration/recrystallization process that produces high quality MALDI mass spectra and high spatial resolution ion images. We systematically investigated different washing protocols, the effect of tissue section thickness, the amount of sublimated matrix per unit area and different recrystallization conditions. The results show that an organic solvent rinse followed by ethanol/water rinses substantially increased sensitivity for the detection of proteins. Both the thickness of tissue section and amount of sinapinic acid sublimated per unit area have optimal ranges for maximal protein signal intensity. Ion images of mouse and rat brain sections at 50, 20 and 10 µm spatial resolution are presented and are correlated with H&E stained optical images. For targeted analysis, histology directed imaging can be performed using this protocol where MS analysis and H&E staining are performed on the same section. PMID:21639088

  8. Leading Marines in a Digital World

    DTIC Science & Technology

    2013-03-01

    [Search-result snippet: table-of-contents and acronym-list fragments, including fMRI (Functional Magnetic Resonance Imaging), LMX (Leader-Member Exchange), MCPP (Marine Corps Planning Process), MRI (Magnetic Resonance Imaging), NCO (Non-Commissioned Officer), OCS (Officer Candidate School), and PTSD (Post-traumatic Stress Disorder).]

  9. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field data set. The presented approach improves on the conventional inversion approach in differentiating geologic units, provided that the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with the geologic units interpreted from the borehole information.
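
    A minimal numpy sketch of the fuzzy c-means step (illustrative only, not the authors' inversion code; the log-resistivity data, cluster count and fuzziness exponent below are hypothetical) is:

        import numpy as np

        def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), n_clusters))
            U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
            for _ in range(n_iter):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = d ** (-2.0 / (m - 1.0))              # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # e.g. group the log-resistivities of 500 model cells into 3 geologic units
        rho = np.random.lognormal(mean=3.0, sigma=1.0, size=(500, 1))
        centers, memberships = fuzzy_c_means(np.log10(rho), n_clusters=3)
        hardest = memberships.argmax(axis=1)             # cell -> unit with highest membership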

  10. Recursive search method for the image elements of functionally defined surfaces

    NASA Astrophysics Data System (ADS)

    Vyatkin, S. I.

    2017-05-01

    This paper touches upon the synthesis of high-quality images in real time and the technique for specifying three-dimensional objects on the basis of perturbation functions. The recursive search method for the image elements of functionally defined objects with the use of graphics processing units is proposed. The advantages of such an approach over the frame-buffer visualization method are shown.

  11. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    PubMed

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
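
    The out-of-core idea can be sketched in plain Python/numpy (a simplified CPU stand-in for the GPU streaming described above; the file name and array sizes are made up): the cube is memory-mapped and processed in chunks so only one chunk is ever resident in RAM.

        import numpy as np

        n_pixels, n_bands, chunk = 200_000, 64, 20_000
        path = "toy_cube_float32.dat"                    # hypothetical raw float32 file

        # create a small zero-filled file once so the example is self-contained
        np.memmap(path, dtype=np.float32, mode="w+", shape=(n_pixels, n_bands)).flush()

        cube = np.memmap(path, dtype=np.float32, mode="r", shape=(n_pixels, n_bands))
        band_sums = np.zeros(n_bands, dtype=np.float64)
        for start in range(0, n_pixels, chunk):
            block = np.asarray(cube[start:start + chunk])   # stream one chunk into memory
            band_sums += block.sum(axis=0)                  # accumulate a per-band statistic
        mean_spectrum = band_sums / n_pixels                # mean spectrum of the whole image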

  12. Autonomous caregiver following robotic wheelchair

    NASA Astrophysics Data System (ADS)

    Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary

    2011-12-01

    In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore, we have to consider not only autonomous functions and user interfaces but also how to reduce the caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves with a caregiver side by side, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci Code Processor). The captured images are processed using image-processing techniques, converted into voltage levels through a MAX232 level converter, and given serially to the microcontroller unit, while an ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to find obstacles, and in manual mode the keypad is used to operate the wheelchair. In the microcontroller unit, C-language code is predefined; according to this code, the robot connected to it is controlled. The robot's several motors are activated by the motor drivers, which are switches that turn the motors on and off according to the control signals given by the microcontroller unit.

  13. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast, patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions of 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.

  14. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage-phosphor computed radiography (CR) systems. Each image was processed in two ways. The images were first processed with default image-processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single control image presented with window/level adjustments enabled), and the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing detection capability as good as or better than the baseline scenario.
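
    A minimal Python/numpy sketch of the multi-frequency idea (an illustration only, not the authors' algorithm; the band sigmas, gains and soft-limiting constant are invented parameters) decomposes the image into Gaussian difference bands and boosts them nonlinearly before recombination:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def enhance(img, sigmas=(1, 2, 4, 8), gains=(1.8, 1.5, 1.2, 1.0), limit=50.0):
            img = img.astype(np.float64)
            blurred = [gaussian_filter(img, s) for s in sigmas]
            out = blurred[-1].copy()                          # lowest-frequency base layer
            prev = img
            for b, gain in zip(blurred, gains):
                band = prev - b                               # one spatial-frequency band
                out += gain * np.tanh(band / limit) * limit   # nonlinear boosting of the band
                prev = b
            return out

        chest = np.random.rand(512, 512) * 4095               # toy 12-bit CR chest image
        enhanced = enhance(chest)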

  15. TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Suh, T; Yoon, D

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU).more » Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.« less

  16. Quantitative analysis of phosphoinositide 3-kinase (PI3K) signaling using live-cell total internal reflection fluorescence (TIRF) microscopy.

    PubMed

    Johnson, Heath E; Haugh, Jason M

    2013-12-02

    This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.

  17. Fast Laser Holographic Interferometry For Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Lee, George

    1989-01-01

    Proposed system makes holographic interferograms quickly in wind tunnels. Holograms reveal two-dimensional flows around airfoils and provide information on distributions of pressure, structures of wake and boundary layers, and density contours of flow fields. Holograms form quickly in thermoplastic plates in wind tunnel. Plates rigid and left in place so neither vibrations nor photographic-development process degrades accuracy of holograms. System processes and analyzes images quickly. Semiautomatic, microcomputer-based desktop image-processing unit now undergoing development moves easily to wind tunnel, and its speed and memory are adequate for flows about airfoils.

  18. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows extracting the useful information in the scene directly. For example, an edge detection step followed by a local maxima extraction will facilitate high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm2. Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.

  19. In vivo imaging of the neurovascular unit in CNS disease

    PubMed Central

    Merlini, Mario; Davalos, Dimitrios; Akassoglou, Katerina

    2014-01-01

    The neurovascular unit—comprised of glia, pericytes, neurons and cerebrovasculature—is a dynamic interface that ensures physiological central nervous system (CNS) functioning. In disease, dynamic remodeling of the neurovascular interface triggers a cascade of responses that determine the extent of CNS degeneration and repair. The dynamics of these processes can be adequately captured by imaging in vivo, which allows the study of cellular responses to environmental stimuli and cell-cell interactions in the living brain in real time. This perspective focuses on intravital imaging studies of the neurovascular unit in stroke, multiple sclerosis (MS) and Alzheimer disease (AD) models and discusses their potential for identifying novel therapeutic targets. PMID:25197615

  20. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    PubMed

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  1. An optical processor for object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Sloan, J.; Udomkesmalee, S.

    1987-01-01

    The design and development of a miniaturized optical processor that performs real-time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.

  2. Frequency domain zero padding for accurate autofocusing based on digital holography

    NASA Astrophysics Data System (ADS)

    Shin, Jun Geun; Kim, Ju Wan; Eom, Tae Joong; Lee, Byeong Ha

    2018-01-01

    The numerical refocusing feature of digital holography enables the reconstruction of a well-focused image from a digital hologram captured at an arbitrary out-of-focus plane, without supervision by the end user. In general, however, the autofocusing process for obtaining a highly focused image incurs a considerable computational cost. In this study, to reconstruct a better-focused image, we propose a zero-padding technique implemented in the frequency domain. Zero padding in the frequency domain enhances the visibility, or numerical resolution, of the image, which allows the degree of focus to be measured more accurately. A coarse-to-fine search algorithm is used to reduce the computing load, and a graphics processing unit (GPU) is employed to accelerate the process. The performance of the proposed scheme is evaluated with simulation and experiment, and the possibility of obtaining a well-refocused image with enhanced accuracy and speed is presented.
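
    The frequency-domain zero padding itself can be sketched in a few lines of numpy (a CPU illustration under stated assumptions, not the authors' GPU code): padding the centered spectrum with zeros returns a finer-sampled field whose focus metric can be evaluated more accurately.

        import numpy as np

        def freq_zero_pad(field, factor=2):
            """Upsample a complex field by zero padding its centered 2-D spectrum."""
            n = field.shape[0]                               # assumes a square, even-sized field
            spec = np.fft.fftshift(np.fft.fft2(field))       # centered spectrum
            pad = (factor - 1) * n // 2
            spec = np.pad(spec, pad)                         # surround the spectrum with zeros
            return np.fft.ifft2(np.fft.ifftshift(spec)) * factor ** 2

        field = np.random.rand(256, 256) + 1j * np.random.rand(256, 256)
        upsampled = freq_zero_pad(field, factor=2)           # 512 x 512 interpolated field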

  3. Spatial dependence of predictions from image segmentation: a method to determine appropriate scales for producing land-management information

    USDA-ARS?s Scientific Manuscript database

    A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...

  4. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.

  5. Fast optically sectioned fluorescence HiLo endomicroscopy

    PubMed Central

    Lim, Daryl; Mertz, Jerome

    2012-01-01

    Abstract. We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies. PMID:22463023
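
    A heavily simplified Python/numpy sketch of the HiLo principle (an approximation for illustration, not the authors' GPU pipeline; the cutoff sigma and the weighting eta are invented parameters) is shown below: in-focus high frequencies come from the uniform image, while the low-frequency in-focus content is estimated from the uniform/structured difference.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hilo(uniform, structured, sigma=4.0, eta=1.0):
            hi = uniform - gaussian_filter(uniform, sigma)   # high-pass of the uniform image
            diff = np.abs(uniform - structured)              # modulation survives only in focus
            lo = gaussian_filter(diff, sigma)                # low-pass of the difference image
            return eta * lo + hi                             # optically sectioned estimate

        u = np.random.rand(256, 256)                         # toy uniform-illumination frame
        s = np.random.rand(256, 256)                         # toy structured-illumination frame
        sectioned = hilo(u, s)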

  6. 78 FR 32427 - Notice of Issuance of Final Determination Concerning Multifunctional Digital Imaging Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-30

    ... manufacture different types of Controller units. Ricoh considers the manufacturing of the Controller unit... components and subassemblies of the MFPs from China and the Philippines for manufacture in the U.S. and..., and that the entire engineering, development, design and artwork processes for the MFPs took place in...

  7. A National Assessment of Green Infrastructure and Change for the Conterminous United States Using Morphological Image Processing

    EPA Science Inventory

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of ‘natural’ vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United State...

  8. PACS in an intensive care unit: results from a randomized controlled trial

    NASA Astrophysics Data System (ADS)

    Bryan, Stirling; Weatherburn, Gwyneth C.; Watkins, Jessamy; Walker, Samantha; Wright, Carl; Waters, Brian; Evans, Jeff; Buxton, Martin J.

    1998-07-01

    The objective of this research was to assess the costs and benefits associated with the introduction of a small PACS system into an intensive care unit (ICU) at a district general hospital in north Wales. The research design adopted for this study was a single center randomized controlled trial (RCT). Patients were randomly allocated either to a trial arm where their x-ray imaging was solely film-based or to a trial arm where their x-ray imaging was solely PACS based. Benefit measures included examination-based process measures, such as image turn-round time, radiation dose and image unavailability; and patient-related process measures, which included adverse events and length of stay. The measurement of costs focused on additional 'radiological' costs and the costs of patient management. The study recruited 600 patients. The key findings from this study were that the installation of PACS was associated with important benefits in terms of image availability, and important costs in both monetary and radiation dose terms. PACS-related improvements in terms of more timely 'clinical actions' were not found. However, the qualitative aspect of the research found that clinicians were advocates of the technology and believed that an important benefit of PACS related to improved image availability.

  9. GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.

    PubMed

    Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin

    2017-07-01

    Volume reconstruction methods play an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill the remaining empty voxels. However, traditional pixel-nearest-neighbor-based hole-filling fails to reconstruct volumes with high image quality. On the contrary, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed to obtain a high-quality volume after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality for speckle reduction and detail preservation can be obtained with a parameter setting of kernel window size of [Formula: see text] and kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and the volume with a size of 50 million voxels in our experiment can be reconstructed within 10 seconds.
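
    The hole-filling step can be illustrated with a zeroth-order (Nadaraya-Watson) Gaussian kernel regression in numpy (a CPU sketch, not the authors' GPU kernel; the coordinates, values and bandwidth below are toy assumptions):

        import numpy as np

        def kernel_regress(voxel_xyz, sample_xyz, sample_val, bandwidth=1.0):
            """Estimate one empty voxel from scattered neighbouring samples."""
            d2 = np.sum((sample_xyz - voxel_xyz) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))         # Gaussian kernel weights
            return np.sum(w * sample_val) / (np.sum(w) + 1e-12)

        rng = np.random.default_rng(0)
        samples = rng.uniform(0.0, 4.0, size=(200, 3))       # scattered ultrasound sample positions
        values = rng.random(200)                             # echo intensities at those positions
        print(kernel_regress(np.array([2.0, 2.0, 2.0]), samples, values))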

  10. 31 CFR 240.12 - Processing of checks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false Processing of checks. 240.12 Section... ON THE UNITED STATES TREASURY General Provisions § 240.12 Processing of checks. (a) Federal Reserve... examination and will provide the presenting bank with a copy or image of the check. Such presenting bank must...

  11. 31 CFR 240.12 - Processing of checks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false Processing of checks. 240.12 Section... ON THE UNITED STATES TREASURY General Provisions § 240.12 Processing of checks. (a) Federal Reserve... examination and will provide the presenting bank with a copy or image of the check. Such presenting bank must...

  12. 31 CFR 240.12 - Processing of checks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Processing of checks. 240.12 Section... ON THE UNITED STATES TREASURY General Provisions § 240.12 Processing of checks. (a) Federal Reserve... examination and will provide the presenting bank with a copy or image of the check. Such presenting bank must...

  13. 31 CFR 240.12 - Processing of checks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false Processing of checks. 240.12 Section... ON THE UNITED STATES TREASURY General Provisions § 240.12 Processing of checks. (a) Federal Reserve... examination and will provide the presenting bank with a copy or image of the check. Such presenting bank must...

  14. 31 CFR 240.12 - Processing of checks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false Processing of checks. 240.12 Section... ON THE UNITED STATES TREASURY General Provisions § 240.12 Processing of checks. (a) Federal Reserve... examination and will provide the presenting bank with a copy or image of the check. Such presenting bank must...

  15. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    PubMed

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
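
    Two central steps of that chain, the wavelength-to-wavenumber resampling and the lateral Hilbert transform for the full-range complex spectrum, can be sketched on the CPU with numpy/scipy (an illustration only; the fringe model, array sizes and omission of the zero-filling and dispersion details are assumptions):

        import numpy as np
        from scipy.signal import hilbert

        def fdoct_bscan(frame, lam):                 # frame: (lateral, spectral) fringe data
            k = np.linspace(1.0 / lam[-1], 1.0 / lam[0], lam.size)   # uniform wavenumber grid
            resamp = np.stack([np.interp(1.0 / k, lam, line) for line in frame])
            complex_spec = hilbert(resamp, axis=0)   # lateral Hilbert -> complex spectrum
            axial = np.fft.fft(complex_spec, axis=1) # axial profile per lateral position
            return 20.0 * np.log10(np.abs(axial) + 1e-9)   # log-scaled B-scan

        lam = np.linspace(800e-9, 880e-9, 2048)      # toy wavelength sampling
        frame = np.random.rand(1024, 2048)           # toy spectral interferogram frame
        bscan = fdoct_bscan(frame, lam)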

  16. Interactive boundary delineation of agricultural lands using graphics workstations

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1992-01-01

    A review is presented of the computer-assisted stratification and sampling (CASS) system developed to delineate the boundaries of sample units for survey procedures. CASS stratifies the sampling units by land-cover and land-use type, employing image-processing software and hardware. This procedure generates coverage areas and the boundaries of stratified sampling units that are utilized for subsequent sampling procedures from which agricultural statistics are developed.

  17. A Synthesis of Star Calibration Techniques for Ground-Based Narrowband Electron-Multiplying Charge-Coupled Device Imagers Used in Auroral Photometry

    NASA Technical Reports Server (NTRS)

    Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha

    2016-01-01

    A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for the absolute photometry between instruments or as input parameters for auroral electron transport models.

  18. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    The apple is one of the most highly consumed fruits in daily life. However, due to its high damage potential and the massive influence of damage on taste and export, the quality of apples has to be assessed before they reach the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image-processing results. The hardware-software system was developed to acquire the images of 3 samples from each camera and display the image-processing results in real time. An image-processing algorithm was developed using OpenCV and C++. The software is able to control the hardware system and classify apples into two grades based on the presence or absence of surface bruises of 5 mm in size. The experimental results are promising, and with further modification the system could be applicable to industrial production in the near future.

  19. DDGIPS: a general image processing system in robot vision

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Ying, Jun; Ye, Xiuqing; Gu, Weikang

    2000-10-01

    Real-time image processing is a key task in robot vision. Owing to the limitations of hardware techniques, many algorithm-oriented firmware systems were designed in the past, but their architectures were not flexible enough to support a multi-algorithm development system. Because of the rapid development of microelectronics, many high-performance DSP chips and high-density FPGA chips have become available, and this makes it possible to construct a more flexible architecture for real-time image processing systems. In this paper, a Double DSP General Image Processing System (DDGIPS) is presented. We construct a two-DSP-based computational system with FPGA control, using two TMS320C6201s. The TMS320C6x devices are fixed-point processors based on an advanced VLIW CPU, which has eight functional units, including two multipliers and six arithmetic logic units. These features make the C6x a good candidate for a general-purpose system. In our system, each of the two TMS320C6201s has a local memory space, and they also share a system memory space that enables them to intercommunicate and exchange data efficiently. At the same time, they can be directly interconnected in a star-shaped architecture. All of this is under the control of an FPGA group. As the core of the system, the FPGA plays a very important role: it takes charge of DSP control, DSP communication, memory-space access arbitration, and the communication between the system and the host machine. By reconfiguring the FPGA, all of the interconnections between the two DSPs, or between a DSP and the FPGA, can be changed. In this way, users can easily rebuild the real-time image processing system according to the data stream and the task of the application and gain great flexibility.

  1. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash.

    PubMed

    Pelletier, Mathew G

    2008-02-08

    One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed for the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not undercleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit "GPU", for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC's central processing unit "CPU" was gained. The new parallel algorithm operating on the GPU was able to process a 1024x1024 image in less than 17 ms. At this improved speed, the image processing system's performance should now be sufficient to provide a system capable of real-time feedback control in tight cooperation with the cleaning equipment.

  2. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    NASA Astrophysics Data System (ADS)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images, called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms such as Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm for image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general-purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed on a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that this new algorithm substantially outperforms the previous unsupervised algorithms implemented in Hypergim, in both runtime and precision of the actual classification of the images.
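
    For illustration, the supervised classification stage can be sketched with scikit-learn's random forest as a CPU stand-in for the CUDA/CURFIL implementation described above (the tile, the user-picked training pixels and the class labels below are all hypothetical):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        tile = np.random.randint(0, 256, size=(512, 512, 3), dtype=np.uint8)   # toy RGB map tile

        # hypothetical user-selected training pixels: (row, col) positions and class labels
        train_rc = np.array([[10, 20], [100, 300], [400, 50], [250, 250]])
        train_y = np.array([0, 1, 1, 2])                       # e.g. water, vegetation, urban
        X_train = tile[train_rc[:, 0], train_rc[:, 1]].astype(float)

        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
        clf.fit(X_train, train_y)
        labels = clf.predict(tile.reshape(-1, 3).astype(float)).reshape(512, 512)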

  3. GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array

    PubMed Central

    Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.

    2014-01-01

    Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080

  4. A New Optical Design for Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Thompson, K. L.

    2002-05-01

    We present an optical design concept for imaging spectroscopy with some advantages over current systems. The system projects monochromatic images onto the 2-D array detector(s). Faint-object and crowded-field spectroscopy can be reduced first using image processing techniques and then building the spectrum, unlike integral field units, where one must first extract the spectra, build data cubes from these, and then reconstruct the target's integrated spectral flux. Like integral field units, all photons are detected simultaneously, unlike tunable filters, which must be scanned through the wavelength range of interest and therefore pay a sensitivity penalty. Several sample designs are presented, including an instrument optimized for measuring intermediate-redshift galaxy cluster velocity dispersions, one designed for near-infrared ground-based adaptive optics, and one intended for space-based rapid follow-up of transient point sources such as supernovae and gamma-ray bursts.

  5. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to the already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable to that of a conventional laser displacement sensor.

  6. PAT: From Western solid dosage forms to Chinese materia medica preparations using NIR-CI.

    PubMed

    Zhou, Luwei; Xu, Manfei; Wu, Zhisheng; Shi, Xinyuan; Qiao, Yanjiang

    2016-01-01

    Near-infrared chemical imaging (NIR-CI) is an emerging technology that combines traditional near-infrared spectroscopy with chemical imaging. Therefore, NIR-CI can extract spectral information from pharmaceutical products and simultaneously visualize the spatial distribution of chemical components. The rapid and non-destructive features of NIR-CI make it an attractive process analytical technology (PAT) for identifying and monitoring critical control parameters during the pharmaceutical manufacturing process. This review mainly focuses on the pharmaceutical applications of NIR-CI in each unit operation during the manufacturing processes, from the Western solid dosage forms to the Chinese materia medica preparations. Finally, future applications of chemical imaging in the pharmaceutical industry are discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  7. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  8. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  9. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  10. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images) $3.50 (2) Each additional image .10 (3) Each typewritten page 3.50 (4) Certification and validation with...

  11. 7 CFR 1219.15 - Industry information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...

  12. 7 CFR 1219.15 - Industry information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...

  13. 7 CFR 1219.15 - Industry information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...

  14. 7 CFR 1219.15 - Industry information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...

  15. Garment Counting in a Textile Warehouse by Means of a Laser Imaging System

    PubMed Central

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-01-01

    Textile logistic warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760

  16. Garment counting in a textile warehouse by means of a laser imaging system.

    PubMed

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-04-29

    Textile logistic warehouses are highly automated, mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low-cost, small-size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistor sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested in two logistic warehouses with a mean error in the estimated number of hangers of 0.13%.

  17. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
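
    The bilinear part of the interpolation can be sketched in numpy/scipy (an illustration assuming an RGGB Bayer layout, not the DSP firmware; the edge-oriented variant described above would additionally pick the interpolation direction from local gradients):

        import numpy as np
        from scipy.ndimage import convolve

        def demosaic_bilinear(raw):                          # raw: 2-D Bayer mosaic, RGGB assumed
            h, w = raw.shape
            r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
            b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
            g_mask = 1 - r_mask - b_mask
            k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue bilinear kernel
            k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # green bilinear kernel
            rgb = np.empty((h, w, 3))
            rgb[..., 0] = convolve(raw * r_mask, k_rb)
            rgb[..., 1] = convolve(raw * g_mask, k_g)
            rgb[..., 2] = convolve(raw * b_mask, k_rb)
            return rgb

        raw = np.random.rand(64, 64)                          # toy sensor frame
        rgb = demosaic_bilinear(raw)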

  18. Hand portable thin-layer chromatography system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.

    2000-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  19. Illumination box and camera system

    DOEpatents

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  20. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    PubMed

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures;" (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures;" and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second GBM image dataset, acquired on scanners different from those of the first, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired before chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
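
    A minimal sketch of step (3) above, assuming NumPy/SciPy: the columns of the superpixel-by-parameter data matrix are correlated and hierarchically clustered into a small number of "signatures". The choice of average linkage on the dissimilarity 1 - |r| is an illustrative assumption, not necessarily the pipeline's actual clustering method.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def parameter_signatures(X, n_signatures=3):
            """Cluster image parameters (columns of X) into 'signatures'.

            X : (n_superpixels, n_parameters) matrix of multimodality image
                parameters sampled at superpixels.
            Returns a signature label (1..n_signatures) for every parameter.
            """
            R = np.corrcoef(X, rowvar=False)       # parameter-by-parameter correlation
            dist = 1.0 - np.abs(R)                 # correlation -> dissimilarity
            iu = np.triu_indices_from(dist, k=1)   # condensed distance vector
            Z = linkage(dist[iu], method='average')
            return fcluster(Z, t=n_signatures, criterion='maxclust')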

  1. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.

  2. SPEKTROP DPU: optoelectronic platform for fast multispectral imaging

    NASA Astrophysics Data System (ADS)

    Graczyk, Rafal; Sitek, Piotr; Stolarski, Marcin

    2010-09-01

    In recent years it has become easy to spot an increasing need for high-quality Earth imaging in airborne and space applications. This is due to the fact that government and local authorities urge for up-to-date topological data for administrative purposes. On the other hand, interest in environmental sciences, the push for an ecological approach, and efficient agriculture and forest management are also heavily supported by Earth images at various resolutions and in various spectral ranges. The paper "SPEKTROP DPU: Opto-electronic platform for fast multi-spectral imaging" describes architectural details of the data processing unit, part of a universal and modular platform that provides high-quality imaging functionality in aerospace applications.

  3. Laser Speckle Imaging of Cerebral Blood Flow

    NASA Astrophysics Data System (ADS)

    Luo, Qingming; Jiang, Chao; Li, Pengcheng; Cheng, Haiying; Wang, Zhen; Wang, Zheng; Tuchin, Valery V.

    Monitoring the spatio-temporal characteristics of cerebral blood flow (CBF) is crucial for studying the normal and pathophysiologic conditions of brain metabolism. By illuminating the cortex with laser light and imaging the resulting speckle pattern, relative CBF images with tens of microns spatial and millisecond temporal resolution can be obtained. In this chapter, a laser speckle imaging (LSI) method for monitoring dynamic, high-resolution CBF is introduced. To improve the spatial resolution of current LSI, a modified LSI method is proposed. To accelerate the speed of data processing, three LSI data processing frameworks based on graphics processing unit (GPU), digital signal processor (DSP), and field-programmable gate array (FPGA) are also presented. Applications for detecting the changes in local CBF induced by sensory stimulation and thermal stimulation, the influence of a chemical agent on CBF, and the influence of acute hyperglycemia following cortical spreading depression on CBF are given.
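
    A minimal sketch of the basic spatial speckle-contrast computation that underlies LSI, assuming NumPy/SciPy: the contrast K = sigma/mean is evaluated in a sliding window, and a common relative flow index proportional to 1/K^2 is derived from it. This is a generic formulation, not the GPU/DSP/FPGA pipelines presented in the chapter.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw, win=7):
            """Spatial speckle contrast K = sigma / mean inside a sliding window."""
            img = raw.astype(float)
            mean = uniform_filter(img, win)
            sq_mean = uniform_filter(img ** 2, win)
            var = np.clip(sq_mean - mean ** 2, 0.0, None)
            return np.sqrt(var) / (mean + 1e-12)

        def relative_flow(raw, win=7):
            """Common relative flow index, taken proportional to 1 / K^2."""
            K = speckle_contrast(raw, win)
            return 1.0 / (K ** 2 + 1e-12)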

  4. Intershot Analysis of Flows in DIII-D

    NASA Astrophysics Data System (ADS)

    Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.

    2016-10-01

    Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integral of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weight of the component matrices used in the final flow inversion matrix. Serial processing has been used for the lower-divertor-viewing flow camera 800x600 pixel image. The full cross-section viewing camera will require parallel processing of the 2160x2560 pixel image. We will discuss using a Posix thread pool and a Tesla K40c GPU in the processing of these data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.

  5. Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior

    NASA Astrophysics Data System (ADS)

    Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique

    2015-09-01

    A real-time algorithm for single image dehazing is presented. The algorithm is based on calculation of local neighborhoods of a hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next, the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. By using the suggested approach, there is no need to apply a refining algorithm, such as soft matting, to the estimated transmission. To achieve high-rate signal processing, the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulations are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those obtained with existing dehazing algorithms.
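
    A minimal sketch of a plain dark-channel-prior transmission estimate in a moving window, assuming NumPy/SciPy; the rank-order-statistics neighborhoods and the GPU implementation of the paper are not reproduced, and the window size and omega are illustrative defaults.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def estimate_transmission(hazed, atmosphere, win=15, omega=0.95):
            """Dark-channel-prior transmission estimate for an RGB image in [0, 1]."""
            normed = hazed.astype(float) / atmosphere            # per-channel normalisation
            dark = minimum_filter(normed.min(axis=2), size=win)  # windowed dark channel
            return 1.0 - omega * dark

        def dehaze(hazed, atmosphere, t_min=0.1, **kw):
            """Recover scene radiance J = (I - A) / max(t, t_min) + A."""
            t = np.clip(estimate_transmission(hazed, atmosphere, **kw), t_min, 1.0)
            return (hazed - atmosphere) / t[..., None] + atmosphere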

  6. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
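
    The Dice similarity coefficient reported above is the standard overlap measure between two segmentations; a minimal NumPy sketch for binary masks follows (it is not part of the registration code itself).

        import numpy as np

        def dice_similarity(a, b):
            """Dice similarity coefficient of two binary masks (1 = perfect overlap)."""
            a = a.astype(bool)
            b = b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0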

  7. Hounsfield unit recovery in clinical cone beam CT images of the thorax acquired for image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten

    2016-08-01

    A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented. The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection-image-based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through visual appearance of the reconstructed images, HU correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact-corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images is reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images, and does not require patient specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images.

  8. The ChemCam Instrument Suite on the Mars Science Laboratory (MSL) Rover: Science Objectives and Mast Unit Description

    USGS Publications Warehouse

    Maurice, S.; Wiens, R.C.; Saccoccio, M.; Barraclough, B.; Gasnault, O.; Forni, O.; Mangold, N.; Baratoux, D.; Bender, S.; Berger, G.; Bernardin, J.; Berthé, M.; Bridges, N.; Blaney, D.; Bouyé, M.; Caïs, P.; Clark, B.; Clegg, S.; Cousin, A.; Cremers, D.; Cros, A.; DeFlores, L.; Derycke, C.; Dingler, B.; Dromart, G.; Dubois, B.; Dupieux, M.; Durand, E.; d'Uston, L.; Fabre, C.; Faure, B.; Gaboriaud, A.; Gharsa, T.; Herkenhoff, K.; Kan, E.; Kirkland, L.; Kouach, D.; Lacour, J.-L.; Langevin, Y.; Lasue, J.; Le Mouélic, S.; Lescure, M.; Lewin, E.; Limonadi, D.; Manhès, G.; Mauchien, P.; McKay, C.; Meslin, P.-Y.; Michel, Y.; Miller, E.; Newsom, Horton E.; Orttner, G.; Paillet, A.; Parès, L.; Parot, Y.; Pérez, R.; Pinet, P.; Poitrasson, F.; Quertier, B.; Sallé, B.; Sotin, Christophe; Sautter, V.; Séran, H.; Simmonds, J.J.; Sirven, J.-B.; Stiglich, R.; Striebig, N.; Thocaven, J.-J.; Toplis, M.J.; Vaniman, D.

    2012-01-01

    ChemCam is a remote sensing instrument suite on board the "Curiosity" rover (NASA) that uses Laser-Induced Breakdown Spectroscopy (LIBS) to provide the elemental composition of soils and rocks at the surface of Mars from a distance of 1.3 to 7 m, and a telescopic imager to return high resolution context and micro-images at distances greater than 1.16 m. We describe five analytical capabilities: rock classification, quantitative composition, depth profiling, context imaging, and passive spectroscopy. They serve as a toolbox to address most of the science questions at Gale crater. ChemCam consists of a Mast-Unit (laser, telescope, camera, and electronics) and a Body-Unit (spectrometers, digital processing unit, and optical demultiplexer), which are connected by an optical fiber and an electrical interface. We then report on the development, integration, and testing of the Mast-Unit, and summarize some key characteristics of ChemCam. This confirmed that nominal or better than nominal performances were achieved for critical parameters, in particular power density (>1 GW/cm2). The analysis spot diameter varies from 350 μm at 2 m to 550 μm at 7 m distance. For remote imaging, the camera field of view is 20 mrad for 1024×1024 pixels. Field tests demonstrated that the resolution (˜90 μrad) made it possible to identify laser shots on a wide variety of images. This is sufficient for visualizing laser shot pits and textures of rocks and soils. An auto-exposure capability optimizes the dynamical range of the images. Dedicated hardware and software focus the telescope, with precision that is appropriate for the LIBS and imaging depths-of-field. The light emitted by the plasma is collected and sent to the Body-Unit via a 6 m optical fiber. The companion to this paper (Wiens et al. this issue) reports on the development of the Body-Unit, on the analysis of the emitted light, and on the good match between instrument performance and science specifications.

  9. Magnetic resonance-guided prostate interventions.

    PubMed

    Haker, Steven J; Mulkern, Robert V; Roebuck, Joseph R; Barnes, Agnieska Szot; Dimaio, Simon; Hata, Nobuhiko; Tempany, Clare M C

    2005-10-01

    We review our experience using an open 0.5-T magnetic resonance (MR) interventional unit to guide procedures in the prostate. This system allows access to the patient and real-time MR imaging simultaneously and has made it possible to perform prostate biopsy and brachytherapy under MR guidance. We review MR imaging of the prostate and its use in targeted therapy, and describe our use of image processing methods such as image registration to further facilitate precise targeting. We describe current developments with a robot assist system being developed to aid radioactive seed placement.

  10. A single FPGA-based portable ultrasound imaging system for point-of-care applications.

    PubMed

    Kim, Gi-Duck; Yoon, Changhan; Kye, Sang-Bum; Lee, Youngbae; Kang, Jeeun; Yoo, Yangmo; Song, Tai-kyong

    2012-07-01

    We present a cost-effective portable ultrasound system based on a single field-programmable gate array (FPGA) for point-of-care applications. In the portable ultrasound system developed, all the ultrasound signal and image processing modules, including an effective 32-channel receive beamformer with pseudo-dynamic focusing, are embedded in an FPGA chip. For overall system control, a mobile processor running Linux at 667 MHz is used. The scan-converted ultrasound image data from the FPGA are directly transferred to the system controller via external direct memory access without a video processing unit. The portable ultrasound system developed can provide real-time B-mode imaging with a maximum frame rate of 30 frames per second, and it has a battery life of approximately 1.5 h. These results indicate that the single FPGA-based portable ultrasound system developed is able to meet the processing requirements in medical ultrasound imaging while providing improved flexibility for adapting to emerging POC applications.

  11. A new concept for medical imaging centered on cellular phone technology.

    PubMed

    Granot, Yair; Ivorra, Antoni; Rubinsky, Boris

    2008-04-30

    According to World Health Organization reports, some three quarters of the world population does not have access to medical imaging. In addition, in developing countries over 50% of medical equipment that is available is not being used because it is too sophisticated or in disrepair or because the health personnel are not trained to use it. The goal of this study is to introduce and demonstrate the feasibility of a new concept in medical imaging that is centered on cellular phone technology and which may provide a solution to medical imaging in underserved areas. The new system replaces the conventional stand-alone medical imaging device with a new medical imaging system made of two independent components connected through cellular phone technology. The independent units are: a) a data acquisition device (DAD) at a remote patient site that is simple, with limited controls and no image display capability and b) an advanced image reconstruction and hardware control multiserver unit at a central site. The cellular phone technology transmits unprocessed raw data from the patient site DAD and receives and displays the processed image from the central site. (This is different from conventional telemedicine where the image reconstruction and control is at the patient site and telecommunication is used to transmit processed images from the patient site). The primary goal of this study is to demonstrate that the cellular phone technology can function in the proposed mode. The feasibility of the concept is demonstrated using a new frequency division multiplexing electrical impedance tomography system, which we have developed for dynamic medical imaging, as the medical imaging modality. The system is used to image through a cellular phone a simulation of breast cancer tumors in a medical imaging diagnostic mode and to image minimally invasive tissue ablation with irreversible electroporation in a medical imaging interventional mode.

  12. A novel image encryption algorithm using chaos and reversible cellular automata

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Luan, Dapeng

    2013-11-01

    In this paper, a novel image encryption scheme based on reversible cellular automata (RCA) combined with chaos is proposed. In this algorithm, an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata are used. We split each pixel of the image into units of 4 bits, then adopt a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve diffusion at the bit level; only the higher 4 bits of each pixel are considered, because they carry almost all of the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. The algorithm belongs to the class of symmetric systems.
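
    A minimal sketch of the confusion-stage idea only, assuming NumPy: a chaotic keystream orders the 4-bit units of the image. A plain logistic map is used here as a stand-in for the intertwining logistic map, the reversible-cellular-automata diffusion stage is omitted, and the key values x0 and r are illustrative.

        import numpy as np

        def logistic_keystream(x0, r, n, burn_in=1000):
            """Iterate the logistic map x <- r*x*(1-x) and return n chaotic values."""
            x = x0
            out = np.empty(n)
            for i in range(burn_in + n):
                x = r * x * (1.0 - x)
                if i >= burn_in:
                    out[i - burn_in] = x
            return out

        def permute_nibbles(image, x0=0.3456, r=3.99):
            """Confusion stage: split pixels into 4-bit units and permute them."""
            flat = image.astype(np.uint8).ravel()
            nibbles = np.concatenate([flat >> 4, flat & 0x0F])   # high and low 4 bits
            order = np.argsort(logistic_keystream(x0, r, nibbles.size))
            # nibbles[order][np.argsort(order)] recovers the original sequence
            return nibbles[order], order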

  13. Development of digital reconstructed radiography software at new treatment facility for carbon-ion beam scanning of National Institute of Radiological Sciences.

    PubMed

    Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji

    2012-06-01

    To increase the accuracy of carbon-ion beam scanning therapy, we have developed a graphical-user-interface-based digitally reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios in the new treatment facility to achieve the same level of geometrical accuracy at treatment as at the imaging session. DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Since we implemented graphics-processing-unit-based computation, the DRR images are calculated with a speed sufficient for the particular clinical practice requirements. Since high-spatial-resolution flat panel detector (FPD) images must be registered to the reference DRR images during patient setup in all scenarios, the DRR images also need a spatial resolution close to that of the FPD images. To overcome the limitation imposed on spatial resolution by the CT voxel size, we applied image processing to improve the calculated DRR spatial resolution. The DRR software introduced here enabled patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
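
    The DRR computation described above (summation of CT voxel values along the X-ray projection ray) reduces, for an idealized parallel-beam geometry aligned with a volume axis, to a sum along that axis. The NumPy sketch below illustrates only this accumulation step, not the GPU implementation or the divergent-beam ray tracing of a real system.

        import numpy as np

        def parallel_beam_drr(ct_volume, axis=0):
            """Toy DRR: accumulate voxel values along one axis of a CT volume.

            A real system traces divergent rays from the X-ray focal spot through
            the volume; this parallel-beam sum only illustrates the accumulation.
            """
            drr = ct_volume.astype(float).sum(axis=axis)
            return (drr - drr.min()) / (np.ptp(drr) + 1e-12)   # normalise to [0, 1] for display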

  14. Computational Intelligence for Medical Imaging Simulations.

    PubMed

    Chang, Victor

    2017-11-25

    This paper describes how to simulate medical imaging by computational intelligence to explore areas that cannot be easily achieved by traditional ways, including genes and proteins simulations related to cancer development and immunity. This paper has presented simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6 with their outputs and explanations, as well as brain segment intensity due to dancing. Our proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is very similar to the digital surface theories to simulate how biological units can get together to form bigger units, until the formation of the entire unit of biological subject. The M-Fusion and M-Update function by the fusion algorithm can achieve a good performance evaluation which can process and visualize up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research offered by simulations and visualization.

  15. FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy

    PubMed Central

    Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.

    2010-01-01

    Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
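
    A minimal sketch of the core image-formation model (convolution of the fluorophore density with the microscope point-spread function, followed by shot and read noise), assuming NumPy/SciPy and a Gaussian PSF stand-in; this is not FluoroSim's GPU renderer.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def render_fluorescence(density, psf_sigma=2.0, photons=500.0,
                                read_noise=2.0, rng=None):
            """Blur a fluorophore density map with a Gaussian PSF, then add noise."""
            rng = np.random.default_rng() if rng is None else rng
            blurred = gaussian_filter(density.astype(float), psf_sigma)
            expected = photons * blurred / (blurred.max() + 1e-12)   # expected photon counts
            shot = rng.poisson(expected).astype(float)               # photon (shot) noise
            return shot + rng.normal(0.0, read_noise, shot.shape)    # camera read noise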

  16. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems basically acquire images that are multichannel (dual- or multi-polarization, multi- and hyperspectral), where noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency if there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origin. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
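
    A minimal sketch of DCT-based 3D filtering, assuming NumPy/SciPy: the stack of co-registered channels is transformed with a 3D DCT, coefficients below a threshold are zeroed, and the inverse transform returns the filtered data. Practical filters operate on small overlapping 3D blocks with calibrated thresholds; the whole-array transform and the factor 2.7*sigma used here are illustrative simplifications.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct3d_denoise(stack, sigma, beta=2.7):
            """Hard-threshold 3D DCT filtering of a (channels, H, W) data array.

            Coefficients smaller in magnitude than beta * sigma are zeroed;
            sigma is the (assumed known) additive noise standard deviation.
            """
            coeffs = dctn(stack.astype(float), norm='ortho')
            coeffs[np.abs(coeffs) < beta * sigma] = 0.0   # drop noise-dominated coefficients
            return idctn(coeffs, norm='ortho')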

  17. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. A line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit; image acquisition is triggered by hardware interrupt. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, and extracts the squares belonging to the target by a fusion of the k_means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with serial CPU calculation was obtained, which greatly improves the real-time image processing capacity. When the wheel set runs at a limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.

  18. Overview of PAT process analysers applicable in monitoring of film coating unit operations for manufacturing of solid oral dosage forms.

    PubMed

    Korasa, Klemen; Vrečer, Franc

    2018-01-01

    Over the last two decades, regulatory agencies have demanded better understanding of pharmaceutical products and processes by implementing new technological approaches, such as process analytical technology (PAT). Process analysers present a key PAT tool, which enables effective process monitoring, and thus improved process control of medicinal product manufacturing. Process analysers applicable in pharmaceutical coating unit operations are comprehensively described in the present article. The review is focused on monitoring of solid oral dosage forms during film coating in the two most commonly used coating systems, i.e. pan and fluid bed coaters. A brief theoretical background and a critical overview of process analysers used for real-time or near real-time (in-, on-, at-line) monitoring of critical quality attributes of film coated dosage forms are presented. Besides well recognized spectroscopic methods (NIR and Raman spectroscopy), other techniques which have made a significant breakthrough in recent years are discussed (terahertz pulsed imaging (TPI), chord length distribution (CLD) analysis, and image analysis). The last part of the review is dedicated to novel techniques with high potential to become valuable PAT tools in the future (optical coherence tomography (OCT), acoustic emission (AE), microwave resonance (MR), and laser induced breakdown spectroscopy (LIBS)). Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.

  20. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images.

    PubMed

    Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.

  1. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images

    NASA Astrophysics Data System (ADS)

    Yothers, Mitchell P.; Browder, Aaron E.; Bumm, Lloyd A.

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.

  2. Image reproduction with interactive graphics

    NASA Technical Reports Server (NTRS)

    Buckner, J. D.; Council, H. W.; Edwards, T. R.

    1974-01-01

    Software application or development in optical image digital data processing requires a fast, good quality, yet inexpensive hard copy of processed images. To achieve this, a Cambo camera with an f 2.8/150-mm Xenotar lens in a Copal shutter having a Graflok back for 4 x 5 Polaroid type 57 pack-film has been interfaced to an existing Adage, AGT-30/Electro-Mechanical Research, EMR 6050 graphic computer system. Time-lapse photography in conjunction with a log to linear voltage transformation has resulted in an interactive system capable of producing a hard copy in 54 sec. The interactive aspect of the system lies in a Tektronix 4002 graphic computer terminal and its associated hard copy unit.

  3. Geologic Map of the Sif Mons Quadrangle (V-31), Venus

    USGS Publications Warehouse

    Copp, Duncan L.; Guest, John E.

    2007-01-01

    The Magellan spacecraft orbited Venus from August 10, 1990, until it plunged into the Venusian atmosphere on October 12, 1994. Magellan Mission objectives included (1) improving the knowledge of the geological processes, surface properties, and geologic history of Venus by analysis of surface radar characteristics, topography, and morphology and (2) improving the knowledge of the geophysics of Venus by analysis of Venusian gravity. The Sif Mons quadrangle of Venus includes lat 0° to 25° N. and long 330° to 0° E.; it covers an area of about 8.10 x 10^6 km^2 (fig. 1). The data used to construct the geologic map were from the National Aeronautics and Space Administration (NASA) Magellan Mission. The area is also covered by Arecibo images, which were also consulted (Campbell and Campbell, 1990; Campbell and others, 1989). Data from the Soviet Venera orbiters do not cover this area. All of the SAR products were employed for geologic mapping. C1-MIDRs were used for general recognition of units and structures; F-MIDRs and F-MAPs were used for more specific examination of surface characteristics and structures. Where the highest resolution was required or some image processing was necessary to solve a particular mapping problem, the images were examined using the digital data on CD-ROMs. In cycle 1, the SAR incidence angles for images obtained for the Sif Mons quadrangle ranged from 44° to 46°; in cycle 3, they were between 25° and 26°. We use the term 'high backscatter' of a material unit to imply a rough surface texture at the wavelength scale used by Magellan SAR. Conversely, 'low backscatter' implies a smooth surface. In addition, altimetric, radiometric, and rms slope data were superposed on SAR images. Figure 2 shows altimetry data; figure 3 shows images of ancillary data for the quadrangle; and figure 4 shows backscatter coefficient for selected units. The interpretation of these data was discussed by Ford and others (1989, 1993). For corrected backscatter and numerical ancillary data see tables 1 and 2; these data allow comparison with units at different latitudes on the planet, where the visual appearance may differ because of a different incidence angle. Synthetic stereo images, produced by overlaying SAR images and altimetric data, were of great value in interpreting structures and stratigraphic relations.

  4. A review of GPU-based medical image reconstruction.

    PubMed

    Després, Philippe; Jia, Xun

    2017-10-01

    Tomographic image reconstruction is a computationally demanding task, even more so when advanced models are used to describe a more complete and accurate picture of the image formation process. Such advanced modeling and reconstruction algorithms can lead to better images, often with less dose, but at the price of long calculation times that are hardly compatible with clinical workflows. Fortunately, reconstruction tasks can often be executed advantageously on Graphics Processing Units (GPUs), which are exploited as massively parallel computational engines. This review paper focuses on recent developments made in GPU-based medical image reconstruction, from a CT, PET, SPECT, MRI and US perspective. Strategies and approaches to get the most out of GPUs in image reconstruction are presented as well as innovative applications arising from an increased computing capacity. The future of GPU-based image reconstruction is also envisioned, based on current trends in high-performance computing. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  6. Medusae Fossae

    NASA Technical Reports Server (NTRS)

    2002-01-01

    (Released 31 July 2002) This image crosses the equator at about 155 W longitude and shows a sample of the middle member of the Medusae Fossae formation. The layers exposed in the southeast-facing scarp suggest that there is a fairly competent unit underlying the mesa in the center of the image. Dust avalanches are apparent in the crater depression near the middle of the image. The mesa of Medusae Fossae material has the geomorphic signatures that are typical of the formation elsewhere on Mars, but the surface is probably heavily mantled with fine dust, masking the small-scale character of the unit. The close proximity of the Medusae Fossae unit to the Tharsis region may suggest that it is an ignimbrite or volcanic airfall deposit, but its eroded character hasn't preserved the primary depositional features that would give away the secrets of its formation. One of the most interesting features in the image is the high-standing knob at the base of the scarp in the lower portion of the image. This knob or butte is high standing because it is composed of material that is not as easily eroded as the rest of the unit. There are a number of possible explanations for this feature, including a volcano, an inverted crater, or some localized process that caused once friable material to become cemented. Another interesting set of features is the long troughs on the slope in the lower portion of the image. The fact that the features keep the same width for their entire length suggests that these are not simple landslides.

  7. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), exploiting parallel architectures and the GPU, are thus presented. The innovative aspects of the implementation are (i) the effectiveness on a large variety of unorganized and complex datasets, (ii) the capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.

  8. GOES-R Advanced Base Line Imager Installation

    NASA Image and Video Library

    2016-08-30

    Team members prepare the Advanced Base Line Imager, the primary optical instrument, for installation on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  9. GOES-R Advanced Base Line Imager Installation

    NASA Image and Video Library

    2016-08-30

    Team members install the Advanced Base Line Imager, the primary optical instrument, on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  10. GOES-R Advanced Base Line Imager Installation

    NASA Image and Video Library

    2016-08-30

    The Advanced Base Line Imager, the primary optical instrument, has been installed on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  11. A national assessment of green infrastructure and change for the conterminous United States using morphological image processing

    Treesearch

    J.D Wickham; Kurt H. Riitters; T.G. Wade; P. Vogt

    2010-01-01

    Green infrastructure is a popular framework for conservation planning. The main elements of green infrastructure are hubs and links. Hubs tend to be large areas of ‘natural’ vegetation and links tend to be linear features (e.g., streams) that connect hubs. Within the United States, green infrastructure projects can be characterized as: (...

  12. Images of the Orient: Nineteenth-Century European Travelers to Muslim Lands. A Unit of Study for Grades 9-12.

    ERIC Educational Resources Information Center

    Douglass, Susan L.

    This teaching unit represents a specific "dramatic moment" in history that can allow students to delve into the deeper meanings of selected landmark events and explore their wider context in the great historical narrative. Studying a crucial turning point in history helps students realize that history is an ongoing, open-ended process,…

  13. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, certain requirements apply to the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even greater when the image is converted to another color space, because some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing, owing to the three times greater amount of data per image; the same applies to the three times more memory needed. Advancements in computers, memory and processing units have made it possible to handle even large color images cost-efficiently today. In some cases image analysis on color images can in fact be easier and faster than with a similar gray-level image because of the additional information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity, white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.

  14. Directional ratio based on parabolic molecules and its application to the analysis of tubular structures

    NASA Astrophysics Data System (ADS)

    Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos

    2015-09-01

    As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio, which is especially effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.

  15. Qualitative and quantitative interpretation of SEM image using digital image processing.

    PubMed

    Saladra, Dawid; Kopernik, Magdalena

    2016-10-01

    The aim of this study is to improve the qualitative and quantitative analysis of scanning electron microscopy (SEM) micrographs by developing a computer program that enables automatic crack analysis of SEM micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu thresholding and reverse binarization. Several modifications of known image processing techniques and combinations of the selected image processing techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program for digital image processing with existing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
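
    A minimal sketch of two of the steps named above, assuming NumPy and that scikit-image is available for thinning: an Otsu threshold binarizes the (dark) cracks, and the total crack length per unit area is estimated from the pixel count of the skeletonized mask. The dark-crack assumption and the pixel size parameter are illustrative.

        import numpy as np
        from skimage.morphology import skeletonize   # assumed available for thinning

        def otsu_threshold(gray):
            """Return the Otsu threshold of an 8-bit grayscale image."""
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            prob = hist / hist.sum()
            omega = np.cumsum(prob)                  # cumulative class probability
            mu = np.cumsum(prob * np.arange(256))    # cumulative class mean
            mu_t = mu[-1]
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
            return int(np.argmax(sigma_b))           # maximise between-class variance

        def crack_length_per_area(gray, pixel_size_um=1.0):
            """Binarize dark cracks and report total crack length per unit area."""
            cracks = gray < otsu_threshold(gray)
            skeleton = skeletonize(cracks)           # 1-pixel-wide crack centrelines
            length = skeleton.sum() * pixel_size_um
            area = gray.size * pixel_size_um ** 2
            return length / area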

  16. 26 CFR 1.924(e)-1 - Activities relating to the disposition of export property.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... build a favorable image of a company or group of companies is not included in this definition of... in connection with the trade show are treated as United States direct costs. (b) Processing of... processing of customer orders and the arranging for delivery of the export property are defined in paragraph...

  17. 26 CFR 1.924(e)-1 - Activities relating to the disposition of export property.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... build a favorable image of a company or group of companies is not included in this definition of... in connection with the trade show are treated as United States direct costs. (b) Processing of... processing of customer orders and the arranging for delivery of the export property are defined in paragraph...

  18. 26 CFR 1.924(e)-1 - Activities relating to the disposition of export property.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... build a favorable image of a company or group of companies is not included in this definition of... in connection with the trade show are treated as United States direct costs. (b) Processing of... processing of customer orders and the arranging for delivery of the export property are defined in paragraph...

  19. Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea

    NASA Astrophysics Data System (ADS)

    Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei

    2018-02-01

    Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphological mapping. In this paper, an intelligent edge identification method is proposed and image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated from the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering, and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot manage the complicated uneven seafloor, and therefore a binarization method is proposed that is based on the difference between image pixel values; the appropriate threshold for image binarization is estimated according to the probability distribution of pixel value differences between two adjacent pixels in the horizontal and vertical directions, respectively. Finally, an eight-neighborhood frame is adopted to thin the binary image, connect the intermittent edges, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids use of subjective judgment, and reduces time and labor costs.
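
    A minimal NumPy sketch of the binarization idea described above: the threshold is estimated from the distribution of differences between adjacent pixels in the horizontal and vertical directions, and pixels whose gradient magnitude exceeds it are marked as edge candidates. The percentile used here is an illustrative stand-in for the paper's estimate.

        import numpy as np

        def difference_threshold(depth_gray, percentile=95.0):
            """Estimate a binarization threshold from adjacent-pixel differences."""
            img = depth_gray.astype(float)
            dh = np.abs(np.diff(img, axis=1)).ravel()   # horizontal differences
            dv = np.abs(np.diff(img, axis=0)).ravel()   # vertical differences
            return np.percentile(np.concatenate([dh, dv]), percentile)

        def edge_binarize(depth_gray, percentile=95.0):
            """Mark pixels whose gradient magnitude exceeds the estimated threshold."""
            img = depth_gray.astype(float)
            gy, gx = np.gradient(img)
            grad = np.hypot(gx, gy)
            return grad > difference_threshold(img, percentile)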

  20. Sensor and computing resource management for a small satellite

    NASA Astrophysics Data System (ADS)

    Bhatia, Abhilasha; Goehner, Kyle; Sand, John; Straub, Jeremy; Mohammad, Atif; Korvald, Christoffer; Nervold, Anders Kose

    A small satellite in a low-Earth orbit (e.g., approximately a 300 to 400 km altitude) has an orbital velocity in the range of 8.5 km/s and completes an orbit approximately every 90 minutes. For a satellite with minimal attitude control, this presents a significant challenge in obtaining multiple images of a target region. Presuming an inclination in the range of 50 to 65 degrees, a limited number of opportunities to image a given target or communicate with a given ground station are available over the course of a 24-hour period. For imaging needs (where solar illumination is required), the number of opportunities is further reduced. Given these short windows of opportunity for imaging, data transfer, and sending commands, scheduling must be optimized. In addition to the high-level scheduling performed for spacecraft operations, payload-level scheduling is also required. The mission requires that images be post-processed to maximize spatial resolution and minimize data transfer (through removing overlapping regions). The payload unit includes GPS and inertial measurement unit (IMU) hardware to aid in image alignment for this processing. The payload scheduler must, thus, split its energy and computing-cycle budgets between determining an imaging sequence (required to capture the highly-overlapping data required for super-resolution and adjacent areas required for mosaicking), processing the imagery (to perform the super-resolution and mosaicking) and preparing the data for transmission (compressing it, etc.). This paper presents an approach for satellite control, scheduling and operations that allows the cameras, GPS and IMU to be used in conjunction to acquire higher-resolution imagery of a target region.

  1. Vision based tunnel inspection using non-rigid registration

    NASA Astrophysics Data System (ADS)

    Badshah, Amir; Ullah, Shan; Shahzad, Danish

    2015-04-01

    The growing number of long tunnels across the globe has increased the need for safety measurements and inspections of tunnels. To avoid serious damage, tunnel inspection is highly recommended at regular intervals of time so that any deformations or cracks are found at the right time. While following the stringent safety and tunnel accessibility standards, conventional geodetic surveying using civil engineering techniques and other manual and mechanical methods is time consuming and disrupts routine operation. An automatic tunnel inspection by image processing techniques using non-rigid registration has been proposed. Many other image processing methods are used for image registration purposes. Most of them operate on images in the spatial domain, for example finding edges and corners by the Harris detection method. These methods are quite time consuming and fail for blurred or noisy images, among other reasons. Because such methods use image features directly, they are grouped under feature-based correlation. The other approach is featureless correlation, in which the images are converted into the frequency domain and then correlated with each other. A shift in the spatial domain corresponds to the shift in the frequency domain, but the processing is considerably faster than in the spatial domain. In the proposed method, a modified normalized phase correlation is used to find any shift between two images. As pre-processing, the tunnel images, i.e. reference and template, are divided into small patches. All corresponding patches are registered by the proposed modified normalized phase correlation. The application of the proposed algorithm yields the pixel movement of the images, and these pixel shifts are then converted into measuring units such as mm or cm. After the complete process, any shift in the tunnel at the examined points is located.
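
    A minimal NumPy sketch of plain normalized phase correlation for estimating the integer translation between a reference patch and a template patch; the paper's modifications and the conversion of pixel shifts into millimetres are not reproduced.

        import numpy as np

        def phase_correlation_shift(reference, template):
            """Estimate the integer (dy, dx) shift between two same-sized patches."""
            F1 = np.fft.fft2(reference.astype(float))
            F2 = np.fft.fft2(template.astype(float))
            cross = F1 * np.conj(F2)
            cross /= np.abs(cross) + 1e-12                 # normalised cross-power spectrum
            corr = np.fft.ifft2(cross).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            h, w = corr.shape
            # wrap shifts larger than half the patch into negative displacements
            if dy > h // 2:
                dy -= h
            if dx > w // 2:
                dx -= w
            return dy, dx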

  2. GOES-R Advanced Base Line Imager Installation

    NASA Image and Video Library

    2016-08-30

    Team members assist as a crane lifts the Advanced Base Line Imager, the primary optical instrument, for installation on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  3. GOES-R Advanced Base Line Imager Installation

    NASA Image and Video Library

    2016-08-30

    Team members assist as a crane moves the Advanced Base Line Imager, the primary optical instrument, for installation on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida, near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  4. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    PubMed

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images of the human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, but without the need for data resampling.

  5. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus would provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D, high-resolution, high-speed imaging modality, can provide information on pocket depth, gum contour, gum texture and gum recession simultaneously. A handheld, forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing on a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and the gum area was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  6. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields. However, occlusion and shadow cause the loss of feature information, which greatly affects image quality. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. LiDAR can now obtain the digital surface model (DSM) directly, and combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-buffer is applied for occlusion detection. Shadow detection can be regarded as the same problem as occlusion detection, with the sun direction taking the place of the camera. However, the Z-buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing imagery is very large, so an efficient algorithm is another challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU). We use this technology to speed up the Z-buffer algorithm and obtain a 7-fold speedup compared with the CPU. The experimental results demonstrate that the Z-buffer algorithm performs well in occlusion and shadow detection when combined with a high-density point cloud, and that the GPU speeds up the computation significantly.
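
    The following is a minimal CPU sketch of the Z-buffer occlusion test, assuming the DSM points have already been projected to pixel coordinates with known distances to the projection centre; the function name, depth tolerance and synthetic data are illustrative, and the GPU parallelization described in the paper is not shown. The same test performs shadow detection when the sun direction replaces the camera.

    import numpy as np

    def zbuffer_occlusion(u, v, depth, image_shape, tol=0.5):
        """Minimal Z-buffer occlusion test for DSM points already projected
        into image coordinates.

        u, v  : integer pixel coordinates of each DSM point in the image
        depth : distance from the projection centre to each DSM point
        tol   : depth tolerance in the same unit as `depth` (illustrative value)

        Returns a boolean array, True where a point is occluded by a nearer
        point that projects to the same pixel.
        """
        zbuf = np.full(image_shape, np.inf)
        # First pass: keep the nearest depth per pixel.
        np.minimum.at(zbuf, (v, u), depth)
        # Second pass: a point is occluded if something closer occupies its pixel.
        return depth > zbuf[v, u] + tol

    if __name__ == "__main__":
        # Three points fall on pixel (10, 10); only the nearest one stays visible.
        u = np.array([10, 10, 10, 20])
        v = np.array([10, 10, 10, 5])
        depth = np.array([50.0, 80.0, 120.0, 60.0])
        print(zbuffer_occlusion(u, v, depth, image_shape=(32, 32)))
        # -> [False  True  True False]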

  7. Retinal angiography with real-time speckle variance optical coherence tomography.

    PubMed

    Xu, Jing; Han, Sherry; Balaratnasingam, Chandrakumar; Mammo, Zaid; Wong, Kevin S K; Lee, Sieun; Cua, Michelle; Young, Mei; Kirker, Andrew; Albiani, David; Forooghian, Farzin; Mackenzie, Paul; Merkur, Andrew; Yu, Dao-Yi; Sarunic, Marinko V

    2015-10-01

    This report describes a novel, non-invasive and label-free optical imaging technique, speckle variance optical coherence tomography (svOCT), for visualising blood flow within human retinal capillary networks. The imaging platform uses a custom-built swept-source OCT system operating at a line rate of 100 kHz. Real-time processing and visualisation are implemented on a consumer-grade graphics processing unit. To investigate the quality of microvascular detail acquired with this device, we compared images of human capillary networks acquired with svOCT and fluorescein angiography. We found that the density of capillary microvasculature resolved with this svOCT device was visibly greater than with fluorescein angiography. We also found that this svOCT device had the capacity to generate en face images of distinct capillary networks that are morphologically comparable with previously published histological studies. Finally, we found that this svOCT device has the ability to non-invasively illustrate the common manifestations of diabetic retinopathy and retinal vascular occlusion. The results of this study suggest that graphics processing unit accelerated svOCT has the potential to non-invasively provide useful quantitative information about human retinal capillary networks. Therefore svOCT may have clinical and research applications for the management of retinal microvascular diseases, which are a major cause of visual morbidity worldwide. Published by the BMJ Publishing Group Limited.
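
    As a rough illustration of the intensity-based speckle variance computation that underlies svOCT (not the authors' GPU implementation), the per-pixel variance across N structural B-scans acquired at the same location can be computed as follows; the frame count and array sizes are assumptions.

    import numpy as np

    def speckle_variance(bscans: np.ndarray) -> np.ndarray:
        """Inter-frame speckle-variance image from N structural B-scans
        acquired at the same location.

        bscans : array of shape (N, depth, lateral) holding linear OCT intensity
        Returns a (depth, lateral) map; high variance indicates flow (vessels).
        """
        return bscans.var(axis=0)   # variance across the frame dimension

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        n_frames, depth, lateral = 8, 256, 512        # illustrative sizes
        static = rng.random((depth, lateral))
        frames = np.repeat(static[None, ...], n_frames, axis=0)
        # Simulate decorrelating speckle (flow) in a small region.
        frames[:, 100:120, 200:240] = rng.random((n_frames, 20, 40))
        sv = speckle_variance(frames)
        print(sv[110, 220] > sv[50, 50])   # True: flow region has higher variance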

  8. Large-Scale Image Analytics Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    High-resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Few studies demonstrate the state of the art in deriving very high resolution (VHR) land cover products, and most methods rely heavily on commercial software that is difficult to scale for large regions of study (e.g., continents to the globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes, and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agricultural Imaging Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. These data come as image tiles (a total of a quarter million image scenes with ~60 million pixels each) and have a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, ElastiCache shared memory architecture for image segmentation, Elastic MapReduce (EMR) for image feature extraction, and the memory-optimized Elastic Cloud Compute (EC2) for the learning algorithm.

  9. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
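
    A minimal sketch of the slice-per-CPU strategy is given below, using Python's multiprocessing in place of the program's actual parallel framework; the mosaic dimensions are arbitrary and a synthetic gradient stands in for the warped image data, since the camera models are not given here.

    import numpy as np
    from multiprocessing import Pool

    HEIGHT, WIDTH = 512, 2048     # mosaic size (illustrative)

    def fill_slice(bounds):
        """Worker: compute one horizontal slice of the mosaic.
        A real implementation would warp source images into this slice using the
        camera models; here a synthetic gradient stands in for the warped data."""
        row_start, row_stop = bounds
        rows = np.arange(row_start, row_stop)
        block = (rows[:, None] + np.arange(WIDTH)[None, :]).astype(np.float32)
        return row_start, block

    def build_mosaic(n_workers=4):
        edges = np.linspace(0, HEIGHT, n_workers + 1, dtype=int)
        tasks = list(zip(edges[:-1], edges[1:]))
        mosaic = np.empty((HEIGHT, WIDTH), dtype=np.float32)
        with Pool(n_workers) as pool:
            for row_start, block in pool.map(fill_slice, tasks):
                mosaic[row_start:row_start + block.shape[0]] = block
        return mosaic

    if __name__ == "__main__":
        print(build_mosaic().shape)   # (512, 2048)

    As in the program described above, the total run time of such a scheme depends on the number of workers, the per-worker speed and the cost of gathering the slices into the final mosaic.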

  10. Combining satellite imagery with forest inventory data to assess damage severity following a major blowdown event in northern Minnesota, USA

    Treesearch

    Mark D. Nelson; Sean P. Healey; W. Keith Moser; Mark H. Hansen

    2009-01-01

    Effects of a catastrophic blowdown event in northern Minnesota, USA were assessed using field inventory data, aerial sketch maps and satellite image data processed through the North American Forest Dynamics programme. Estimates were produced for forest area and net volume per unit area of live trees pre- and post-disturbance, and for changes in volume per unit area and...

  11. Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

    PubMed Central

    Izquierdo, Alberto; Villacorta, Juan José; del Val Puente, Lara; Suárez, Luis

    2016-01-01

    This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of the MEMS sensors was performed, and the beam patterns of a module based on an 8 × 8 planar array, and of several clusters of modules, were obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. PMID:27727174
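
    A frequency-domain delay-and-sum beamformer is one common way to realise the wideband FFT beamforming mentioned above; the sketch below is a plain NumPy illustration, not the authors' FPGA/GPU pipeline, and the array geometry, sampling rate and angular grid are assumed values.

    import numpy as np

    C = 343.0   # speed of sound in air, m/s

    def wideband_das_fft(signals, positions, fs, azimuths, elevations):
        """Frequency-domain delay-and-sum beamforming for a planar microphone
        array (a minimal sketch).

        signals   : (n_mics, n_samples) time-domain channels
        positions : (n_mics, 2) microphone x/y coordinates in metres (z = 0 plane)
        fs        : sampling rate in Hz
        Returns an acoustic power map of shape (len(elevations), len(azimuths)).
        """
        n_mics, n_samples = signals.shape
        spectra = np.fft.rfft(signals, axis=1)                   # (n_mics, n_bins)
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)           # (n_bins,)
        power = np.zeros((len(elevations), len(azimuths)))
        for i, el in enumerate(elevations):
            for j, az in enumerate(azimuths):
                # Look-direction unit vector projected onto the array plane.
                ux = np.cos(el) * np.cos(az)
                uy = np.cos(el) * np.sin(az)
                delays = (positions[:, 0] * ux + positions[:, 1] * uy) / C
                # Apply per-frequency steering phases and sum the channels.
                steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
                beam = (spectra * steering).sum(axis=0)
                power[i, j] = np.sum(np.abs(beam) ** 2)
        return power

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        xs, ys = np.meshgrid(np.arange(8) * 0.02, np.arange(8) * 0.02)  # 8x8, 2 cm pitch
        pos = np.column_stack([xs.ravel(), ys.ravel()])
        sig = rng.standard_normal((64, 1024))
        az = np.radians(np.arange(-60, 61, 10))
        el = np.radians(np.arange(-60, 61, 10))
        print(wideband_das_fft(sig, pos, fs=48000, azimuths=az, elevations=el).shape)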

  12. An in situ probe for on-line monitoring of cell density and viability on the basis of dark field microscopy in conjunction with image processing and supervised machine learning.

    PubMed

    Wei, Ning; You, Jia; Friehs, Karl; Flaschel, Erwin; Nattkemper, Tim Wilhelm

    2007-08-15

    Fermentation industries would benefit from on-line monitoring of important parameters describing cell growth such as cell density and viability during fermentation processes. For this purpose, an in situ probe has been developed, which utilizes a dark field illumination unit to obtain high contrast images with an integrated CCD camera. To test the probe, brewer's yeast Saccharomyces cerevisiae is chosen as the target microorganism. Images of the yeast cells in the bioreactors are captured, processed, and analyzed automatically by means of mechatronics, image processing, and machine learning. Two support vector machine based classifiers are used for separating cells from background, and for distinguishing live from dead cells afterwards. The evaluation of the in situ experiments showed strong correlation between results obtained by the probe and those by widely accepted standard methods. Thus, the in situ probe has been proved to be a feasible device for on-line monitoring of both cell density and viability with high accuracy and stability. (c) 2007 Wiley Periodicals, Inc.

  13. Grooved Terrain on Ganymede: First Results from Galileo High-Resolution Imaging

    USGS Publications Warehouse

    Pappalardo, R.T.; Head, J.W.; Collins, G.C.; Kirk, R.L.; Neukum, G.; Oberst, J.; Giese, B.; Greeley, R.; Chapman, C.R.; Helfenstein, P.; Moore, Johnnie N.; McEwen, A.; Tufts, B.R.; Senske, D.A.; Herbert, Breneman H.; Klaasen, K.

    1998-01-01

    High-resolution Galileo imaging has provided important insight into the origin and evolution of grooved terrain on Ganymede. The Uruk Sulcus target site was the first imaged at high resolution, and considerations of resolution, viewing geometry, low image compression, and complementary stereo imaging make this region extremely informative. Contrast variations in these low-incidence angle images are extreme and give the visual impression of topographic shading. However, photometric analysis shows that the scene must owe its character to albedo variations. A close correlation of albedo variations to topography is demonstrated by limited stereo coverage, allowing extrapolation of the observed brightness and topographic relationships to the rest of the imaged area. Distinct geological units are apparent across the region, and ridges and grooves are ubiquitous within these units. The stratigraphically lowest and most heavily cratered units ("lineated grooved terrain") generally show morphologies indicative of horst-and-graben-style normal faulting. The stratigraphically highest groove lanes ("parallel ridged terrain") exhibit ridges of roughly triangular cross section, suggesting that tilt-block-style normal faulting has shaped them. These extensional-tectonic models are supported by crosscutting relationships at the margins of groove lanes. Thus, a change in tectonic style with time is suggested in the Uruk Sulcus region, varying from horst and graben faulting for the oldest grooved terrain units to tilt block normal faulting for the latest units. The morphologies and geometries of some stratigraphically high units indicate that a strike-slip component of deformation has played an important role in shaping this region of grooved terrain. The most recent tectonic episode is interpreted as right-lateral transtension, with its tectonic pattern of two contemporaneous structural orientations superimposed on older units of grooved terrain. There is little direct evidence for cryovolcanic resurfacing in the Uruk Sulcus region; instead tectonism appears to be the dominant geological process that has shaped the terrain. A broad wavelength of deformation is indicated, corresponding to the Voyager-observed topography, and may be the result of ductile necking of the lithosphere, while a finer scale of deformation probably reflects faulting of the brittle near surface. The results here form a basis against which other Galileo grooved terrain observations can be compared. © 1998 Academic Press.

  14. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George [Maple Valley, WA; Afromowitz, Martin A [Mercer Island, WA; Hogle, Richard E [Olympia, WA

    2008-10-14

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions of about 4 or 8.7 microns, either directly producing images of the interior of the boiler or feeding signals to a data processing system that supplies information to the distributed control system by which the boilers are operated, so that the boilers can be run more efficiently. The data processing system includes an image pre-processing circuit in which a 2-D image formed by the video data input is captured, and includes a low-pass filter for noise filtering of the video input. It also includes an image compensation system for array compensation to correct for pixel variation, dead cells, etc., and for correcting geometric distortion. An image segmentation module receives a cleaned image from the image pre-processing circuit and separates the image of the recovery boiler interior into background, pendant tubes, and deposition. It also performs thresholding/clustering on gray scale/texture, applies morphological transforms to smooth regions, and identifies regions by connected components. An image-understanding unit receives the segmented image from the image segmentation module and matches derived regions to a 3-D model of the boiler. It derives a 3-D structure of the deposition on the pendant tubes and provides the information about deposits to the plant distributed control system for more efficient operation of the plant's pendant-tube cleaning and operating systems.

  15. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meredith, J; Conger, J; Liu, Y

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  16. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across the board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
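
    Of the four algorithms mentioned, the RX anomaly detector is the simplest to illustrate: each pixel's spectrum is scored by its Mahalanobis distance to the global background statistics. The sketch below is a plain NumPy version of this standard global RX formulation, not the authors' GPU implementation.

    import numpy as np

    def rx_anomaly(cube: np.ndarray) -> np.ndarray:
        """Global RX anomaly detector for a hyperspectral cube of shape
        (rows, cols, bands). Returns the per-pixel Mahalanobis distance
        to the global background mean and covariance."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(np.float64)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        cov_inv = np.linalg.pinv(cov)           # pseudo-inverse for robustness
        diff = X - mu
        scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return scores.reshape(rows, cols)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        cube = rng.normal(size=(64, 64, 50))
        cube[32, 32] += 5.0                      # plant a spectral anomaly
        scores = rx_anomaly(cube)
        print(np.unravel_index(scores.argmax(), scores.shape))   # -> (32, 32)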

  17. Assessment of mammographic film processor performance in a hospital and mobile screening unit.

    PubMed

    Murray, J G; Dowsett, D J; Laird, O; Ennis, J T

    1992-12-01

    In contrast to the majority of mammographic breast screening programmes, film processing at this centre occurs on site in both hospital and mobile trailer units. Initial (1989) quality control (QC) sensitometric tests revealed a large variation in film processor performance in the mobile unit. The clinical significance of these variations was assessed and acceptance limits for processor performance determined. Abnormal mammograms were used as reference material and copied using high definition 35 mm film over a range of exposure settings. The copies were then matched with QC film density variation from the mobile unit. All films were subsequently ranked for spatial and contrast resolution. Optimal values for processing time of 2 min (equivalent to film transit time 3 min and developer time 46 s) and temperature of 36 degrees C were obtained. The widespread anomaly of reporting film transit time as processing time is highlighted. Use of mammogram copies as a means of measuring the influence of film processor variation is advocated. Careful monitoring of the mobile unit film processor performance has produced stable quality comparable with the hospital based unit. The advantages of on site film processing are outlined. The addition of a sensitometric step wedge to all mammography film stock as a means of assessing image quality is recommended.

  18. The ``False Colour'' Problem

    NASA Astrophysics Data System (ADS)

    Serra, Jean

    The emergence of new data in multidimensional function lattices is studied. A typical example is the appearance of false colours when (R,G,B) images are processed. Two lattice models are specially analysed. Firstly, one considers a mixture of total and marginal orderings where the variations of some components are governed by other ones. This constraint yields the “pilot lattices”. The second model is a cylindrical polar representation in n dimensions. In this model, data that are distributed on the unit sphere of n - 1 dimensions need to be ordered. The proposed orders and lattices are specific to each image. They are obtained from a Voronoi tessellation of the unit sphere. The case of four dimensions is treated in detail and illustrated.

  19. GPU-Based High-performance Imaging for Mingantu Spectral RadioHeliograph

    NASA Astrophysics Data System (ADS)

    Mei, Ying; Wang, Feng; Wang, Wei; Chen, Linjie; Liu, Yingbo; Deng, Hui; Dai, Wei; Liu, Cuiyin; Yan, Yihua

    2018-01-01

    As a dedicated solar radio interferometer, the MingantU SpEctral RadioHeliograph (MUSER) generates massive observational data in the frequency range of 400 MHz-15 GHz. High-performance imaging is a significant aspect of MUSER’s massive data processing requirements. In this study, we implement a practical high-performance imaging pipeline for MUSER data processing. First, the specifications of the MUSER are introduced and its imaging requirements are analyzed. Referring to the most commonly used radio astronomy software, such as CASA and MIRIAD, we then implement a high-performance imaging pipeline based on Graphics Processing Unit technology with respect to the current operational status of the MUSER. A series of critical algorithms and their pseudo codes, i.e., detection of the solar disk and sky brightness, automatic centering of the solar disk and estimation of the number of iterations for clean algorithms, are presented in detail. The preliminary experimental results indicate that the proposed imaging approach significantly increases the processing performance of MUSER and generates high-quality images that can meet the requirements of MUSER data processing. Supported by the National Key Research and Development Program of China (2016YFE0100300), the Joint Research Fund in Astronomy (No. U1531132, U1631129, U1231205) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and the National Natural Science Foundation of China (Nos. 11403009 and 11463003).
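
    As an illustration of one of the listed steps, automatic centering of the solar disk can be approximated by thresholding the image, computing the intensity centroid of the disk and shifting it to the image centre. The sketch below is a simplified stand-in for the pipeline's algorithm; the threshold ratio and test image are assumptions.

    import numpy as np

    def center_solar_disk(image: np.ndarray, threshold_ratio=0.5):
        """Shift a solar image so that the disk's intensity centroid lands on
        the image centre (simplified stand-in for the pipeline's centering step;
        `threshold_ratio` is an assumed parameter)."""
        mask = image > threshold_ratio * image.max()      # crude disk detection
        rows, cols = np.nonzero(mask)
        cy, cx = rows.mean(), cols.mean()                 # disk centroid
        dy = int(round(image.shape[0] / 2 - cy))
        dx = int(round(image.shape[1] / 2 - cx))
        return np.roll(image, shift=(dy, dx), axis=(0, 1)), (dy, dx)

    if __name__ == "__main__":
        img = np.zeros((128, 128))
        yy, xx = np.mgrid[:128, :128]
        img[(yy - 40) ** 2 + (xx - 90) ** 2 < 20 ** 2] = 1.0   # off-centre disk
        centered, shift = center_solar_disk(img)
        print(shift)   # -> (24, -26)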

  20. Multifaceted free-space image distributor for optical interconnects in massively parallel processing

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.

    1999-04-01

    A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the boundaries. Architectures containing large numbers of nodes can solve this performance dilemma, although the main obstacle in designing such systems is finding solutions that guarantee efficient communication among the nodes. Exchanging data becomes a real bottleneck should all nodes be connected by a shared resource. Only optics, due to its inherent parallelism, could remove that bottleneck. Here, we explore a multifaceted free-space image distributor to be used in optical interconnects for massively parallel processing. In this paper, physical and optical models of the image distributor are examined, from the diffraction theory of light waves to optical simulations. The general features and the performance of the image distributor are also described, and the new structure of the image distributor and the corresponding simulations are discussed. Both digital simulation and experiment show that the multifaceted free-space image distribution technique is well suited to free-space optical interconnection in massively parallel processing and that the new structure of the multifaceted free-space image distributor performs better.

  1. Apparatus and method for imaging metallic objects using an array of giant magnetoresistive sensors

    DOEpatents

    Chaiken, Alison

    2000-01-01

    A portable, low-power, metallic object detector and method for providing an image of a detected metallic object. In one embodiment, the present portable low-power metallic object detector comprises an array of giant magnetoresistive (GMR) sensors. The array of GMR sensors is adapted for detecting the presence of and compiling image data of a metallic object. In this embodiment, the array of GMR sensors is arranged in a checkerboard configuration such that the axes of sensitivity of alternate GMR sensors are orthogonally oriented. An electronics portion is coupled to the array of GMR sensors. The electronics portion is adapted to receive and process the image data of the metallic object compiled by the array of GMR sensors. The embodiment also includes a display unit which is coupled to the electronics portion. The display unit is adapted to display a graphical representation of the metallic object detected by the array of GMR sensors. In so doing, a graphical representation of the detected metallic object is provided.

  2. Method for 3D noncontact measurements of cut trees package area

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.

    2001-02-01

    Progress in imaging sensors and computers creates the basis for numerous 3D imaging applications in a wide variety of manufacturing activities. Many demands for automated precise measurement exist in the wood industry. One of them is accurate volume determination for cut trees carried on a truck. The key point for volume estimation is determination of the front area of the cut-tree package. To eliminate the slow and inaccurate manual measurements currently in practice, an experimental system for automated non-contact wood measurement was developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers and original software for image processing and 3D measurement. The proposed measurement method is based on capturing a stereo pair of the front of the tree package and performing an image orthotransformation into the front plane. This technique allows the transformed image to be processed for circle-shape recognition and for calculating the circles' areas. The metric characteristics of the system are provided by a special camera calibration procedure. The paper presents the developed method of 3D measurement, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.

  3. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

    The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurements during autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically handled in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a nonreference image quality metric. The proposed fusion method reveals a high-quality image independently of faulty illumination during the image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
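
    A plausible NumPy sketch of the MGC transform and its use as a scalar focus measure is given below; the exact formulation in the paper may differ, and the combination of per-channel gradients shown here (root of the summed squared partial derivatives) is an assumption.

    import numpy as np

    def mgc(image_rgb: np.ndarray) -> np.ndarray:
        """Modulus-of-the-gradient-of-the-color-planes map: combine the
        per-channel gradients into a single grayscale image.
        (One plausible formulation; the paper's definition may differ.)"""
        gy, gx = np.gradient(image_rgb.astype(np.float64), axis=(0, 1))
        return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

    def focus_measure(image_rgb: np.ndarray) -> float:
        """Scalar focus score: mean of the MGC map (higher = sharper)."""
        return float(mgc(image_rgb).mean())

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        sharp = rng.random((128, 128, 3))
        # Crude defocus: average each pixel with its down/right neighbours.
        blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3.0
        print(focus_measure(sharp) > focus_measure(blurred))   # True

    During autofocusing, such a scalar score would be evaluated for each frame in a focus sweep and the frame with the highest score selected; in multifocus fusion, the per-pixel MGC maps decide which source image contributes each region.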

  4. Imaging-Assisted Large-Format Breast Pathology: Program Rationale and Development in a Nonprofit Health System in the United States

    PubMed Central

    Tucker, F. Lee

    2012-01-01

    Modern breast imaging, including magnetic resonance imaging, provides an increasingly clear depiction of breast cancer extent, often with suboptimal pathologic confirmation. Pathologic findings guide management decisions, and small increments in reported tumor characteristics may rationalize significant changes in therapy and staging. Pathologic techniques to grossly examine resected breast tissue have changed little during this era of improved breast imaging and still rely primarily on the techniques of gross inspection and specimen palpation. Only limited imaging information is typically conveyed to pathologists, typically in the form of wire-localization images from breast-conserving procedures. Conventional techniques of specimen dissection and section submission destroy the three-dimensional integrity of the breast anatomy and tumor distribution. These traditional methods of breast specimen examination impose unnecessary limitations on correlation with imaging studies, measurement of cancer extent, multifocality, and margin distance. Improvements in pathologic diagnosis, reporting, and correlation of breast cancer characteristics can be achieved by integrating breast imagers into the specimen examination process and the use of large-format sections which preserve local anatomy. This paper describes the successful creation of a large-format pathology program to routinely serve all patients in a busy interdisciplinary breast center associated with a community-based nonprofit health system in the United States. PMID:23316372

  5. Extraction of lead and ridge characteristics from SAR images of sea ice

    NASA Technical Reports Server (NTRS)

    Vesecky, John F.; Smith, Martha P.; Samadani, Ramin

    1990-01-01

    Image-processing techniques for extracting the characteristics of lead and pressure ridge features in SAR images of sea ice are reported. The methods are applied to a SAR image of the Beaufort Sea collected from the Seasat satellite on October 3, 1978. Estimates of lead and ridge statistics are made, e.g., lead and ridge density (number of lead or ridge pixels per unit area of image) and the distribution of lead area and orientation as well as ridge length and orientation. The information derived is useful in both ice science and polar operations for such applications as albedo and heat and momentum transfer estimates, as well as ship routing and offshore engineering.
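
    The lead or ridge density defined above (feature pixels per unit image area) reduces to a simple computation on a binary classification mask; the sketch below assumes the classification has already been done, and the pixel ground area is an illustrative value.

    import numpy as np

    def feature_density(mask: np.ndarray, pixel_area_km2: float) -> float:
        """Number of feature (lead or ridge) pixels per unit image area.

        mask           : boolean array, True where a pixel was classified as lead/ridge
        pixel_area_km2 : ground area covered by one SAR pixel (illustrative units)
        """
        total_area = mask.size * pixel_area_km2
        return mask.sum() / total_area

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        lead_mask = rng.random((1000, 1000)) < 0.03     # synthetic 3% lead coverage
        print(feature_density(lead_mask, pixel_area_km2=0.000625))  # ~48 per km^2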

  6. Parallel Computer System for 3D Visualization Stereo on GPU

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Zori, Sergii A.

    2018-03-01

    This paper proposes the organization of a parallel computer system based on Graphics Processing Units (GPUs) for 3D stereo image synthesis. The development is based on a ray tracing method modified by the authors for fast search of ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure for 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions in the GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration of the multi-threaded implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) grid on the computational speed shows the importance of their correct selection. The obtained experimental estimates can be significantly improved by new GPUs with a large number of processing cores and multiprocessors, as well as by an optimized configuration of the CUDA computing grid.

  7. Configuration of automatic exposure control on mammography units for computed radiography to match patient dose of screen film systems

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying Joseph; Huang, Weidong

    2009-02-01

    Computed radiography (CR) is considered a drop-in addition or replacement for traditional screen-film (SF) systems in digital mammography. Unlike other technologies, CR has the advantage of being compatible with existing mammography units. One of the challenges, however, is to properly configure the automatic exposure control (AEC) on existing mammography units for CR use. Unlike analogue systems, the capture and display of digital CR images are decoupled. The function of AEC changes from ensuring proper and consistent optical density of the captured image on film to balancing image quality with the patient dose needed for CR. One preference when acquiring CR images under AEC is to use the same patient dose as SF systems. The challenge is whether the existing AEC designs and calibration processes, most of them proprietary to the X-ray system manufacturers and tailored specifically for SF response properties, can be adapted for CR cassettes in order to compensate for their response and attenuation differences. This paper describes methods for configuring the AEC of three different mammography unit models to match the patient dose used for CR with that used for a KODAK MIN-R 2000 SF System. Based on phantom test results, these methods provide a dose level under AEC for the CR systems that matches the dose of SF systems. These methods can be used in clinical environments that require the acquisition of CR images under AEC at the same dose levels as those used for SF systems.

  8. Resolving human object recognition in space and time

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2014-01-01

    A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044

  9. Small Interactive Image Processing System (SMIPS) users manual

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) is designed to facilitate the acquisition, digital processing and recording of image data as well as pattern recognition in an interactive mode. Objectives of the system are ease of communication with the computer by personnel who are not expert programmers, fast response to requests for information on pictures, complete error recovery as well as simplification of future programming efforts for extension of the system. The SMIPS system is intended for operation under OS/MVT on an IBM 360/75 or 91 computer equipped with the IBM-2250 Model 1 display unit. This terminal is used as an interface between user and main computer. It has an alphanumeric keyboard, a programmed function keyboard and a light pen which are used for specification of input to the system. Output from the system is displayed on the screen as messages and pictures.

  10. The instrument control unit of SPICA SAFARI: a macro-unit to host all the digital control functionalities of the spectrometer

    NASA Astrophysics Data System (ADS)

    Di Giorgio, Anna Maria; Biondi, David; Saggin, Bortolino; Shatalina, Irina; Viterbini, Maurizio; Giusi, Giovanni; Liu, Scige J.; Cerulli-Irelli, Paquale; Van Loon, Dennis; Cara, Christophe

    2012-09-01

    We present the preliminary design of the Instrument Control Unit (ICU) of the SpicA FAR infrared Instrument (SAFARI), an imaging Fourier Transform Spectrometer (FTS) designed to give continuous wavelength coverage in both photometric and spectroscopic modes from around 34 to 210 µm. Due to the stringent requirements in terms of mass and volume, the overall SAFARI warm electronics will be composed by only two main units: Detector Control Unit and ICU. ICU is therefore a macro-unit incorporating the four digital sub-units dedicated to the control of the overall instrument functionalities: the Cooler Control Unit, the Mechanism Control Unit, the Digital processing Unit and the Power Supply Unit. Both the mechanical solution adopted to host the four sub-units and the internal electrical architecture are presented as well as the adopted redundancy approach.

  11. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography.

    PubMed

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).
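
    The tissue-class model at the heart of the correction can be illustrated with a simple threshold-based segmentation into air, soft tissue and bone, followed by substitution of nominal attenuation values; the HU thresholds and nominal values below are illustrative assumptions, not those of the paper, and the forward projection and reconstruction steps are omitted.

    import numpy as np

    # Illustrative CT-number thresholds (HU); the paper's values are not given here.
    AIR_MAX_HU = -500
    BONE_MIN_HU = 300

    def segment_tissue_classes(volume_hu: np.ndarray) -> np.ndarray:
        """Threshold-based segmentation of a reconstructed volume into
        air (0), soft tissue (1) and bone (2)."""
        labels = np.ones_like(volume_hu, dtype=np.uint8)     # default: soft tissue
        labels[volume_hu <= AIR_MAX_HU] = 0
        labels[volume_hu >= BONE_MIN_HU] = 2
        return labels

    def tissue_class_model(labels: np.ndarray) -> np.ndarray:
        """Replace each class with a nominal attenuation value (HU) so the model
        can be forward projected to fill in metal-corrupted detector data."""
        nominal_hu = np.array([-1000.0, 40.0, 700.0])        # air, soft tissue, bone
        return nominal_hu[labels]

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        vol = rng.normal(loc=40, scale=30, size=(8, 64, 64))  # mostly soft tissue
        vol[:, :8, :] = -1000                                  # air margin
        vol[:, 30:34, 30:34] = 1200                            # dense insert
        labels = segment_tissue_classes(vol)
        model = tissue_class_model(labels)
        print(np.bincount(labels.ravel()), model.dtype)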

  12. Accelerating Advanced MRI Reconstructions on GPUs

    PubMed Central

    Stone, S.S.; Haldar, J.P.; Tsao, S.C.; Hwu, W.-m.W.; Sutton, B.P.; Liang, Z.-P.

    2008-01-01

    Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA’s Quadro FX 5600. The reconstruction of a 3D image with 128³ voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%. PMID:21796230

  13. Accelerating Advanced MRI Reconstructions on GPUs.

    PubMed

    Stone, S S; Haldar, J P; Tsao, S C; Hwu, W-M W; Sutton, B P; Liang, Z-P

    2008-10-01

    Computational acceleration on graphics processing units (GPUs) can make advanced magnetic resonance imaging (MRI) reconstruction algorithms attractive in clinical settings, thereby improving the quality of MR images across a broad spectrum of applications. This paper describes the acceleration of such an algorithm on NVIDIA's Quadro FX 5600. The reconstruction of a 3D image with 128(3) voxels achieves up to 180 GFLOPS and requires just over one minute on the Quadro, while reconstruction on a quad-core CPU is twenty-one times slower. Furthermore, relative to the true image, the error exhibited by the advanced reconstruction is only 12%, while conventional reconstruction techniques incur error of 42%.

  14. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
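
    The bootstrap idea is to resample the detected list-mode events with replacement and histogram each realisation into a sinogram, from which per-bin variances (and uncertainties of any downstream statistic) can be estimated. The sketch below is a simplified CPU illustration of that idea, not the published GPU implementation; the bin counts and event stream are synthetic.

    import numpy as np

    def bootstrap_listmode(event_bin_indices: np.ndarray,
                           n_sinogram_bins: int,
                           n_realisations: int,
                           seed: int = 0):
        """Generate bootstrap realisations of a list-mode acquisition by
        resampling detected events with replacement, then histogramming each
        realisation into a sinogram."""
        rng = np.random.default_rng(seed)
        n_events = event_bin_indices.size
        sinograms = np.empty((n_realisations, n_sinogram_bins), dtype=np.uint32)
        for r in range(n_realisations):
            resampled = rng.choice(event_bin_indices, size=n_events, replace=True)
            sinograms[r] = np.bincount(resampled, minlength=n_sinogram_bins)
        return sinograms

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        events = rng.integers(0, 1000, size=200_000)     # synthetic event stream
        boots = bootstrap_listmode(events, n_sinogram_bins=1000, n_realisations=20)
        # Per-bin variance estimate across the bootstrap realisations:
        print(boots.var(axis=0).mean())

    In practice each bootstrap sinogram would be passed through the same reconstruction and analysis chain, so that the spread of any image statistic across realisations gives its uncertainty.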

  15. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    NASA Astrophysics Data System (ADS)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid-lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging, noninvasive real-time imaging with histologic resolution, GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 × 1 × 0.6 mm³, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  16. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  17. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  18. Cloud Computing for radiologists

    PubMed Central

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future. PMID:23599560

  19. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  20. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, obtained from low-cost global positioning system and inertial measurement unit sensors.

  1. 32 CFR 701.54 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Computer search is based on the total cost of the central processing unit, input-output devices, and memory... charge for office copy up to six images)—$3.50 Each additional image—$ .10 Each typewritten page—$3.50...

  2. 32 CFR 701.54 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Computer search is based on the total cost of the central processing unit, input-output devices, and memory... charge for office copy up to six images)—$3.50 Each additional image—$ .10 Each typewritten page—$3.50...

  3. Ultraviolet Communication for Medical Applications

    DTIC Science & Technology

    2015-06-01

    In the previous Phase I effort, Directed Energy Inc.’s (DEI) parent company Imaging Systems Technology (IST) demonstrated feasibility of several key...accurately model high path loss. Custom photon scatter code was rewritten for parallel execution on a graphics processing unit (GPU). The NVidia CUDA

  4. The Gemini NICI Planet-Finding Campaign: The Companion Detection Pipeline

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Close, Laird M.; Hayward, Thomas L.; Hartung, Markus; Chun, Mark; Ftaclas, Christ; Toomey, Douglas W.

    2013-12-01

    We present high-contrast image processing techniques used by the Gemini NICI Planet-Finding Campaign to detect faint companions to bright stars. The Near-Infrared Coronographic Imager (NICI) is an adaptive optics instrument installed on the 8 m Gemini South telescope, capable of angular and spectral difference imaging and specifically designed to image exoplanets. The Campaign data pipeline achieves median contrasts of 12.6 mag at 0.5″ and 14.4 mag at 1″ separation, for a sample of 45 stars (V = 4.3-13.9 mag) from the early phase of the campaign. We also present a novel approach to calculating contrast curves for companion detection based on 95% completeness in the recovery of artificial companions injected into the raw data, while accounting for the false-positive rate. We use this technique to select the image processing algorithms that are more successful at recovering faint simulated point sources. We compare our pipeline to the performance of the Locally Optimized Combination of Images (LOCI) algorithm for NICI data and do not find significant improvement with LOCI. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  5. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we obtain the best GPU performance, with a 26.3x speedup over the original CPU code.
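
    As a rough illustration of the block-parallel strategy, the hypothetical NumPy sketch below splits an image into independent 64×64 blocks and computes JPEG-LS median-edge-detector (MED) prediction residuals for each block; the adaptive context modeling, run-length mode, entropy coding, and the CUDA-specific optimizations (coalesced memory access, parallel prefix sums, asynchronous transfers) are deliberately omitted.

        import numpy as np

        def med_predict(block):
            # JPEG-LS median edge detector (MED) residuals for one block
            # (border handling simplified relative to the standard).
            h, w = block.shape
            x = block.astype(np.int32)
            pred = np.zeros_like(x)
            for i in range(h):
                for j in range(w):
                    a = x[i, j - 1] if j > 0 else 0                  # left neighbour
                    b = x[i - 1, j] if i > 0 else 0                  # upper neighbour
                    c = x[i - 1, j - 1] if i > 0 and j > 0 else 0    # upper-left neighbour
                    if c >= max(a, b):
                        pred[i, j] = min(a, b)
                    elif c <= min(a, b):
                        pred[i, j] = max(a, b)
                    else:
                        pred[i, j] = a + b - c
            return x - pred                                          # residuals to be entropy coded

        def blockwise_residuals(image, bs=64):
            # Predict each bs x bs block independently, mirroring the block-parallel
            # strategy in which each block maps to one GPU thread block.
            h, w = image.shape
            out = np.zeros_like(image, dtype=np.int32)
            for i in range(0, h, bs):
                for j in range(0, w, bs):
                    out[i:i + bs, j:j + bs] = med_predict(image[i:i + bs, j:j + bs])
            return out

        img = (np.arange(128 * 128) % 251).reshape(128, 128).astype(np.uint8)
        res = blockwise_residuals(img)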

  6. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of computation required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was also improved by the regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.
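
    The block-splitting idea can be conveyed with a toy example. The sketch below is hypothetical Python/NumPy: it substitutes a simple quadratically regularized denoising problem for the actual PROPELLER gridding/Toeplitz operators and reconstructs each reduced-FOV block independently with a few gradient-descent iterations, mirroring the block-diagonal-dominant splitting (each block could be handed to its own GPU stream or worker process).

        import numpy as np

        def reconstruct_block(y, lam=0.1, n_iter=100, step=0.2):
            # Toy iterative reconstruction of one rFOV block:
            # minimize 0.5*||x - y||^2 + 0.5*lam*||Dx||^2 by gradient descent.
            x = y.copy()
            for _ in range(n_iter):
                gx = np.zeros_like(x)
                gx[:-1] += x[:-1] - x[1:]          # finite-difference roughness gradient (rows)
                gx[1:] += x[1:] - x[:-1]
                gy = np.zeros_like(x)
                gy[:, :-1] += x[:, :-1] - x[:, 1:]  # finite-difference roughness gradient (cols)
                gy[:, 1:] += x[:, 1:] - x[:, :-1]
                x -= step * ((x - y) + lam * (gx + gy))
            return x

        def rfov_reconstruction(measured, block=64):
            # Divide the full FOV into blocks and reconstruct each independently.
            out = np.zeros_like(measured)
            h, w = measured.shape
            for i in range(0, h, block):
                for j in range(0, w, block):
                    out[i:i + block, j:j + block] = reconstruct_block(measured[i:i + block, j:j + block])
            return out

        noisy = np.random.default_rng(4).normal(0.0, 1.0, size=(256, 256))
        recon = rfov_reconstruction(noisy)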

  7. Multi sensor satellite imagers for commercial remote sensing

    NASA Astrophysics Data System (ADS)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with Panchromatic, Multi-spectral, Area and Hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as food-supply monitoring, crop-yield estimation and disaster monitoring in mind. The aim of these imagers is to achieve medium to high resolution (2.5m to 15m) spatial sampling, wide swaths (up to 45km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed and address the choice of detectors to achieve this performance. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25m GSD) and a catadioptric imager with panchromatic (2.7m GSD), multi-spectral (6 bands, 4.6m GSD), hyperspectral (400nm to 2.35μm, 200 bands, 15m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real time video view finding capabilities. The electronic units could be subdivided into the Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing, and a high-speed LVDS data link to a mass memory unit.

  8. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    Readily available temporal imaging or time-series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared with CPUs, the GPU currently has less dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance will be compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface, and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10^6 to 14.2 x 10^6 pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is characterized in terms of time per megapixel per iteration (TPMI), with units of seconds per megapixel per iteration (spmi). For the demons algorithm, our CPU implementation yielded largely invariant values of TPMI. The mean TPMIs were 0.527 spmi and 0.335 spmi for the single threading and multithreading cases, respectively, with <2% variation over the considered image data range. For GPU computing, we achieved TPMI = 0.00916 spmi with 3.7% variation, indicating optimized memory handling under CUDA. The paradigm of GPU based real-time DIR opens up a host of clinical applications for medical imaging.
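
    The TPMI figure of merit is straightforward to reproduce. The short Python sketch below evaluates it for the GPU timings quoted above; the function name is illustrative.

        def tpmi(seconds, pixels, iterations):
            # Time per megapixel per iteration, in spmi.
            return seconds / ((pixels / 1.0e6) * iterations)

        # 100 demons iterations on the smallest and largest 4DCT volumes reported above.
        print(tpmi(1.8, 2.0e6, 100))    # ~0.0090 spmi
        print(tpmi(13.5, 14.2e6, 100))  # ~0.0095 spmi, consistent with the ~0.00916 spmi GPU mean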

  9. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

    High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera. Therefore, the STEAM camera continuously generates 7.0 GB/s of raw data. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices for accelerating identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), serving as virtual biological cells, flowing through a microfluidic channel.

  10. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of such technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s). The approach is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
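
    The core pixel-SR idea, interleaving several low-rate acquisitions by their inherent subpixel offsets, can be sketched as follows. This is a hypothetical 1-D NumPy illustration that assumes the fractional offsets are already known (in the real system they arise from asynchronous sampling and are estimated during registration); it is not the authors' reconstruction pipeline.

        import numpy as np

        def pixel_sr_interleave(frames, offsets, upsample):
            # Combine low-rate 1-D line scans into one high-resolution line.
            # frames: list of (N,) arrays sampled on the same coarse grid
            # offsets: fractional subpixel shift of each frame, in coarse-pixel units
            # upsample: integer super-resolution factor
            n = len(frames[0])
            fine = np.zeros(n * upsample)
            counts = np.zeros(n * upsample)
            for frame, off in zip(frames, offsets):
                pos = (np.arange(n) + off) * upsample          # coarse samples on the fine grid
                idx = np.clip(np.round(pos).astype(int), 0, n * upsample - 1)
                np.add.at(fine, idx, frame)
                np.add.at(counts, idx, 1)
            counts[counts == 0] = 1                            # avoid division by zero in empty bins
            return fine / counts

        # Four acquisitions of the same line, each shifted by 1/4 of a coarse pixel.
        truth = np.sin(np.linspace(0, 4 * np.pi, 400))
        frames = [truth[k::4] for k in range(4)]               # decimated, offset acquisitions
        hr = pixel_sr_interleave(frames, offsets=[0.0, 0.25, 0.5, 0.75], upsample=4)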

  11. Cooperative studies between the United States of America and the People's Republic of China on applications of remote sensing to surveying and mapping

    USGS Publications Warehouse

    Lauer, Donald T.; Chu, Liangcai

    1992-01-01

    A Protocol established between the National Bureau of Surveying and Mapping, People's Republic of China (PRC) and the U.S. Geological Survey, United States of America (US), resulted in the exchange of scientific personnel, technical training, and exploration of the processing of remotely sensed data. These activities were directed toward the application of remotely sensed data to surveying and mapping. Data were processed and various products were generated for the Black Hills area in the US and the Ningxiang area of the PRC. The results of these investigations defined applicable processes in the creation of satellite image maps, land use maps, and the use of ancillary data for further map enhancements.

  12. Landslide Life-Cycle Monitoring and Failure Prediction using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Bouali, E. H. Y.; Oommen, T.; Escobar-Wolf, R. P.

    2017-12-01

    The consequences of slope instability are severe across the world: the US Geological Survey estimates that, each year, the United States spends $3.5B to repair damages caused by landslides, 25-50 deaths occur, real estate values in affected areas are reduced, productivity decreases, and natural environments are destroyed. A 2012 study by D.N. Petley found that loss of life is typically underestimated and, between 2004 and 2010, 2,620 fatal landslides caused 32,322 deaths around the world. These statistics have motivated research into landslide monitoring and forecasting. More specifically, this presentation focuses on assessing the potential for using satellite-based optical and radar imagery toward overall landslide life-cycle monitoring and prediction. Radar images from multiple satellites (ERS-1, ERS-2, ENVISAT, and COSMO-SkyMed) are processed using the Persistent Scatterer Interferometry (PSI) technique. Optical images from the Worldview-2 satellite are orthorectified and processed using the Co-registration of Optically Sensed Images and Correlation (COSI-Corr) algorithm. Both approaches process stacks of their respective images and yield ground displacement rates. Ground displacement information is used to generate `inverse-velocity vs. time' plots, a proxy used to estimate the time of landslide occurrence (slope failure), derived from the relationship between a material's time of failure and the strain rate applied to that material quantified by T. Fukuzono in 1985 and B. Voight in 1988. Successful laboratory tests have demonstrated the usefulness of `inverse-velocity vs. time' plots. This presentation will investigate the applicability of this approach with remote sensing on natural landslides in the western United States.
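
    The inverse-velocity forecast itself reduces to a linear extrapolation when the Fukuzono exponent is close to 2. The sketch below is a hypothetical Python example on a synthetic displacement-rate series (not the PSI or COSI-Corr data described): fit 1/v against time and read off the zero crossing as the predicted failure time.

        import numpy as np

        # Synthetic ground-displacement-rate series (mm/day) accelerating toward failure at day 100.
        t = np.arange(0.0, 90.0, 5.0)                 # observation times (days)
        t_fail_true = 100.0
        v = 50.0 / (t_fail_true - t)                  # velocity grows as failure approaches

        inv_v = 1.0 / v                               # inverse velocity
        slope, intercept = np.polyfit(t, inv_v, 1)    # linear Fukuzono-style trend
        t_fail_forecast = -intercept / slope          # time where 1/v reaches zero

        print(round(t_fail_forecast, 1))              # ~100.0 days for this synthetic series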

  13. Consciousness and values in the quantum universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stapp, H.P.

    1985-01-01

    Application of quantum mechanical description to neurophysiological processes appears to provide for a natural unification of the physical and humanistic sciences. The categories of thought used to represent physical and psychical processes become united, and the mechanical conception of man created by classical physics is replaced by a profoundly different quantum conception. This revised image of man allows human values to be rooted in contemporary science.

  14. Imaging Electron Spectrometer (IES) Electron Preprocessor (EPP) Design

    NASA Technical Reports Server (NTRS)

    Fennell, J. F.; Osborn, J. V.; Christensen, John L. (Technical Monitor)

    2001-01-01

    The Aerospace Corporation developed the Electron PreProcessor (EPP) to support the Imaging Electron Spectrometer (IES) that is part of the RAPID experiment on the ESA/NASA CLUSTER mission. The purpose of the EPP is to collect raw data from the IES and perform processing and data compression on it before transferring it to the RAPID microprocessor system for formatting and transmission to the CLUSTER satellite data system. The report provides a short history of the RAPID and CLUSTER programs and describes the EPP design. Four EPP units were fabricated, tested, and delivered for the original CLUSTER program. These were destroyed during a launch failure. Four more EPP units were delivered for the CLUSTER II program. These were successfully launched and are operating nominally on orbit.

  15. A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.

    PubMed

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets and the changes in magnetic flux distribution are detected by the 1-D array of the Hall sensor array setup. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit before the real-time images are displayed on a netbook. Signal processing application software was developed for the 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are later used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of square, round and triangular specimens are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also illustrated to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging in identifying various ferromagnetic materials.

  16. A Mobile Ferromagnetic Shape Detection Sensor Using a Hall Sensor Array and Magnetic Imaging

    PubMed Central

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a Mobile Hall Sensor Array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the Mobile Hall Sensor Array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets and the changes in magnetic flux distribution are detected by the 1-D array of the Hall sensor array setup. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit before the real-time images are displayed on a netbook. Signal processing application software was developed for the 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are later used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of square, round and triangular specimens are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also illustrated to prove the functionality of the Mobile Hall Sensor Array system for actual shape detection. The results prove that the Mobile Hall Sensor Array system is able to perform magnetic imaging in identifying various ferromagnetic materials. PMID:22346653

  17. Hyper-Spectral Synthesis of Active OB Stars Using GLaDoS

    NASA Astrophysics Data System (ADS)

    Hill, N. R.; Townsend, R. H. D.

    2016-11-01

    In recent years there has been considerable interest in using graphics processing units (GPUs) to perform scientific computations that have traditionally been handled by central processing units (CPUs). However, there is one area where the scientific potential of GPUs has been overlooked: computer graphics, the task they were originally designed for. Here we introduce GLaDoS, a hyper-spectral code which leverages the graphics capabilities of GPUs to synthesize spatially and spectrally resolved images of complex stellar systems. We demonstrate how GLaDoS can be applied to calculate observables for various classes of stars, including systems with inhomogeneous surface temperatures and contact binaries.

  18. 37 CFR 201.31 - Procedures for copyright restoration in the United States for certain motion pictures and their...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sounds, images, or both, that are being transmitted, is ‘fixed’ for purposes of this title if a fixation... processing of Statements of Intent. (f) Effective date of restoration of copyright protection. (1) Potential...

  19. 37 CFR 201.31 - Procedures for copyright restoration in the United States for certain motion pictures and their...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that... not requiring a fee for the processing of Statements of Intent. (f) Effective date of restoration of...

  20. 37 CFR 201.31 - Procedures for copyright restoration in the United States for certain motion pictures and their...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that... not requiring a fee for the processing of Statements of Intent. (f) Effective date of restoration of...

  1. 37 CFR 201.31 - Procedures for copyright restoration in the United States for certain motion pictures and their...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that... not requiring a fee for the processing of Statements of Intent. (f) Effective date of restoration of...

  2. 37 CFR 201.31 - Procedures for copyright restoration in the United States for certain motion pictures and their...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that... not requiring a fee for the processing of Statements of Intent. (f) Effective date of restoration of...

  3. GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration.

    PubMed

    Sharp, G C; Kandasamy, N; Singh, H; Folkert, M

    2007-10-07

    This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm, and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup--up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration.

  4. A landsat data tiling and compositing approach optimized for change detection in the conterminous United States

    USGS Publications Warehouse

    Nelson, Kurtis; Steinwand, Daniel R.

    2015-01-01

    Annual disturbance maps are produced by the LANDFIRE program across the conterminous United States (CONUS). Existing LANDFIRE disturbance data from 1999 to 2010 are available and current efforts will produce disturbance data through 2012. A tiling and compositing approach was developed to produce bi-annual images optimized for change detection. A tiled grid of 10,000 × 10,000 30 m pixels was defined for CONUS and adjusted to consolidate smaller tiles along national borders, resulting in 98 non-overlapping tiles. Data from Landsat-5,-7, and -8 were re-projected to the tile extents, masked to remove clouds, shadows, water, and snow/ice, then composited using a cosine similarity approach. The resultant images were used in a change detection algorithm to determine areas of vegetation change. This approach enabled more efficient processing compared to using single Landsat scenes, by taking advantage of overlap between adjacent paths, and allowed an automated system to be developed for the entire process.
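
    The abstract does not spell out the compositing rule, so the following is only a hedged NumPy sketch of one plausible reading: for each pixel, keep the cloud-free observation whose spectrum is most cosine-similar to that pixel's median spectrum across the valid dates. The array names and the selection criterion are assumptions for illustration.

        import numpy as np

        def cosine_composite(stack, valid):
            # stack: (dates, bands, rows, cols) reflectance array
            # valid: (dates, rows, cols) boolean mask (False for cloud/shadow/water/snow)
            d, b, r, c = stack.shape
            masked = np.where(valid[:, None], stack, np.nan)
            reference = np.nanmedian(masked, axis=0)                    # per-pixel median spectrum
            num = np.nansum(masked * reference[None], axis=1)           # per-date dot products
            denom = (np.sqrt(np.nansum(masked ** 2, axis=1)) *
                     np.sqrt(np.nansum(reference ** 2, axis=0))[None] + 1e-12)
            sim = np.where(valid, num / denom, -np.inf)                 # cosine similarity per date
            best = np.argmax(sim, axis=0)                               # most representative valid date
            rows, cols = np.meshgrid(np.arange(r), np.arange(c), indexing="ij")
            return stack[best, :, rows, cols].transpose(2, 0, 1)        # (bands, rows, cols) composite

        rng = np.random.default_rng(5)
        stack = rng.uniform(0.0, 0.4, size=(8, 6, 100, 100))            # 8 dates, 6 bands
        valid = rng.random((8, 100, 100)) > 0.2                         # ~80% clear observations
        composite = cosine_composite(stack, valid)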

  5. An Integrated System for Superharmonic Contrast-Enhanced Ultrasound Imaging: Design and Intravascular Phantom Imaging Study.

    PubMed

    Li, Yang; Ma, Jianguo; Martin, K Heath; Yu, Mingyue; Ma, Teng; Dayton, Paul A; Jiang, Xiaoning; Shung, K Kirk; Zhou, Qifa

    2016-09-01

    Superharmonic contrast-enhanced ultrasound imaging, also called acoustic angiography, has previously been used for the imaging of microvasculature. This approach excites microbubble contrast agents near their resonance frequency and receives echoes at nonoverlapping superharmonic bandwidths. No integrated system currently exists that can fully support this application. To fulfill this need, an integrated dual-channel transmit/receive system for superharmonic imaging was designed, built, and characterized experimentally. The system was uniquely designed for superharmonic imaging and high-resolution B-mode imaging. A complete ultrasound system including a pulse generator, a data acquisition unit, and a signal processing unit was integrated into a single package. The system was controlled by a field-programmable gate array, on which multiple user-defined modes were implemented. A dual-frequency (6 MHz/35 MHz), dual-element intravascular ultrasound transducer was designed and used for imaging. The system successfully obtained high-resolution B-mode images of a coronary artery ex vivo with a 45-dB dynamic range. The system was capable of acquiring in vitro superharmonic images of a vasa-vasorum-mimicking phantom with 30-dB contrast. It could detect a contrast-agent-filled, tissue-mimicking tube of 200 μm diameter. For the first time, high-resolution B-mode images and superharmonic images were obtained in an intravascular phantom, made possible by the dedicated integrated system proposed. The system greatly reduced the cost and complexity of superharmonic imaging intended for preclinical studies. Significance: The system showed promise for high-contrast intravascular microvascular imaging, which may have significant importance in the assessment of the vasa vasorum associated with atherosclerotic plaques.

  6. Nature, distribution, and origin of Titan’s Undifferentiated Plains

    USGS Publications Warehouse

    Lopes, Rosaly; Malaska, M. J.; Solomonidou, A.; Le Gall, A.; Janssen, M.A.; Neish, Catherine D.; Turtle, E.P.; Birch, S. P. D.; Hayes, A.G.; Radebaugh, J.; Coustenis, A.; Schoenfeld, A.; Stiles, B.W.; Kirk, Randolph L.; Mitchell, K.L.; Stofan, E.R.; Lawrence, K. J.

    2016-01-01

    The Undifferentiated Plains on Titan, first mapped by Lopes et al. (Lopes, R.M.C. et al., 2010. Icarus, 205, 540–588), are vast expanses of terrains that appear radar-dark and fairly uniform in Cassini Synthetic Aperture Radar (SAR) images. As a result, these terrains are often referred to as “blandlands”. While the interpretation of several other geologic units on Titan – such as dunes, lakes, and well-preserved impact craters – has been relatively straightforward, the origin of the Undifferentiated Plains has remained elusive. SAR images show that these “blandlands” are mostly found at mid-latitudes and appear relatively featureless at radar wavelengths, with no major topographic features. Their gradational boundaries and paucity of recognizable features in SAR data make geologic interpretation particularly challenging. We have mapped the distribution of these terrains using SAR swaths up to flyby T92 (July 2013), which cover >50% of Titan’s surface. We compared SAR images with other data sets where available, including topography derived from the SARTopo method and stereo DEMs, the response from RADAR radiometry, hyperspectral imaging data from Cassini’s Visual and Infrared Mapping Spectrometer (VIMS), and near infrared imaging from the Imaging Science Subsystem (ISS). We examined and evaluated different formation mechanisms, including (i) cryovolcanic origin, consisting of overlapping flows of low relief or (ii) sedimentary origins, resulting from fluvial/lacustrine or aeolian deposition, or accumulation of photolysis products created in the atmosphere. Our analysis indicates that the Undifferentiated Plains unit is consistent with a composition predominantly containing organic rather than icy materials and formed by depositional and/or sedimentary processes. We conclude that aeolian processes played a major part in the formation of the Undifferentiated Plains; however, other processes (fluvial, deposition of photolysis products) are likely to have contributed, possibly in differing proportions depending on location.

  7. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  8. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  9. THz near-field spectral encoding imaging using a rainbow metasurface.

    PubMed

    Lee, Kanghee; Choi, Hyun Joo; Son, Jaehyeon; Park, Hyun-Sung; Ahn, Jaewook; Min, Bumki

    2015-09-24

    We demonstrate a fast image acquisition technique in the terahertz range via spectral encoding using a metasurface. The metasurface is composed of spatially varying units of mesh filters that exhibit bandpass features. Each mesh filter is arranged such that the centre frequencies of the mesh filters are proportional to their position within the metasurface, similar to a rainbow. For imaging, the object is placed in front of the rainbow metasurface, and the image is reconstructed by measuring the transmitted broadband THz pulses through both the metasurface and the object. The 1D image information regarding the object is linearly mapped into the spectrum of the transmitted wave of the rainbow metasurface. Thus, 2D images can be successfully reconstructed using simple 1D data acquisition processes.
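
    A toy model of the spectral-encoding readout is sketched below (hypothetical Python/NumPy, assuming an idealized linear centre-frequency-to-position mapping and Gaussian mesh-filter passbands): the 1-D object profile is recovered simply by sampling the transmitted spectrum at each filter's centre frequency. The numbers are illustrative, not taken from the paper.

        import numpy as np

        # Hypothetical linear mapping: mesh-filter centre frequency (THz) vs. position (mm).
        positions = np.linspace(0.0, 10.0, 50)                 # 50 mesh-filter units
        centre_freqs = 0.3 + 0.02 * positions                  # 0.3-0.5 THz "rainbow"

        # Object transmission profile along the metasurface (1 = open, 0 = opaque).
        profile = np.ones_like(positions)
        profile[20:30] = 0.1

        # Transmitted spectrum: each filter passes its own narrow band, scaled by the object.
        freqs = np.linspace(0.25, 0.55, 1000)
        bandwidth = 0.004
        spectrum = sum(p * np.exp(-((freqs - f0) / bandwidth) ** 2)
                       for p, f0 in zip(profile, centre_freqs))

        # Reconstruction: read the spectrum back at each filter's centre frequency.
        recovered = np.interp(centre_freqs, freqs, spectrum)
        recovered /= recovered.max()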

  10. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
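
    The two operations named above, window width/level adjustment and unsharp masking, are easy to express as chainable operators, which is essentially what an iconic dataflow represents. The sketch below is a minimal, hypothetical NumPy version of such a two-node pipeline; it does not reproduce the VIVA-style visual environment itself.

        import numpy as np

        def window_level(img, window, level):
            # Map the intensity range [level - window/2, level + window/2] to [0, 255].
            lo, hi = level - window / 2.0, level + window / 2.0
            out = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
            return (out * 255).astype(np.uint8)

        def unsharp_mask(img, amount=1.0, size=5):
            # Sharpen by adding back the difference from a box-blurred copy.
            f = img.astype(np.float64)
            pad = size // 2
            padded = np.pad(f, pad, mode="edge")
            blur = np.zeros_like(f)
            for di in range(size):
                for dj in range(size):
                    blur += padded[di:di + f.shape[0], dj:dj + f.shape[1]]
            blur /= size * size
            return np.clip(f + amount * (f - blur), 0, 255)

        # A two-node "dataflow": window/level adjustment feeding an unsharp-mask node.
        pipeline = [lambda x: window_level(x, window=400, level=50), unsharp_mask]
        ct_slice = np.random.default_rng(1).integers(-200, 300, size=(256, 256)).astype(np.float64)
        result = ct_slice
        for node in pipeline:
            result = node(result)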

  11. Old Fire/Grand Prix Fire, California

    NASA Image and Video Library

    2003-11-19

    On November 18, 2003, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite acquired this image of the Old Fire/Grand Prix fire east of Los Angeles. The image is being processed by NASA's Wildfire Response Team and will be sent to the United States Department of Agriculture's Forest Service Remote Sensing Applications Center (RSAC) which provides interpretation services to Burned Area Emergency Response (BAER) teams to assist in mapping the severity of the burned areas. The image combines data from the visible and infrared wavelength regions to highlight the burned areas. http://photojournal.jpl.nasa.gov/catalog/PIA04879

  12. Positron Emission Tomography (PET)

    DOE R&D Accomplishments Database

    Welch, M. J.

    1990-01-01

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET.

  13. SU-E-I-37: Low-Dose Real-Time Region-Of-Interest X-Ray Fluoroscopic Imaging with a GPU-Accelerated Spatially Different Bilateral Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, H; Lee, J; Pua, R

    2014-06-01

    Purpose: The purpose of our study is to reduce imaging radiation dose while maintaining image quality in the region of interest (ROI) in X-ray fluoroscopy. A low-dose real-time ROI fluoroscopic imaging technique which includes graphics-processing-unit- (GPU-) accelerated image processing for brightness compensation and noise filtering was developed in this study. Methods: In our ROI fluoroscopic imaging, a copper filter is placed in front of the X-ray tube. The filter contains a round aperture to reduce the radiation dose outside the aperture. To equalize the brightness difference between inner and outer ROI regions, brightness compensation was performed by use of a simple weighting method that applies selectively to the inner ROI, the outer ROI, and the boundary zone. Bilateral filtering was applied to the images to reduce the relatively high noise in the outer ROI images. To speed up the calculation of our technique for real-time application, GPU acceleration was applied to the image processing algorithm. We performed a dosimetric measurement using an ion-chamber dosimeter to evaluate the amount of radiation dose reduction. The reduction of calculation time compared to a CPU-only computation was also measured, and the assessment of image quality in terms of image noise and spatial resolution was conducted. Results: More than 80% of the dose was reduced by use of the ROI filter. The reduction rate depended on the thickness of the filter and the size of the ROI aperture. The image noise outside the ROI was remarkably reduced by the bilateral filtering technique. The computation time for processing each frame image was reduced from 3.43 seconds with a single CPU to 9.85 milliseconds with GPU acceleration. Conclusion: The proposed technique for X-ray fluoroscopy can substantially reduce imaging radiation dose to the patient while maintaining image quality, particularly in the ROI, in real time.
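
    The brightness-compensation step can be sketched as a spatial gain map. The code below is a hypothetical NumPy illustration assuming a circular ROI, a measured attenuation ratio for the copper filter, and a smooth ramp across the boundary zone; the bilateral noise filtering and the GPU acceleration described above are not reproduced.

        import numpy as np

        def roi_brightness_compensation(frame, center, radius, attenuation, blend_width=10.0):
            # attenuation: mean ratio of outer-region to inner-region signal caused by the
            # copper ROI filter (hypothetical, e.g. measured from a flat-field acquisition).
            h, w = frame.shape
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - center[0], xx - center[1])
            # Gain: 1 inside the ROI, 1/attenuation outside, smooth ramp over the boundary zone.
            outer_gain = 1.0 / attenuation
            blend = np.clip((r - radius) / blend_width, 0.0, 1.0)
            gain = 1.0 + blend * (outer_gain - 1.0)
            return frame * gain

        frame = np.random.default_rng(2).uniform(0.0, 1.0, size=(512, 512))
        compensated = roi_brightness_compensation(frame, center=(256, 256), radius=120,
                                                  attenuation=0.2)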

  14. Toshiba TDF-500 High Resolution Viewing And Analysis System

    NASA Astrophysics Data System (ADS)

    Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.

    1988-06-01

    A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40Hz frame and 80Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewer and array processors' instantaneous high memory bandwidth requirement, an ultra fast memory system is used. This memory system has a bandwidth capability of 400MB/sec and a total capacity of 256MB. This bandwidth is more than adequate to support several high resolution CRT's and also the fast processing unit. This fully integrated approach allows effective real time image processing. The integrated design of viewing system, memory system and array processor are key to the imaging system. It is the intention to describe the architecture of the image system in this paper.

  15. Hardware Implementation of a Bilateral Subtraction Filter

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven

    2009-01-01

    A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9 × 9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9 × 9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts: (a) an image pixel pipeline with a 9 × 9-pixel window generator; (b) an array of processing elements; (c) an adder tree; (d) a smoothing-and-delaying unit; and (e) a subtraction unit. After each 9 × 9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients for the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value-weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of products divided by the sum of weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3 × 3 subwindow centered in the 9 × 9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3 × 3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
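
    The description above is concrete enough for a slow reference implementation. The NumPy sketch below follows it directly: a 9 × 9 bilateral weighted average (Gaussian spatial weights per window position times a Gaussian of the absolute difference to the central pixel), a 3 × 3 box smoothing of the input, and a final subtraction. The sigma values are assumptions; the FPGA lookup tables, adder tree, and pipelining are replaced by plain loops.

        import numpy as np

        def bilateral_subtraction(img, sigma_s=3.0, sigma_r=20.0):
            f = img.astype(np.float64)
            h, w = f.shape
            padded = np.pad(f, 4, mode="edge")                               # 9 x 9 window -> pad by 4
            dy, dx = np.mgrid[-4:5, -4:5]
            spatial = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))      # per-position weights (the LUTs)
            bilateral = np.zeros_like(f)
            for i in range(h):
                for j in range(w):
                    win = padded[i:i + 9, j:j + 9]
                    rangew = np.exp(-(np.abs(win - f[i, j]) ** 2) / (2 * sigma_r ** 2))
                    wgt = spatial * rangew
                    bilateral[i, j] = np.sum(wgt * win) / np.sum(wgt)        # adder tree + divider
            # 3 x 3 box smoothing of the input, then the final subtraction.
            p1 = np.pad(f, 1, mode="edge")
            smooth = sum(p1[di:di + h, dj:dj + w] for di in range(3) for dj in range(3)) / 9.0
            return smooth - bilateral

        out = bilateral_subtraction(np.random.default_rng(3).uniform(0, 255, size=(64, 64)))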

  16. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (μm) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images require large amounts of storage, thereby necessitating the use of a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit these images efficiently. In addition, recent studies have raised concerns about low-dose mammography risk in high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  17. Programmability in AIPS++

    NASA Technical Reports Server (NTRS)

    Hjellming, R. M.

    1992-01-01

    AIPS++ is an Astronomical Information Processing System being designed and implemented by an international consortium of NRAO and six other radio astronomy institutions in Australia, India, the Netherlands, the United Kingdom, Canada, and the USA. AIPS++ is intended to replace the functionality of AIPS, to be more easily programmable, and will be implemented in C++ using object-oriented techniques. Programmability in AIPS++ is planned at three levels. The first level will be that of a command-line interpreter with characteristics similar to IDL and PV-Wave, but with an intensive set of operations appropriate to telescope data handling, image formation, and image processing. The second level will be in C++ with extensive use of class libraries for both basic operations and advanced applications. The third level will allow input and output of data between external FORTRAN programs and AIPS++ telescope and image databases. In addition to summarizing the above programmability characteristics, this talk will give an overview of the classes currently being designed for telescope data calibration and editing, image formation, and the 'toolkit' of mathematical 'objects' that will perform most of the processing in AIPS++.

  18. Development of a hybrid image processing algorithm for automatic evaluation of intramuscular fat content in beef M. longissimus dorsi.

    PubMed

    Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul

    2008-12-01

    An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from structural features of intermuscular fat surrounding the muscle, a three-step image processing algorithm was developed, i.e., bilateral filtering for noise removal, kernel fuzzy c-means (KFCM) clustering for segmentation, and vector confidence connected and flood fill for IMF extraction. The technique of bilateral filtering was first applied to reduce the noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image by using the techniques of vector confidence connected and flood filling. The performance of the algorithm developed was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P<0.05). Five IMF features were very significantly correlated with the fat content (P<0.001), including count densities of middle (CDMiddle) and large (CDLarge) fat particles, area densities of middle and large fat particles, and total fat area per unit LD area. The highest coefficient was 0.852 for CDLarge.
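
    The segmentation step above uses kernel fuzzy c-means; the hypothetical NumPy sketch below shows plain (non-kernel) fuzzy c-means on pixel intensities, which conveys the same soft three-class lean/fat/background assignment. The bilateral pre-filter and the vector-confidence-connected/flood-fill extraction are omitted, and the toy intensities are invented.

        import numpy as np

        def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
            # Plain fuzzy c-means on a 1-D feature vector (pixel intensities).
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)                  # random initial memberships
            for _ in range(n_iter):
                um = u ** m
                centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
                d = np.abs(x[:, None] - centers[None]) + 1e-9  # distances to cluster centers
                u = 1.0 / (d ** (2.0 / (m - 1.0)))             # standard FCM membership update
                u /= u.sum(axis=1, keepdims=True)
            return centers, u

        # Soft segmentation of toy beef-image intensities into background, lean and fat classes.
        image = np.concatenate([np.full(500, 10.0),            # dark background
                                np.full(800, 90.0),            # lean
                                np.full(200, 200.0)])          # fat (bright)
        image += np.random.default_rng(1).normal(0.0, 5.0, image.size)
        centers, memberships = fuzzy_c_means(image, n_clusters=3)
        labels = memberships.argmax(axis=1)                    # hard labels if needed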

  19. FAST: framework for heterogeneous medical image computing and visualization.

    PubMed

    Smistad, Erik; Bozorgi, Mohammadmehdi; Lindseth, Frank

    2015-11-01

    Computer systems are becoming increasingly heterogeneous in the sense that they consist of different processors, such as multi-core CPUs and graphic processing units. As the amount of medical image data increases, it is crucial to exploit the computational power of these processors. However, this is currently difficult due to several factors, such as driver errors, processor differences, and the need for low-level memory handling. This paper presents a novel FrAmework for heterogeneouS medical image compuTing and visualization (FAST). The framework aims to make it easier to simultaneously process and visualize medical images efficiently on heterogeneous systems. FAST uses common image processing programming paradigms and hides the details of memory handling from the user, while enabling the use of all processors and cores on a system. The framework is open-source, cross-platform and available online. Code examples and performance measurements are presented to show the simplicity and efficiency of FAST. The results are compared to the insight toolkit (ITK) and the visualization toolkit (VTK) and show that the presented framework is faster with up to 20 times speedup on several common medical imaging algorithms. FAST enables efficient medical image computing and visualization on heterogeneous systems. Code examples and performance evaluations have demonstrated that the toolkit is both easy to use and performs better than existing frameworks, such as ITK and VTK.

  20. City of Flagstaff Project: Ground Water Resource Evaluation, Remote Sensing Component

    USGS Publications Warehouse

    Chavez, Pat S.; Velasco, Miguel G.; Bowell, Jo-Ann; Sides, Stuart C.; Gonzalez, Rosendo R.; Soltesz, Deborah L.

    1996-01-01

    Many regions, cities, and towns in the Western United States need new or expanded water resources because of both population growth and increased development. Any tools or data that can help in the evaluation of an area's potential water resources must be considered for this increasingly critical need. Remotely sensed satellite images and subsequent digital image processing have been under-utilized in ground water resource evaluation and exploration. Satellite images can be helpful in detecting and mapping an area's regional structural patterns, including major fracture and fault systems, two important geologic settings for an area's surface to ground water relations. Within the United States Geological Survey's (USGS) Flagstaff Field Center, expertise and capabilities in remote sensing and digital image processing have been developed over the past 25 years through various programs. For the City of Flagstaff project, this expertise and these capabilities were combined with traditional geologic field mapping to help evaluate ground water resources in the Flagstaff area. Various enhancement and manipulation procedures were applied to the digital satellite images; the results, in both digital and hardcopy format, were used for field mapping and analyzing the regional structure. Relative to surface sampling, remotely sensed satellite and airborne images have improved spatial coverage that can help study, map, and monitor the earth surface at local and/or regional scales. Advantages offered by remotely sensed satellite image data include: 1. a synoptic/regional view compared to both aerial photographs and ground sampling, 2. cost effectiveness, 3. high spatial resolution and coverage compared to ground sampling, and 4. relatively high temporal coverage on a long term basis. Remotely sensed images contain both spectral and spatial information. The spectral information provides various properties and characteristics about the surface cover at a given location or pixel (that is, vegetation and/or soil type). The spatial information gives the distribution, variation, and topographic relief of the cover types from pixel to pixel. Therefore, the main characteristics that determine a pixel's brightness/reflectance and, consequently, the digital number (DN) assigned to the pixel, are the physical properties of the surface and near surface, the cover type, and the topographic slope. In this application, the ability to detect and map lineaments, especially those related to fractures and faults, is critical. Therefore, the extraction of spatial information from the digital images was of prime interest in this project. The spatial information varies among the different spectral bands available; in particular, a near infrared spectral band is better than a visible band when extracting spatial information in highly vegetated areas. In this study, both visible and near infrared bands were analyzed and used to extract the desired spatial information from the images. The wide swath coverage of remotely sensed satellite digital images makes them ideal for regional analysis and mapping. Since locating and mapping highly fractured and faulted areas is a major requirement for ground water resource evaluation and exploration this aspect of satellite images was considered critical; it allowed us to stand back (actually up about 440 miles), look at, and map the regional structural setting of the area. 
    The main focus of the remote sensing and digital image processing component of this project was to use both remotely sensed digital satellite images and a Digital Elevation Model (DEM) to extract spatial information related to the structural and topographic patterns in the area. The data types used were digital satellite images collected by the United States' Landsat Thematic Mapper (TM) and French Systeme Probatoire d'Observation de la Terre (SPOT) imaging systems, along with a DEM of the Flagstaff region. The USGS Mini Image Processing Sy
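
    As a hypothetical illustration of the lineament-oriented spatial processing described above (it is not the USGS Mini Image Processing System workflow), the following sketch enhances linear features in a single near-infrared band with directional gradient filters; the synthetic input array stands in for a TM or SPOT band.

    ```python
    # Minimal sketch: enhance linear features (potential fracture/fault
    # lineaments) in a near-infrared band using directional gradient filters.
    # The input array is a hypothetical DN grid, not real TM/SPOT data.
    import numpy as np
    from scipy import ndimage

    def enhance_lineaments(band_dn):
        """Return an edge-magnitude image that highlights linear features."""
        band = ndimage.gaussian_filter(band_dn.astype(float), sigma=1.0)
        gx = ndimage.sobel(band, axis=1)          # horizontal gradient
        gy = ndimage.sobel(band, axis=0)          # vertical gradient
        magnitude = np.hypot(gx, gy)              # orientation-independent edge strength
        lo, hi = np.percentile(magnitude, (2, 98))
        return np.clip((magnitude - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

    nir = np.random.randint(0, 255, size=(512, 512))   # stand-in for a NIR band
    edges = enhance_lineaments(nir)
    ```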

  1. Live imaging of developmental processes in a living meristem of Davidia involucrata (Nyssaceae)

    PubMed Central

    Jerominek, Markus; Bull-Hereñu, Kester; Arndt, Melanie; Claßen-Bockhoff, Regine

    2014-01-01

    Morphogenesis in plants is usually reconstructed by scanning electron microscopy and histology of meristematic structures. These techniques are destructive and require many samples to obtain a consecutive series of states. Unfortunately, using this methodology the absolute timing of growth and complete relative initiation of organs remain obscure. To overcome this limitation, an in vivo observational method based on Epi-Illumination Light Microscopy (ELM) was developed and tested with a male inflorescence meristem (floral unit) of the handkerchief tree Davidia involucrata Baill. (Nyssaceae). We asked whether the most basal flowers of this floral unit arise in a basipetal sequence or, alternatively, are delayed in their development. The growing meristem was observed for 30 days, the longest live observation of a meristem achieved to date. The sequence of primordium initiation indicates a later initiation of the most basal flowers and not earlier or simultaneously as SEM images could suggest. D. involucrata exemplarily shows that live-ELM gives new insights into developmental processes of plants. In addition to morphogenetic questions such as the transition from vegetative to reproductive meristems or the absolute timing of ontogenetic processes, this method may also help to quantify cellular growth processes in the context of molecular physiology and developmental genetics studies. PMID:25431576

  2. Live imaging of developmental processes in a living meristem of Davidia involucrata (Nyssaceae).

    PubMed

    Jerominek, Markus; Bull-Hereñu, Kester; Arndt, Melanie; Claßen-Bockhoff, Regine

    2014-01-01

    Morphogenesis in plants is usually reconstructed by scanning electron microscopy and histology of meristematic structures. These techniques are destructive and require many samples to obtain a consecutive series of states. Unfortunately, using this methodology the absolute timing of growth and complete relative initiation of organs remain obscure. To overcome this limitation, an in vivo observational method based on Epi-Illumination Light Microscopy (ELM) was developed and tested with a male inflorescence meristem (floral unit) of the handkerchief tree Davidia involucrata Baill. (Nyssaceae). We asked whether the most basal flowers of this floral unit arise in a basipetal sequence or, alternatively, are delayed in their development. The growing meristem was observed for 30 days, the longest live observation of a meristem achieved to date. The sequence of primordium initiation indicates a later initiation of the most basal flowers and not earlier or simultaneously as SEM images could suggest. D. involucrata exemplarily shows that live-ELM gives new insights into developmental processes of plants. In addition to morphogenetic questions such as the transition from vegetative to reproductive meristems or the absolute timing of ontogenetic processes, this method may also help to quantify cellular growth processes in the context of molecular physiology and developmental genetics studies.

  3. Accelerating Fibre Orientation Estimation from Diffusion Weighted Magnetic Resonance Imaging Using GPUs

    PubMed Central

    Hernández, Moisés; Guerrero, Ginés D.; Cecilia, José M.; García, José M.; Inuggi, Alberto; Jbabdi, Saad; Behrens, Timothy E. J.; Sotiropoulos, Stamatios N.

    2013-01-01

    With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute simultaneously thousands of light-weight processes. In this study, we propose and implement a parallel GPU-based design of a popular method that is used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity, non-invasively and in-vivo. We parallelise the Bayesian inference framework for the ball & stick model, as it is implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude, when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation. PMID:23658616
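
    A rough sense of why this problem parallelizes so well can be had from the sketch below: the per-voxel Markov chains are independent, so thousands can be advanced in lockstep. This is a simplified, vectorized NumPy stand-in with a hypothetical one-parameter Gaussian model, not the FSL ball & stick model or the authors' CUDA code.

    ```python
    # Sketch: independent random-walk Metropolis chains, one per voxel, advanced
    # in lockstep. The independence of the chains is what makes the Bayesian
    # fitting embarrassingly parallel on a GPU; NumPy vectorization stands in
    # for GPU threads here.
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_iter, sigma = 10_000, 2_000, 0.1

    # Hypothetical per-voxel data: noisy observations of a scalar parameter.
    true_theta = rng.uniform(0.0, 1.0, n_voxels)
    data = true_theta + rng.normal(0.0, sigma, size=(20, n_voxels))

    def log_post(theta):
        # Gaussian likelihood, flat prior on [0, 1] (simplified stand-in model).
        ll = -0.5 * np.sum((data - theta) ** 2, axis=0) / sigma**2
        return np.where((theta >= 0) & (theta <= 1), ll, -np.inf)

    theta = np.full(n_voxels, 0.5)
    lp = log_post(theta)
    samples = np.empty((n_iter, n_voxels))
    for it in range(n_iter):
        prop = theta + rng.normal(0.0, 0.05, n_voxels)      # one proposal per voxel
        lp_prop = log_post(prop)
        accept = np.log(rng.uniform(size=n_voxels)) < lp_prop - lp
        theta = np.where(accept, prop, theta)
        lp = np.where(accept, lp_prop, lp)
        samples[it] = theta

    posterior_mean = samples[n_iter // 2:].mean(axis=0)     # discard burn-in
    ```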

  4. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB memory, and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  5. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image processing and display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The light source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and has automatic light-intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, LD driving and LD temperature control of the near-infrared laser light source, and four-channel image processing and display are discussed. The system can be used for driver assistance, car BLIS, parking assist, and car alarm systems, both day and night.

  6. GOES-R ABI Optics Test

    NASA Image and Video Library

    2016-08-31

    With the lights out, team members perform an optics test on the Advanced Baseline Imager, the primary optical instrument, on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. Carbon dioxide is sprayed on the imager to clean it and test its sensitivity. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  7. GOES-R ABI Optics Test

    NASA Image and Video Library

    2016-08-31

    Team members prepare for an optics test on the Advanced Baseline Imager, the primary optical instrument, on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. Carbon dioxide will be sprayed on the imager to clean it and test its sensitivity. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.

  8. Structural Acoustic UXO Detection and Identification in Marine Environments - Interim Report for SERDP MR-2103 Follow-On

    DTIC Science & Technology

    2015-07-30

    into the image processing algorithm the AUV position data available from the Doppler Velocity Log (DVL) and Inertial Measurement Unit ( IMU ) systems...uncertainty due to unknown sensor z coordinates. We considered both AUV altitude and roll but not pitch which we assumed to have a small effect on the...buried target. Taken together, the images suggest that the block is buried horizontally but rolled along its long axis ~80° such that the exposed large

  9. Improving spinning disk confocal microscopy by preventing pinhole cross-talk for intravital imaging

    PubMed Central

    Shimozawa, Togo; Yamagata, Kazuo; Kondo, Takefumi; Hayashi, Shigeo; Shitamukai, Atsunori; Konno, Daijiro; Matsuzaki, Fumio; Takayama, Jun; Onami, Shuichi; Nakayama, Hiroshi; Kosugi, Yasuhito; Watanabe, Tomonobu M.; Fujita, Katsumasa; Mimori-Kiyosue, Yuko

    2013-01-01

    A recent key requirement in life sciences is the observation of biological processes in their natural in vivo context. However, imaging techniques that allow fast imaging with higher resolution in 3D thick specimens are still limited. Spinning disk confocal microscopy using a Yokogawa Confocal Scanner Unit, which offers high-speed multipoint confocal live imaging, has been found to have wide utility among cell biologists. A conventional Confocal Scanner Unit configuration, however, is not optimized for thick specimens, for which the background noise attributed to “pinhole cross-talk,” which is unintended pinhole transmission of out-of-focus light, limits overall performance in focal discrimination and reduces confocal capability. Here, we improve spinning disk confocal microscopy by eliminating pinhole cross-talk. First, the amount of pinhole cross-talk is reduced by increasing the interpinhole distance. Second, the generation of out-of-focus light is prevented by two-photon excitation that achieves selective-plane illumination. We evaluate the effect of these modifications and test the applicability to the live imaging of green fluorescent protein-expressing model animals. As demonstrated by visualizing the fine details of the 3D cell shape and submicron-size cytoskeletal structures inside animals, these strategies dramatically improve higher-resolution intravital imaging. PMID:23401517

  10. Improving spinning disk confocal microscopy by preventing pinhole cross-talk for intravital imaging.

    PubMed

    Shimozawa, Togo; Yamagata, Kazuo; Kondo, Takefumi; Hayashi, Shigeo; Shitamukai, Atsunori; Konno, Daijiro; Matsuzaki, Fumio; Takayama, Jun; Onami, Shuichi; Nakayama, Hiroshi; Kosugi, Yasuhito; Watanabe, Tomonobu M; Fujita, Katsumasa; Mimori-Kiyosue, Yuko

    2013-02-26

    A recent key requirement in life sciences is the observation of biological processes in their natural in vivo context. However, imaging techniques that allow fast imaging with higher resolution in 3D thick specimens are still limited. Spinning disk confocal microscopy using a Yokogawa Confocal Scanner Unit, which offers high-speed multipoint confocal live imaging, has been found to have wide utility among cell biologists. A conventional Confocal Scanner Unit configuration, however, is not optimized for thick specimens, for which the background noise attributed to "pinhole cross-talk," which is unintended pinhole transmission of out-of-focus light, limits overall performance in focal discrimination and reduces confocal capability. Here, we improve spinning disk confocal microscopy by eliminating pinhole cross-talk. First, the amount of pinhole cross-talk is reduced by increasing the interpinhole distance. Second, the generation of out-of-focus light is prevented by two-photon excitation that achieves selective-plane illumination. We evaluate the effect of these modifications and test the applicability to the live imaging of green fluorescent protein-expressing model animals. As demonstrated by visualizing the fine details of the 3D cell shape and submicron-size cytoskeletal structures inside animals, these strategies dramatically improve higher-resolution intravital imaging.

  11. Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses

    PubMed Central

    Kim, Hyun Seok; Park, Kwang Suk

    2017-01-01

    Most retinal prostheses use a head-fixed camera and a video processing unit. Some studies have proposed various image processing methods to improve visual perception for patients; however, previous studies focused only on using spatial information. The present study proposes a spatiotemporal pixelization method mimicking fixational eye movements to generate stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled with a resolution that was four times higher than the number of pixel arrays. We subsampled this image and generated four different phosphene images. We then evaluated the recognition scores of characters by sequentially presenting phosphene images with varying pixel array sizes (6 × 6, 8 × 8 and 10 × 10) and stimulus frame rates (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz. The method also significantly improved the recognition score for complex characters. This method provides a new way to increase the practical resolution beyond the restricted spatial resolution by distributing the higher-resolution image across successive stimulus frames. PMID:29073735
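
    The subsampling step described above can be sketched as follows; this is only an illustration of the idea (the array size, jitter phases and data are hypothetical), not the authors' implementation.

    ```python
    # Sketch of spatiotemporal pixelization: a high-resolution input (2x the
    # electrode array size in each dimension, i.e. 4x the pixel count) is split
    # into four phase-shifted low-resolution phosphene frames that would be
    # presented sequentially at the stimulus frame rate.
    import numpy as np

    def spatiotemporal_frames(image_hr):
        """image_hr: (2N, 2M) array -> list of four (N, M) subsampled frames."""
        offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]      # fixational-jitter phases
        return [image_hr[dy::2, dx::2] for dy, dx in offsets]

    # Example: a 20x20 input feeding a hypothetical 10x10 electrode array.
    hi_res = np.random.rand(20, 20)
    frames = spatiotemporal_frames(hi_res)
    assert all(f.shape == (10, 10) for f in frames)
    ```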

  12. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
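
    For reference, the core computation being optimized is simply a maximum over the viewing axis. The sketch below computes a MIP slab-by-slab in NumPy; it is not the paper's cache-aware renderer, but the slab loop hints at how traversal order can be arranged to favor memory locality.

    ```python
    # Minimal maximum intensity projection (MIP) along one axis, computed
    # slab-by-slab. Not the paper's optimized renderer; the slab loop only
    # suggests how the traversal can be organized for locality.
    import numpy as np

    def mip_slabwise(volume, axis=0, slab=16):
        """Return the MIP of `volume` along `axis`, processing `slab` slices at a time."""
        vol = np.moveaxis(volume, axis, 0)
        out = np.full(vol.shape[1:], -np.inf, dtype=float)
        for start in range(0, vol.shape[0], slab):
            out = np.maximum(out, vol[start:start + slab].max(axis=0))
        return out

    volume = np.random.rand(128, 256, 256)          # hypothetical angiographic volume
    mip = mip_slabwise(volume, axis=0)
    assert np.allclose(mip, volume.max(axis=0))     # same result as a direct MIP
    ```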

  13. Alterations in affective processing of attack images following September 11, 2001.

    PubMed

    Tso, Ivy F; Chiu, Pearl H; King-Casas, Brooks R; Deldin, Patricia J

    2011-10-01

    The events of September 11, 2001 created unprecedented uncertainty about safety in the United States and created an aftermath with significant psychological impact across the world. This study examined emotional information encoding in 31 healthy individuals whose stress response symptoms ranged from none to a moderate level shortly after the attacks as assessed by the Impact of Event Scale-Revised. Participants viewed attack-related, negative (but attack-irrelevant), and neutral images while their event-related brain potentials (ERPs) were recorded. Attack images elicited enhanced P300 relative to negative and neutral images, and emotional images prompted larger slow waves than neutral images did. Total symptoms were correlated with altered N2, P300, and slow wave responses during valence processing. Specifically, hyperarousal and intrusion symptoms were associated with diminished stimulus discrimination between neutral and unpleasant images; avoidance symptoms were associated with hypervigilance, as suggested by reduced P300 difference between attack and other images and reduced appraisal of attack images as indicated by attenuated slow wave. The findings in this minimally symptomatic sample are compatible with the alterations in cognition in the posttraumatic stress disorder (PTSD) literature and are consistent with a dimensional model of PTSD. Copyright © 2011 International Society for Traumatic Stress Studies.

  14. Fabrication of a wide-field NIR integral field unit for SWIMS using ultra-precision cutting

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yutaro; Yamagata, Yutaka; Morita, Shin-ya; Motohara, Kentaro; Ozaki, Shinobu; Takahashi, Hidenori; Konishi, Masahiro; Kato, Natsuko M.; Kobayakawa, Yutaka; Terao, Yasunori; Ohashi, Hirofumi

    2016-07-01

    We describe an overview of fabrication methods and measurement results for test fabrications of optical surfaces for an integral field unit (IFU) for the Simultaneous color Wide-field Infrared Multi-object Spectrograph (SWIMS), a first-generation instrument for the University of Tokyo Atacama Observatory 6.5-m telescope. The SWIMS-IFU provides the entire near-infrared spectrum from 0.9 to 2.5 μm simultaneously, covering a wider field of view of 17" × 13" compared with current near-infrared IFUs. We investigate an ultra-precision cutting technique to monolithically fabricate optical surfaces of the IFU optics, such as an image slicer. Using a 4- or 5-axis ultra-precision machine, we compare a milling process and a shaper cutting process to find the best way to fabricate image slicers. The measurement results show that the surface roughness almost satisfies our requirement with both methods. Moreover, we also obtain an ideal surface form with the shaper cutting process. This method will be adopted for the other mirror arrays (i.e., the pupil mirror and slit mirror), and such monolithic fabrication will also help us to considerably reduce the alignment procedure for each optical element.

  15. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
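
    The asynchronous double-buffering idea can be illustrated, independently of CUDA, with a producer/consumer pipeline in which transfer of the next chunk of data overlaps computation on the current one. The sketch below is a plain-Python analogue with hypothetical data, not the authors' multi-stream GPU implementation.

    ```python
    # Plain-Python analogue of asynchronous double buffering: a producer thread
    # "transfers" the next chunk of traces while the consumer "migrates" the
    # current one, so I/O and compute overlap. (The real scheme uses CUDA
    # streams and device buffers; this only illustrates the pipeline shape.)
    import threading, queue
    import numpy as np

    def producer(chunks, q):
        for chunk in chunks:                 # stands in for host-to-device copies
            q.put(chunk)
        q.put(None)                          # sentinel: no more data

    def consumer(q, results):
        while True:
            chunk = q.get()
            if chunk is None:
                break
            results.append(np.square(chunk).sum())   # stands in for the migration kernel

    chunks = [np.random.rand(1_000_000) for _ in range(8)]
    q = queue.Queue(maxsize=2)               # two in-flight buffers = double buffering
    results = []
    t = threading.Thread(target=producer, args=(chunks, q))
    t.start()
    consumer(q, results)
    t.join()
    ```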

  16. Electronic photography at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack M.

    1994-01-01

    The field of photography began a metamorphosis several years ago which promises to fundamentally change how images are captured, transmitted, and output. At this time the metamorphosis is still in the early stages, but already new processes, hardware, and software are allowing many individuals and organizations to explore the entry of imaging into the information revolution. Exploration at this time is prerequisite to leading expertise in the future, and a number of branches at LaRC have ventured into electronic and digital imaging. Their progress until recently has been limited by two factors: the lack of an integrated approach and the lack of an electronic photographic capability. The purpose of the research conducted was to address these two items. In some respects, the lack of electronic photographs has prevented application of an integrated imaging approach. Since everything could not be electronic, the tendency was to work with hard copy. Over the summer, the Photographics Section has set up an Electronic Photography Laboratory. This laboratory now has the capability to scan film images, process the images, and output the images in a variety of forms. Future plans also include electronic capture capability. The current forms of image processing available include sharpening, noise reduction, dust removal, tone correction, color balancing, image editing, cropping, electronic separations, and halftoning. Output choices include customer specified electronic file formats which can be output on magnetic or optical disks or over the network, 4400 line photographic quality prints and transparencies to 8.5 by 11 inches, and 8000 line film negatives and transparencies to 4 by 5 inches. The problem of integrated imaging involves a number of branches at LaRC including Visual Imaging, Research Printing and Publishing, Data Visualization and Animation, Advanced Computing, and various research groups. These units must work together to develop common approaches to image processing and archiving. The ultimate goal is to be able to search for images using an on-line database and image catalog. These images could then be retrieved over the network as needed, along with information on the acquisition and processing prior to storage. For this goal to be realized, a number of standard processing protocols must be developed to allow the classification of images into categories. Standard series of processing algorithms can then be applied to each category (although many of these may be adaptive between images). Since the archived image files would be standardized, it should also be possible to develop standard output processing protocols for a number of output devices. If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging. As such, it could serve as a model for other organizations in government and the private sector.

  17. The Decision To Recruit Online: A Descriptive Study.

    ERIC Educational Resources Information Center

    Galanaki, Eleanna

    2002-01-01

    Responses from 34 of 99 United Kingdom information technology companies explored effects of cost effectiveness, response rate and quality, company image, targeting, time and effort, and overload. The effectiveness of online recruiting depends largely on its implementation and the quality of the recruitment process as a whole. (Contains 38…

  18. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging targeted specifically for microsurgical intervention applications was developed and studied. As a part of this work several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform using the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering was developed. Several GPU based algorithms such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled image range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that widely exist in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feed-back control, the tool was shown to be capable of tracking the target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor integrated hand-held tool, which showed an incision error of less than +/-5 microns, compared to an error of >100 microns for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were performed with different view angles to allow accurate monitoring of the micro-manipulation and to let the user clearly monitor the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.
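
    For orientation, the core FD-OCT reconstruction step that such GPU pipelines accelerate is sketched below on the CPU (background subtraction, spectral windowing, FFT to depth, log scaling). The frame dimensions and data are hypothetical, and the real pipeline adds NUFFT resampling, dispersion compensation and volume rendering on top of this.

    ```python
    # Minimal CPU sketch of the core FD-OCT step: subtract the reference
    # background, window the spectrum, and FFT each spectral line into a depth
    # profile (A-scan).
    import numpy as np

    def reconstruct_bscan(spectra, background):
        """spectra: (n_lines, n_samples) raw spectrometer data -> log-scaled B-scan."""
        signal = (spectra - background) * np.hanning(spectra.shape[1])
        ascans = np.fft.fft(signal, axis=1)
        half = ascans[:, : spectra.shape[1] // 2]          # keep one image half
        return 20 * np.log10(np.abs(half) + 1e-12)         # dB scale for display

    # Hypothetical frame: 512 A-lines x 1024 spectral samples.
    raw = np.random.rand(512, 1024)
    bscan_db = reconstruct_bscan(raw, background=raw.mean(axis=0))
    ```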

  19. Bouguer Images of the North American Craton

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bindschadler, D.; Bowring, S.; Eddy, M.; Guinness, E.; Leff, C.

    1985-01-01

    Processing of existing gravity and aeromagnetic data with modern methods is providing new insights into crustal and mantle structures for large parts of the United States and Canada. More than three-quarters of a million ground station readings of gravity are now available for this region. These data offer a wealth of information on crustal and mantle structures when reduced and displayed as Bouguer anomalies, where lateral variations are controlled by the size, shape and densities of underlying materials. Digital image processing techniques were used to generate Bouguer images that display more of the granularity inherent in the data as compared with existing contour maps. A dominant NW-SE linear trend of highs and lows can be seen extending from South Dakota, through Nebraska, and into Missouri. This trend is probably related to features created during an early and perhaps initial episode of crustal assembly by collisional processes. The younger granitic materials are probably a thin cover over an older crust.

  20. Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.

    PubMed

    Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas

    2017-10-01

    We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.

  1. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    Image fusion consolidates data and information from multiple images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images. In addition, the fused image retains the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) the RGB colour input image is split into its R, G, and B channels for each source image; (2) the DCT algorithm is applied to each channel (R, G, and B); (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) each R-channel block of the source images is compared on its variance value, and the block with the maximum variance is selected as the block in the new image; this process is repeated for all channels of the source images; (5) the inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and all channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image in the image fusion process. The proposed approach is evaluated using three measures: the average Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of the proposed technique show good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
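
    A minimal sketch of the five-step fusion rule described above is given below, assuming two registered source images whose dimensions are multiples of 8; it follows the block-variance selection rule only and omits any consistency post-processing or quality evaluation.

    ```python
    # Sketch of the described DCT-variance fusion rule: for each 8x8 block and
    # each colour channel, keep the block (from source A or B) whose DCT
    # coefficients have the larger variance, then invert the DCT.
    import numpy as np
    from scipy.fft import dctn, idctn

    def fuse_channel(a, b, bs=8):
        out = np.empty_like(a, dtype=float)
        for i in range(0, a.shape[0], bs):
            for j in range(0, a.shape[1], bs):
                da = dctn(a[i:i+bs, j:j+bs], norm='ortho')
                db = dctn(b[i:i+bs, j:j+bs], norm='ortho')
                best = da if da.var() > db.var() else db
                out[i:i+bs, j:j+bs] = idctn(best, norm='ortho')
        return out

    def fuse_rgb(img_a, img_b):
        """img_a, img_b: (H, W, 3) arrays with H and W multiples of 8."""
        return np.stack([fuse_channel(img_a[..., c], img_b[..., c]) for c in range(3)], axis=-1)

    a = np.random.rand(64, 64, 3)     # hypothetical multifocus source images
    b = np.random.rand(64, 64, 3)
    fused = fuse_rgb(a, b)
    ```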

  2. GIFTS SM EDU Radiometric and Spectral Calibrations

    NASA Technical Reports Server (NTRS)

    Tian, J.; Reisse, R. a.; Johnson, D. G.; Gazarik, J. J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.

  3. A Design Verification of the Parallel Pipelined Image Processings

    NASA Astrophysics Data System (ADS)

    Wasaki, Katsumi; Harai, Toshiaki

    2008-11-01

    This paper presents a case study of the design and verification of a parallel and pipelined image processing unit based on an extended Petri net, called a Logical Colored Petri Net (LCPN). This net is suitable for Flexible Manufacturing System (FMS) modeling and discussion of structural properties. LCPN is another family of colored place/transition net (CPN) with the addition of the following features: integer value assignment of marks, representation of firing conditions as formulae based on mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.

  4. Crew Earth Observations (CEO) taken during Expedition 9

    NASA Image and Video Library

    2004-06-18

    ISS009-E-12441 (18 June 2004) --- Gebel (or Mount) Edmonstone is featured in this image photographed by an Expedition 9 crewmember on the International Space Station (ISS). Mount Edmonstone is a flat-topped mesa located near the Dakhla Oasis south of Cairo, Egypt. Gebel Edmonstone is a remnant of an eroding scarp that extends for over 200 kilometers (125 miles) east-southeast to west-northwest (visible in the upper left corner of this image). The flat caprock of both the scarp and Mount Edmonstone is chalky limestone underlain by fossil-bearing shale and fine-grained sedimentary rocks. This photograph has been “stretched” to enhance color variations in the various rock and soil units. The color variations reflect differences in composition (or weathering) of the various rock units. The limestone unit capping Gebel Edmonstone and the adjacent scarp ranges from white to gray in color, while the underlying fine-grained sedimentary layers are blue-gray. Hill slope pathways for sediment movement down slope are clearly visible as brown to tan streamers originating from Gebel Edmonstone. Barchan dune fields are also visible in this color-enhanced image, and are distinct due to their mineralogical composition. Evaporite deposits are bright white, while vegetated portions of the Oasis, mostly agricultural fields, are dark blue-black. This additional information obtained from image enhancement can be used for geologic mapping and investigation of surficial processes operating in the region.
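
    A simple example of the kind of “stretch” mentioned in the caption is a per-band linear percentile stretch, sketched below with a synthetic image; this is illustrative only and is not the specific enhancement applied to this photograph.

    ```python
    # Per-band linear percentile stretch: exaggerates subtle colour differences
    # between rock and soil units by mapping the 2nd-98th percentile of each
    # band onto the full display range.
    import numpy as np

    def percentile_stretch(rgb, low=2, high=98):
        """rgb: (H, W, 3) array -> uint8 array with each band stretched independently."""
        out = np.empty(rgb.shape, dtype=np.uint8)
        for c in range(rgb.shape[-1]):
            lo, hi = np.percentile(rgb[..., c], (low, high))
            band = np.clip((rgb[..., c].astype(float) - lo) / (hi - lo + 1e-12), 0, 1)
            out[..., c] = (band * 255).astype(np.uint8)
        return out

    photo = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)  # stand-in image
    stretched = percentile_stretch(photo)
    ```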

  5. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed with the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
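
    The idea of representing a signal by weighted coefficients at control points can be illustrated in one dimension, as in the sketch below, where Gaussian radial basis function weights are obtained by least squares; the paper works with a two-dimensional model and also represents the point spread function this way, which the sketch does not attempt.

    ```python
    # 1D illustration of the "weighted coefficients of control points" idea:
    # represent a signal as a sum of Gaussian radial basis functions centred on
    # control points and solve for the weights by least squares.
    import numpy as np

    x = np.linspace(0, 1, 200)
    signal = np.exp(-((x - 0.3) / 0.05) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.08) ** 2)

    centres = np.linspace(0, 1, 25)          # control points (hypothetical spacing)
    width = 0.04
    design = np.exp(-((x[:, None] - centres[None, :]) / width) ** 2)   # GRBF matrix

    weights, *_ = np.linalg.lstsq(design, signal, rcond=None)
    reconstruction = design @ weights
    print("max reconstruction error:", np.abs(reconstruction - signal).max())
    ```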

  6. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  7. New Processing of Spaceborne Imaging Radar-C (SIR-C) Data

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.

    2017-12-01

    The Spaceborne Imaging Radar-C (SIR-C) was a radar system, which successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency and polarimetric spaceborne radar system, operating in dual frequency (L- and C-band) and with quad-polarization. SIR-C had a variety of different operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarizations and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during the data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and not repairable. All acquired SLC and MLC images were processed with a coarse resolution of 100 m with the goal of generating a quick look. These images are, however, not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full resolution SAR images, and the unprocessed high resolution data cannot be processed any more at the moment. At the Alaska Satellite Facility (ASF) a new processor was developed to process binary SIR-C data to full resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive to full resolution SLCs, MLCs and high resolution geocoded image products. ASF will make these products available to the science community through their existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473

  9. Quantitative Analysis of Venus Radar Backscatter Data in ArcGIS

    NASA Technical Reports Server (NTRS)

    Long, S. M.; Grosfils, E. B.

    2005-01-01

    Ongoing mapping of the Ganiki Planitia (V14) quadrangle of Venus and definition of material units has involved an integrated but qualitative analysis of Magellan radar backscatter images and topography using standard geomorphological mapping techniques. However, such analyses do not take full advantage of the quantitative information contained within the images. Analysis of the backscatter coefficient allows a much more rigorous statistical comparison between mapped units, permitting first-order self-similarity tests of geographically separated materials assigned identical geomorphological labels. Such analyses cannot be performed directly on pixel (DN) values from Magellan backscatter images, because the pixels are scaled to the Muhleman law for radar echoes on Venus and are not corrected for latitudinal variations in incidence angle. Therefore, DN values must be converted, based on pixel latitude, back to their backscatter coefficient values before accurate statistical analysis can occur. Here we present a method for performing the conversions and analysis of Magellan backscatter data using commonly available ArcGIS software and illustrate the advantages of the process for geological mapping.

  10. Imaging of the interaction of low frequency electric fields with biological tissues by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Peña, Adrian F.; Devine, Jack; Doronin, Alexander; Meglinski, Igor

    2014-03-01

    We report the use of conventional Optical Coherence Tomography (OCT) for visualization of the propagation of a low frequency electric field in soft biological tissues ex vivo. To increase the overall quality of the experimental images an adaptive Wiener filtering technique has been employed. Fourier domain correlation has been subsequently applied to enhance the spatial resolution of images of biological tissues influenced by the low frequency electric field. Image processing has been performed on Graphics Processing Units (GPUs) utilizing the Compute Unified Device Architecture (CUDA) framework in the frequency domain. The results show that variation in the voltage and frequency of the applied electric field relates exponentially to the magnitude of its influence on biological tissue. The magnitude of the influence is about twice as large for fresh tissue samples as for non-fresh ones. The obtained results suggest that OCT can be used for observation and quantitative evaluation of electro-kinetic changes in biological tissues under different physiological conditions and functional electrical stimulation, and potentially can be used non-invasively for food quality control.
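
    The two processing steps mentioned above can be sketched on the CPU as follows: adaptive Wiener filtering of an OCT frame followed by a frequency-domain cross-correlation between two frames. The data are synthetic, and this is not the GPU/CUDA implementation used in the study.

    ```python
    # Sketch of the described post-processing: adaptive Wiener denoising of an
    # OCT frame, then frequency-domain cross-correlation between two frames to
    # estimate their relative shift.
    import numpy as np
    from scipy.signal import wiener

    def denoise(frame, window=5):
        return wiener(frame, mysize=window)

    def fft_correlation(frame_a, frame_b):
        """Return the circular cross-correlation of two equally sized frames."""
        fa = np.fft.fft2(frame_a - frame_a.mean())
        fb = np.fft.fft2(frame_b - frame_b.mean())
        return np.real(np.fft.ifft2(fa * np.conj(fb)))

    frame0 = np.random.rand(256, 256)        # hypothetical OCT B-scans
    frame1 = np.roll(frame0, (3, -2), axis=(0, 1)) + 0.05 * np.random.rand(256, 256)
    corr = fft_correlation(denoise(frame0), denoise(frame1))
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    ```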

  11. Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.

    PubMed

    Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu

    2012-01-01

    Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), the NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach integrates the distinguishing features of the symmetry properties of the imaging system and of the GPU architecture, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithm was implemented in both CPU-based and GPU-based code, and the computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus can largely expand the applicability of the dual-head PET system.
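
    The algorithmic core that the GPU kernels accelerate is the OSEM update, sketched below with a small, randomly generated system matrix; the real system involves more than 224 million lines of response and detector symmetries that this toy example does not model.

    ```python
    # Toy ordered-subset expectation maximization (OSEM) update with a small
    # dense system matrix, to show the computation being accelerated.
    import numpy as np

    rng = np.random.default_rng(1)
    n_lors, n_voxels, n_subsets = 1200, 400, 4

    A = rng.random((n_lors, n_voxels)) * (rng.random((n_lors, n_voxels)) < 0.05)
    x_true = rng.random(n_voxels)
    y = rng.poisson(A @ x_true)                     # measured counts per LOR

    x = np.ones(n_voxels)
    subsets = np.array_split(rng.permutation(n_lors), n_subsets)
    for iteration in range(10):
        for s in subsets:
            As, ys = A[s], y[s]
            sens = As.sum(axis=0)                   # subset sensitivity image
            ratio = ys / np.maximum(As @ x, 1e-12)  # measured / forward-projected
            x = x / np.maximum(sens, 1e-12) * (As.T @ ratio)   # OSEM update
    ```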

  12. A Wireless Capsule Endoscope System With Low-Power Controlling and Processing ASIC.

    PubMed

    Xinkai Chen; Xiaoyu Zhang; Linwei Zhang; Xiaowen Li; Nan Qi; Hanjun Jiang; Zhihua Wang

    2009-02-01

    This paper presents the design of a wireless capsule endoscope system. The proposed system is mainly composed of a CMOS image sensor, an RF transceiver and a low-power controlling and processing application specific integrated circuit (ASIC). Several design challenges involving system power reduction, system miniaturization and the wireless wake-up method are resolved by employing an optimized system architecture, integration of an area- and power-efficient image compression module, a power management unit (PMU) and a novel wireless wake-up subsystem with zero standby current in the ASIC design. The ASIC has been fabricated in 0.18-μm CMOS technology with a die area of 3.4 mm × 3.3 mm. The digital baseband can work under a power supply down to 0.95 V with a power dissipation of 1.3 mW. A prototype capsule based on the ASIC and a data recorder has been developed. Test results show that the proposed system architecture with local image compression leads to an average energy reduction of 45% for transmitting an image frame.

  13. Gallbladder Boundary Segmentation from Ultrasound Images Using Active Contour Model

    NASA Astrophysics Data System (ADS)

    Ciecholewski, Marcin

    Extracting the shape of the gallbladder from an ultrasonography (US) image allows superfluous information which is immaterial in the diagnostic process to be eliminated. In this project an active contour model was used to extract the shape of the gallbladder, both for cases free of lesions, and for those showing specific disease units, namely: lithiasis, polyps and changes in the shape of the organ, such as folds or turns of the gallbladder. The approximate shape of the gallbladder was found by applying the motion equation model. The tests conducted have shown that for the 220 US images of the gallbladder, the area error rate (AER) amounted to 18.15%.
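
    A minimal sketch of snake-based boundary extraction, using the active contour implementation in scikit-image on a synthetic bright region standing in for the gallbladder, is shown below; parameter values are illustrative only, and clinical ultrasound images would require denoising and a more careful initialization.

    ```python
    # Sketch of snake-based boundary extraction on a synthetic bright "organ":
    # an initial circular contour relaxes onto the object boundary.
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    # Synthetic image: dark background with a bright elliptical region.
    rr, cc = np.mgrid[0:200, 0:200]
    image = np.exp(-(((rr - 100) / 40.0) ** 2 + ((cc - 100) / 60.0) ** 2))

    # Initial snake: a circle of radius 90 around the image centre (row, col).
    t = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([100 + 90 * np.sin(t), 100 + 90 * np.cos(t)])

    snake = active_contour(gaussian(image, sigma=3, preserve_range=False),
                           init, alpha=0.015, beta=10.0, gamma=0.001)
    ```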

  14. A survey of GPU-based acceleration techniques in MRI reconstructions

    PubMed Central

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou

    2018-01-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly more complicated. However, diagnosis and treatment require very fast computational procedures. Modern competitive platforms of graphics processing units (GPUs) have been used to make high-performance parallel computations available, and attractive to common consumers, for computing massively parallel reconstruction problems at commodity price. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community. PMID:29675361

  15. A survey of GPU-based acceleration techniques in MRI reconstructions.

    PubMed

    Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou; Liang, Dong

    2018-03-01

    Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly more complicated. However, diagnosis and treatment require very fast computational procedures. Modern competitive platforms of graphics processing units (GPUs) have been used to make high-performance parallel computations available, and attractive to common consumers, for computing massively parallel reconstruction problems at commodity price. GPUs have also become more and more important for reconstruction computations, especially as deep learning starts to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, M.J.

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET. 22 figs.

  17. Absolute Position Encoders With Vertical Image Binning

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2005-01-01

    Improved optoelectronic pattern-recognition encoders that measure rotary and linear 1-dimensional positions at conversion rates (numbers of readings per unit time) exceeding 20 kHz have been invented. Heretofore, optoelectronic pattern-recognition absolute-position encoders have been limited to conversion rates of <15 Hz, too low for emerging industrial applications in which conversion rates ranging from 1 kHz to as much as 100 kHz are required. The high conversion rates of the improved encoders are made possible, in part, by use of vertically compressible or binnable (as described below) scale patterns in combination with modified readout sequences of the image sensors [charge-coupled devices (CCDs)] used to read the scale patterns. The modified readout sequences and the processing of the images thus read out are amenable to implementation by use of modern, high-speed, ultra-compact microprocessors and digital signal processors or field-programmable gate arrays. This combination of improvements makes it possible to greatly increase conversion rates through substantial reductions in all three components of conversion time: exposure time, image-readout time, and image-processing time.
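
    The effect of vertical binning on data volume can be sketched very simply: each column of the scale-pattern image is collapsed to a single value, so far less data has to be read out and processed per conversion. The frame size below is hypothetical.

    ```python
    # Sketch of vertical binning: collapse each column of the scale-pattern
    # image to one value, reducing both readout and image-processing time.
    import numpy as np

    frame = np.random.randint(0, 4096, size=(480, 2048))   # hypothetical CCD frame
    profile = frame.sum(axis=0)                             # vertically binned line
    print(frame.size, "pixels reduced to", profile.size, "samples")
    ```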

  18. A discriminative structural similarity measure and its application to video-volume registration for endoscope three-dimensional motion tracking.

    PubMed

    Luo, Xiongbiao; Mori, Kensaku

    2014-06-01

    Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. By applying the proposed similarity measure to endoscope tracking, it was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum of squared differences. Based on clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm. The processing was accelerated to more than 30 frames per second using a graphics processing unit.
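
    Where such a similarity measure plugs into the registration loop can be sketched with the standard structural similarity index from scikit-image, used below as a stand-in for the paper's discriminative variant; the frames are synthetic.

    ```python
    # Stand-in for the registration similarity step: score an endoscopic video
    # frame against candidate volume-rendered images with the standard
    # structural similarity index (SSIM).
    import numpy as np
    from skimage.metrics import structural_similarity

    def best_candidate(video_frame, renderings):
        """Return the index of the rendering most similar to the video frame."""
        scores = [structural_similarity(video_frame, r, data_range=1.0) for r in renderings]
        return int(np.argmax(scores)), scores

    frame = np.random.rand(128, 128)                       # hypothetical grayscale frame
    candidates = [np.random.rand(128, 128) for _ in range(5)]
    idx, scores = best_candidate(frame, candidates)
    ```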

  19. Effect of Picture Archiving and Communication System Image Manipulation on the Agreement of Chest Radiograph Interpretation in the Neonatal Intensive Care Unit.

    PubMed

    Castro, Denise A; Naqvi, Asad Ahmed; Vandenkerkhof, Elizabeth; Flavin, Michael P; Manson, David; Soboleski, Donald

    2016-01-01

    Variability in image interpretation has been attributed to differences in the interpreters' knowledge base, experience level, and access to the clinical scenario. Picture archiving and communication systems (PACS) allow the user to manipulate the images while developing their impression of the radiograph. The aim of this study was to determine the agreement of chest radiograph (CXR) impressions among radiologists and neonatologists and to help determine the effect of image manipulation with PACS on report impression. This prospective cohort study included 60 patients from the Neonatal Intensive Care Unit undergoing CXRs. Three radiologists and three neonatologists reviewed two consecutive frontal CXRs of each patient. Each physician was allowed to manipulate the images as needed to provide a decision of "improved," "unchanged," or "disease progression" lung disease for each patient. Each physician then repeated the process once more; this time, they were not allowed to individually manipulate the images, but an independent radiologist preset the image brightness and contrast to best optimize the CXR appearance. Percent agreement and opposing reporting views were calculated among all six physicians for each of the two methods (allowing and not allowing image manipulation). One hundred percent agreement in image impression between all six observers was only seen in 5% of cases when allowing image manipulation; 100% agreement was seen in 13% of the cases when there was no manipulation of the images. Agreement in CXR interpretation is poor; the ability to manipulate the images on PACS results in a decrease in agreement in the interpretation of these studies. New methods to standardize image appearance and allow improved comparison with previous studies should be sought to improve clinician agreement in interpretation consistency and advance patient care.

  20. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems; researching a real-time image acquisition and display scheme based on a CCD device therefore has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges and problems of the system are analyzed and solutions are put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination, and correlated double sampling (CDS); an AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side software for image acquisition was completed, and real-time display of the images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and that its performance meets the actual project requirements.

  1. An IDEA of What's in the Air

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Automatic Particle Fallout Monitor (APFM) is an automated instrument that assesses real-time particle contamination levels in a facility by directly imaging, sizing, and counting contamination particles. It allows personnel to respond to particle contamination before it becomes a major problem. For NASA, the APFM improves the ability to mitigate, avoid, and explain mission-compromising incidents of contamination occurring during payload processing, launch vehicle ground processing, and potentially, during flight operations. Commercial applications are in semiconductor processing and electronics fabrication, as well as aerospace, aeronautical, and medical industries. The product could also be used to measure the air quality of hotels, apartment complexes, and corporate buildings. IDEA sold and delivered its first four units to the United Space Alliance for the Space Shuttle Program at Kennedy. NASA used the APFM in the Kennedy Space Station Processing Facility to monitor contamination levels during the assembly of International Space Station components.

  2. Nonlinear Real-Time Optical Signal Processing.

    DTIC Science & Technology

    1988-07-01

    Principal Investigator: B. K. Jenkins, Signal and Image Processing Institute, University of Southern California, Mail Code 0272, Los Angeles, California. Funding office address: Bldg. 410, Bolling AFB, Washington, D.C. 20332. [Remainder of the report documentation page not recoverable.] Summary: During the period 1 July 1987 - 30 June 1988, the

  3. An active seismic experiment at Tenerife Island (Canary Island, Spain): Imaging an active volcano edifice

    NASA Astrophysics Data System (ADS)

    Garcia-Yeguas, A.; Ibañez, J. M.; Rietbrock, A.; Tom-Teidevs, G.

    2008-12-01

    An active seismic experiment to study the internal structure of Teide Volcano was carried out on Tenerife, a volcanic island in Spain's Canary Islands. The main objective of the TOM-TEIDEVS experiment is to obtain a 3-dimensional structural image of Teide Volcano using seismic tomography and seismic reflection/refraction imaging techniques. At present, knowledge of the deeper structure of Teide and Tenerife is very limited, with proposed structural models mainly based on sparse geophysical and geological data. This multinational experiment, which involves institutes from Spain, Italy, the United Kingdom, Ireland, and Mexico, will generate a unique high-resolution structural image of the active volcano edifice and will further our understanding of volcanic processes.

  4. Imaging an Active Volcano Edifice at Tenerife Island, Spain

    NASA Astrophysics Data System (ADS)

    Ibáñez, Jesús M.; Rietbrock, Andreas; García-Yeguas, Araceli

    2008-08-01

    An active seismic experiment to study the internal structure of Teide volcano is being carried out on Tenerife, a volcanic island in Spain's Canary Islands archipelago. The main objective of the Tomography at Teide Volcano Spain (TOM-TEIDEVS) experiment, begun in January 2007, is to obtain a three-dimensional (3-D) structural image of Teide volcano using seismic tomography and seismic reflection/refraction imaging techniques. At present, knowledge of the deeper structure of Teide and Tenerife is very limited, with proposed structural models based mainly on sparse geophysical and geological data. The multinational experiment, involving institutes from Spain, the United Kingdom, Italy, Ireland, and Mexico, will generate a unique high-resolution structural image of the active volcano edifice and will further our understanding of volcanic processes.

  5. Color image enhancement of medical images using alpha-rooting and zonal alpha-rooting methods on 2D QDFT

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.

    2017-03-01

    The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the color images are considered in quaternion space. Quaternion numbers are four-dimensional hyper-complex numbers. The quaternion representation of a color image allows us to treat the color of the image as a single unit. In the quaternion approach to color image enhancement, each color is seen as a vector, which makes visible the merging effect produced by the combination of the primary colors. Color images are conventionally processed by applying the respective algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is taken before applying the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into different zones and alpha-rooting is applied with a different alpha value for each zone. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
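
    The sketch below illustrates plain alpha-rooting on a single channel with an ordinary 2-D FFT; it is a simplified stand-in for the paper's quaternion 2-D QDFT formulation (which treats the color image as one unit), and the alpha value and usage snippet are illustrative assumptions.

    ```python
    import numpy as np

    def alpha_rooting(channel, alpha=0.92):
        """Alpha-rooting enhancement of a single image channel.

        Each Fourier coefficient F(u, v) is replaced by |F(u, v)|**(alpha - 1) * F(u, v),
        i.e. the magnitude is raised to the power alpha while the phase is kept.
        With 0 < alpha < 1, high-frequency content is boosted relative to the
        strong low-frequency components, which sharpens the image.
        """
        F = np.fft.fft2(channel.astype(float))
        mag = np.abs(F)
        mag[mag == 0] = 1.0                      # avoid division by zero
        F_enh = (mag ** (alpha - 1.0)) * F       # magnitude**alpha, phase unchanged
        out = np.real(np.fft.ifft2(F_enh))
        # rescale to [0, 1] for display
        return (out - out.min()) / (np.ptp(out) + 1e-12)

    # Hypothetical per-channel usage on an RGB image `img` (H x W x 3):
    # enhanced = np.dstack([alpha_rooting(img[..., c]) for c in range(3)])
    ```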

  6. From high spatial resolution imagery to spatial indicators : Application for hydromorphy follow-up on Bourgneuf wetland

    NASA Astrophysics Data System (ADS)

    Bailly, J. S.; Puech, C.; Lukac, F.; Massé, J.

    2003-04-01

    On Atlantic coastal wetlands, the understanding of hydrological processes may rely on the characterization of hydraulic surface structures such as small ditch or channel networks and permanent and temporary water bodies. To improve this understanding, the characterization should be carried out for different seasons and at different spatial scales: the elementary parcel, the management unit, and the whole wetland. In addition to the usual observations at a few local ground points, high-spatial-resolution remote sensing can be a good information source for extracting and characterizing elementary objects, especially permanent or temporary water bodies and ditches. To carry out a follow-up on wetlands, a seasonal image acquisition rate, reachable with most satellite systems, is informative for hydrological needs. In this work, georeferencing methods for open-field wetlands have been handled with care in order to use diachronic images or combined geographical data; the lack of relief, short vegetation, and well-structured landscape make this preprocessing easier than in other landscape situations. In this presentation we focus on spatial hydromorphy parameters constructed from images with specific processing. In particular, hydromorphy indicators for parcels or management units have been developed using a set of IRC winter-spring-summer images at metric resolution: these descriptors are based on the evolution of water areas or on the presence of hydrophilic vegetation, reflecting the hydrodynamic submersion behaviour of temporary water bodies. Another example presents a surface-water network circulation indicator elaborated from IRC aerial photography combined with a vectorized geographic database. This indicator is based on ditch width and vegetation presence: a specific process uses the vectorized geodata set to define transects across ditches, on which classified image analysis (supervised classification) is carried out. These first results proposing hydromorphy descriptors from very high resolution imagery do not give complete indicators for the follow-up and monitoring of coastal wetlands, but their combination and aggregation should provide a good technical basis for carrying this out successfully.

  7. Accelerated speckle imaging with the ATST visible broadband imager

    NASA Astrophysics Data System (ADS)

    Wöger, Friedrich; Ferayorni, Andrew

    2012-09-01

    The Advanced Technology Solar Telescope (ATST), a 4 meter class telescope for observations of the solar atmosphere currently in its construction phase, will generate data at rates of the order of 10 TB/day with its state-of-the-art instrumentation. The high-priority ATST Visible Broadband Imager (VBI) instrument alone will create two data streams with a bandwidth of 960 MB/s each. Because of the related data handling issues, these data will be post-processed with speckle interferometry algorithms in near-real time at the telescope using the cost-effective Graphics Processing Unit (GPU) technology that is supported by the ATST Data Handling System. In this contribution, we lay out the VBI-specific approach to its image processing pipeline, put this into the context of the underlying ATST Data Handling System infrastructure, and finally describe how the algorithms were redesigned to exploit data parallelism in the speckle image reconstruction algorithms. An algorithm redesign is often required to efficiently speed up an application using GPU technology; we have chosen NVIDIA's CUDA language as the basis for our implementation. We present preliminary results on the algorithm performance obtained with our test facilities, and use these results to make a conservative estimate of the requirements of a full system that could achieve near-real-time performance at the ATST.
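
    As a rough illustration of the data-parallel core of speckle interferometry (not the VBI pipeline itself), the sketch below averages the power spectra of a burst of short-exposure frames; on a GPU each frame's FFT would map to a CUDA kernel or a CuPy call, which is the kind of parallelism the redesigned algorithms exploit. All array sizes and data here are hypothetical.

    ```python
    import numpy as np

    def mean_power_spectrum(burst):
        """Average power spectrum of a burst of short-exposure frames.

        `burst` is an (N, H, W) stack of co-aligned frames.  In classical speckle
        interferometry the mean power spectrum <|F_i|^2> preserves information at
        spatial frequencies up to the diffraction limit, unlike the long-exposure
        average.  Each frame's FFT is independent, so the computation parallelizes
        trivially across GPU threads.
        """
        burst = np.asarray(burst, dtype=float)
        spectra = np.abs(np.fft.fft2(burst, axes=(-2, -1))) ** 2
        return spectra.mean(axis=0)

    # Hypothetical usage: 100 frames of 64 x 64-pixel isoplanatic patches.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(100, 64, 64))
    ps = mean_power_spectrum(frames)
    ```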

  8. Design of low noise imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

    In order to meet the needs of engineering applications for a low-noise imaging system operating in global shutter mode, a complete imaging system is designed based on the SCMOS (scientific CMOS) image sensor CIS2521F. The paper introduces the hardware circuit and software system design. Based on an analysis of the key specifications and technologies of the imaging system, chips are selected and an SCMOS + FPGA + DDRII + Camera Link processing architecture is adopted. The entire system workflow and the design of the power supply and distribution unit are then introduced. The software system, which consists of the SCMOS control module, image acquisition module, data cache control module, and transmission control module, is designed in Verilog and runs on a Xilinx FPGA. Imaging experiments show that the system provides a resolution of 2560 x 2160 pixels at a maximum frame rate of 50 fps. The imaging quality of the system satisfies the design requirements.

  9. Potential of Near-Infrared Chemical Imaging as Process Analytical Technology Tool for Continuous Freeze-Drying.

    PubMed

    Brouckaert, Davinia; De Meyer, Laurens; Vanbillemont, Brecht; Van Bockstal, Pieter-Jan; Lammens, Joris; Mortier, Séverine; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas

    2018-04-03

    Near-infrared chemical imaging (NIR-CI) is an emerging tool for process monitoring because it combines the chemical selectivity of vibrational spectroscopy with spatial information. Whereas traditional near-infrared spectroscopy is an attractive technique for water content determination and solid-state investigation of lyophilized products, chemical imaging opens up possibilities for assessing the homogeneity of these critical quality attributes (CQAs) throughout the entire product. In this contribution, we aim to evaluate NIR-CI as a process analytical technology (PAT) tool for at-line inspection of continuously freeze-dried pharmaceutical unit doses based on spin freezing. First, the chemical images of freeze-dried mannitol samples were resolved via multivariate curve resolution, allowing us to visualize the distribution of mannitol solid forms throughout the entire cake. Second, a mannitol-sucrose formulation was lyophilized with variable drying times to induce changes in water content. Analyzing the corresponding chemical images via principal component analysis, vial-to-vial variations as well as within-vial inhomogeneity in water content could be detected. Furthermore, a partial least-squares regression model was constructed for quantifying the water content in each pixel of the chemical images. It was hence concluded that NIR-CI is a most promising PAT tool for continuous monitoring of freeze-dried samples. Although some practicalities are still to be solved, this analytical technique could be applied in-line for CQA evaluation and for detecting the drying end point.
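
    A hedged sketch of the kind of pixel-wise PLS regression described above, using scikit-learn; the calibration spectra, reference water contents, and hypercube dimensions are made-up placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical calibration data: one NIR spectrum per calibration sample
    # (n_samples x n_wavelengths) and a reference water content per sample.
    rng = np.random.default_rng(1)
    X_cal = rng.normal(size=(30, 200))          # calibration spectra
    y_cal = rng.uniform(0.5, 5.0, size=30)      # water content (% w/w)

    pls = PLSRegression(n_components=5)
    pls.fit(X_cal, y_cal)

    # A chemical image is a hypercube (rows x cols x wavelengths); predicting the
    # water content of every pixel gives a spatial moisture map of the cake.
    cube = rng.normal(size=(64, 64, 200))
    pixels = cube.reshape(-1, cube.shape[-1])
    water_map = pls.predict(pixels).reshape(64, 64)
    ```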

  10. Portable sequential multicolor thermal imager based on a MCT 384 x 288 focal plane array

    NASA Astrophysics Data System (ADS)

    Breiter, Rainer; Cabanski, Wolfgang A.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann

    2001-10-01

    AIM has developed a sequential multicolor thermal imager to provide customers with a test system for real-time, spectrally selective thermal imaging. In contrast to existing PC-based laboratory units, the system is miniaturized, with integrated signal processing such as non-uniformity correction and post-processing functions such as image subtraction of different colors, to allow field tests in military applications like the detection of missile plumes or camouflaged targets as well as commercial applications like the detection of chemical agents, pollution control, etc. The detection module used is a 384 x 288 mercury cadmium telluride (MCT) focal plane array (FPA) available in the mid-wave (MWIR) or long-wave (LWIR) spectral band. A compact command and control electronics (CCE) unit provides the clock and voltage supply for the detector as well as 14-bit digital conversion of the analog detector output. A continuously rotating wheel with four filter facets provides spectral selectivity. The customer can choose between various filter characteristics, e.g. a 4.2 micrometer bandpass filter for CO2 detection in the MWIR band. The rotating wheel can be synchronized to an external source setting the rotation speed, typically 25 rotations per second. A position sensor generates the four frame-start signals for synchronous operation of the detector, giving a 100 Hz frame rate for the four frames per rotation. The rotating wheel is exchangeable for different configurations, and plates for microscanner operation, which improves geometrical resolution, are available instead of multicolor operation. AIM's programmable MVIP image processing unit is used for signal processing such as non-uniformity correction and for controlling the detector parameters. The MVIP can output the four subsequent images as four quadrants of the video screen so that, prior to any observation task, the integration time can be set individually for each color to obtain comparable performance in each spectral band, and separate NUC coefficients can then be determined for each filter position. This procedure allows a real evaluation of the payoff of spectral selectivity in the IR. The display part of the MVIP provides linear look-up tables (LUTs) for dynamic-range reduction as well as histogram equalization for automatic LUT optimization. In parallel to the video output, a digital interface is provided for recording the 14-bit corrected detector data. The architecture of the thermal imager and its components is presented in this paper together with some aspects of multicolor thermal imaging.
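
    For illustration only, the sketch below shows the standard two-point (gain/offset) non-uniformity correction that such an image processing unit typically performs per filter position; it is not AIM's MVIP implementation, and the reference names and temperatures are assumptions.

    ```python
    import numpy as np

    def two_point_nuc(raw, cold_ref, hot_ref, t_cold, t_hot):
        """Standard two-point non-uniformity correction (NUC).

        `cold_ref` and `hot_ref` are per-pixel mean responses to two uniform
        blackbody references at temperatures (or radiances) t_cold and t_hot.
        A per-pixel gain and offset map the raw response onto a common scale,
        removing the fixed-pattern noise of the focal plane array.  Dead pixels
        with hot_ref == cold_ref would need separate replacement logic.
        """
        gain = (t_hot - t_cold) / (hot_ref - cold_ref)
        offset = t_cold - gain * cold_ref
        return gain * raw + offset

    # Hypothetical usage with 20 C and 40 C reference frames:
    # corrected = two_point_nuc(frame, ref20, ref40, 20.0, 40.0)
    ```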

  11. Towards real-time image deconvolution: application to confocal and STED microscopy

    PubMed Central

    Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.

    2013-01-01

    Although deconvolution can improve the quality of images from any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve speed-up factors from about 25 to 690 with respect to the conventional algorithm. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Further, the real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction resolution recordings. PMID:23982127
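
    As a point of reference for the conventional algorithms that SGP-type methods accelerate, here is a compact Richardson-Lucy iteration, one of the most used deconvolution algorithms in microscopy; this is not the authors' SGP or GPU code, and the iteration count is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
        """Plain Richardson-Lucy deconvolution (maximum likelihood for Poisson noise).

        This is the conventional iteration that accelerated methods build on; on a
        GPU the two convolutions per iteration are the natural candidates for
        parallelization.
        """
        estimate = np.full_like(image, image.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / (blurred + eps)              # data / model
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate
    ```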

  12. Optimization of the segmented method for optical compression and multiplexing system

    NASA Astrophysics Data System (ADS)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever-increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high-resolution, real-time motion pictures, performing compression electronically requires complex and time-consuming processing units. By its inherently two-dimensional character, on the other hand, coherent optics is well suited to such processes, which are basically two-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing thanks to recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms. The segmented filtering used to store multiple references within the space-bandwidth product of an optical filter can be applied to networks to compress and multiplex images within a given channel bandwidth.
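
    A purely numerical toy of the segmented-spectrum idea, assuming each image is assigned a disjoint band of a shared frequency plane; the real system performs this optically in the filter plane of a correlator, so this sketch only illustrates the principle (and is lossy).

    ```python
    import numpy as np

    def multiplex(images):
        """Assign each image a disjoint horizontal band of a shared spectrum.

        The combined spectrum fits in a single channel; each image can later be
        recovered (approximately) by masking its band and inverse transforming.
        """
        n = len(images)
        h, w = images[0].shape
        spectrum = np.zeros((h, w), dtype=complex)
        masks = []
        for i, img in enumerate(images):
            f = np.fft.fftshift(np.fft.fft2(img))
            mask = np.zeros((h, w), dtype=bool)
            mask[i * h // n:(i + 1) * h // n, :] = True
            spectrum[mask] = f[mask]
            masks.append(mask)
        return spectrum, masks

    def demultiplex(spectrum, mask):
        """Recover one image from its spectral band (lossy reconstruction)."""
        return np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, spectrum, 0))))
    ```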

  13. Three dimensional single molecule localization using a phase retrieved pupil function

    PubMed Central

    Liu, Sheng; Kromann, Emil B.; Krueger, Wesley D.; Bewersdorf, Joerg; Lidke, Keith A.

    2013-01-01

    Localization-based superresolution imaging is dependent on finding the positions of individual fluorophores in a sample by fitting the observed single-molecule intensity pattern to the microscope point spread function (PSF). For three-dimensional imaging, system-specific aberrations of the optical system can lead to inaccurate localizations when the PSF model does not account for these aberrations. Here we describe the use of phase-retrieved pupil functions to generate a more accurate PSF and therefore more accurate 3D localizations. The complex-valued pupil function contains information about the system-specific aberrations and can thus be used to generate the PSF for arbitrary defocus. Further, it can be modified to include depth dependent aberrations. We describe the phase retrieval process, the method for including depth dependent aberrations, and a fast fitting algorithm using graphics processing units. The superior localization accuracy of the pupil function generated PSF is demonstrated with dual focal plane 3D superresolution imaging of biological structures. PMID:24514501
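
    A minimal sketch, assuming a scalar diffraction model and illustrative optical parameters, of how a PSF slice at arbitrary defocus can be generated from a complex pupil function; it is not the authors' calibrated model or GPU fitting code.

    ```python
    import numpy as np

    def psf_from_pupil(pupil, defocus_um=0.0, wavelength_um=0.67, na=1.4, n_medium=1.515):
        """Generate an intensity PSF slice at a given defocus from a complex pupil.

        The pupil (amplitude and phase, e.g. from phase retrieval) is multiplied
        by a defocus phase term and Fourier transformed; the squared modulus is
        the intensity PSF at that plane.  All optical values here are illustrative.
        """
        npix = pupil.shape[0]
        fx = np.fft.fftshift(np.fft.fftfreq(npix))           # normalized pupil coords
        kx, ky = np.meshgrid(fx, fx)
        rho = np.clip(np.hypot(kx, ky) / np.abs(fx).max(), 0.0, 1.0)
        # axial wave-vector component for the defocus phase (scalar model)
        sin_theta = np.clip(rho * na / n_medium, 0.0, 1.0)
        kz = (2 * np.pi * n_medium / wavelength_um) * np.sqrt(1.0 - sin_theta ** 2)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil * np.exp(1j * kz * defocus_um))))
        psf = np.abs(field) ** 2
        return psf / psf.sum()

    # Illustrative unaberrated circular pupil:
    N = 128
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    pupil0 = (x ** 2 + y ** 2 <= 1.0).astype(complex)
    psf_in_focus = psf_from_pupil(pupil0, defocus_um=0.0)
    ```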

  14. Division in a Binary Representation for Complex Numbers

    ERIC Educational Resources Information Center

    Blest, David C.; Jamil, Tariq

    2003-01-01

    Computer operations involving complex numbers, essential in such applications as Fourier transforms or image processing, are normally performed in a "divide-and-conquer" approach dealing separately with real and imaginary parts. A number of proposals have treated complex numbers as a single unit but all have foundered on the problem of the…

  15. Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.

    The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. With the help of message-based communication, the control processes cooperate with related processes to perform an astronomical observation under the supervisory control of a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.

  16. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems caused by drifting image acquisition conditions, background noise, and high variation in colony features across experiments demand a user-friendly, adaptive, and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic, and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for the supervised image segmentation method. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
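
    AutoCellSeg itself is MATLAB-based; as a language-neutral illustration of the threshold-plus-watershed idea it builds on, here is a minimal scikit-image sketch (parameter values and the preprocessing assumed for `gray` are arbitrary assumptions).

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def count_colonies(gray):
        """Minimal threshold + distance-transform + watershed colony counter.

        `gray` is a background-subtracted grayscale plate image with bright
        colonies.  Otsu thresholding gives a foreground mask; a watershed on the
        distance transform splits touching colonies.
        """
        mask = gray > threshold_otsu(gray)
        distance = ndi.distance_transform_edt(mask)
        # local maxima of the distance map serve as one marker per colony
        coords = peak_local_max(distance, min_distance=5, labels=mask)
        markers = np.zeros(distance.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        labels = watershed(-distance, markers, mask=mask)
        return labels, labels.max()
    ```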

  17. Rock type discrimination techniques using Landsat and Seasat image data

    NASA Technical Reports Server (NTRS)

    Blom, R.; Abrams, M.; Conrad, C.

    1981-01-01

    Results are presented from a sedimentary rock type discrimination project using Seasat radar and Landsat multispectral image data of the San Rafael Swell in eastern Utah, whose goal was to determine the potential contribution of radar image data to Landsat image data for rock type discrimination, particularly when the images are coregistered. The procedure employs several image processing techniques using the Landsat and Seasat data independently, and then both data sets are coregistered. The images are evaluated according to the ease with which contacts can be located and rock units (not just stratigraphically adjacent ones) separated. Results show that of the Landsat images evaluated, the image using a supervised classification scheme is the best for sedimentary rock type discrimination. Of less value, in decreasing order, are color ratio composites, principal components, and the standard color composite. In addition, for rock type discrimination, the black and white Seasat image is less useful than any of the Landsat color images by itself. However, it is found that the incorporation of surface textural measures made from the Seasat image provides a considerable and worthwhile improvement in rock type discrimination.

  18. Geometric Calibration of Full Spherical Panoramic Ricoh-Theta Camera

    NASA Astrophysics Data System (ADS)

    Aghayari, S.; Saadatseresht, M.; Omidalizarandi, M.; Neumann, I.

    2017-05-01

    A novel calibration process is proposed for the RICOH-THETA, a full-view fisheye camera that has numerous applications as a low-cost sensor in disciplines such as photogrammetry, robotics, and machine vision. Ricoh developed this camera, which consists of two lenses and is able to capture the whole surrounding environment in one shot, in 2014. In this research, each lens is calibrated separately and the interior and relative orientation parameters (IOPs and ROPs) of the camera are determined on the basis of a designed calibration network, using the central and side images captured by the two lenses. Accordingly, the designed calibration network is treated as a free distortion grid and applied to the measured control points in image space as correction terms by means of bilinear interpolation. After these corrections, the image coordinates are transformed to the unit sphere, an intermediate space between object space and image space, in the form of spherical coordinates. Afterwards, the IOPs and EOPs of each lens are determined separately through a statistical bundle adjustment procedure based on the collinearity condition equations. Subsequently, the ROPs of the two lenses are computed from both sets of EOPs. Our experiments show that by applying a 3x3 free distortion grid, image measurement residuals diminish from 1.5 to 0.25 degrees on the aforementioned unit sphere.
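
    A simplified sketch of the final mapping step, assuming an ideal equidistant fisheye projection and hypothetical camera parameters; in the paper the measured calibration grid is first applied to the image coordinates by bilinear interpolation before projecting onto the unit sphere.

    ```python
    import numpy as np

    def fisheye_pixel_to_unit_sphere(u, v, cx, cy, focal_px, fov_deg=190.0):
        """Map a (distortion-corrected) fisheye pixel to a direction on the unit sphere.

        An ideal equidistant projection r = f * theta is assumed for illustration;
        the principal point (cx, cy), focal length in pixels, and field of view are
        hypothetical parameters.
        """
        dx, dy = u - cx, v - cy
        r = np.hypot(dx, dy)
        theta = min(r / focal_px, np.radians(fov_deg) / 2.0)   # polar angle from axis
        phi = np.arctan2(dy, dx)                               # azimuth around axis
        # unit-sphere direction with z along the optical axis
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])
    ```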

  19. Fractal dimension of trabecular bone projection texture is related to three-dimensional microarchitecture.

    PubMed

    Pothuaud, L; Benhamou, C L; Porion, P; Lespessailles, E; Harba, R; Levitz, P

    2000-04-01

    The purpose of this work was to understand how fractal dimension of two-dimensional (2D) trabecular bone projection images could be related to three-dimensional (3D) trabecular bone properties such as porosity or connectivity. Two alteration processes were applied to trabecular bone images obtained by magnetic resonance imaging: a trabeculae dilation process and a trabeculae removal process. The trabeculae dilation process was applied from the 3D skeleton graph to the 3D initial structure with constant connectivity. The trabeculae removal process was applied from the initial structure to an altered structure having 99% of porosity, in which both porosity and connectivity were modified during this second process. Gray-level projection images of each of the altered structures were simply obtained by summation of voxels, and fractal dimension (Df) was calculated. Porosity (phi) and connectivity per unit volume (Cv) were calculated from the 3D structure. Significant relationships were found between Df, phi, and Cv. Df values increased when porosity increased (dilation and removal processes) and when connectivity decreased (only removal process). These variations were in accordance with all previous clinical studies, suggesting that fractal evaluation of trabecular bone projection has real meaning in terms of porosity and connectivity of the 3D architecture. Furthermore, there was a statistically significant linear dependence between Df and Cv when phi remained constant. Porosity is directly related to bone mineral density and fractal dimension can be easily evaluated in clinical routine. These two parameters could be associated to evaluate the connectivity of the structure.
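
    The study used a specific texture-analysis estimate of Df on gray-level projections; as a generic illustration of how a fractal dimension can be estimated from a binarized 2-D projection, here is a standard box-counting sketch (the box sizes are arbitrary and the input is assumed non-empty).

    ```python
    import numpy as np

    def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the fractal dimension of a binary 2-D pattern by box counting.

        For each box size s, count the number N(s) of s x s boxes containing at
        least one foreground pixel; the fractal dimension is the slope of
        log N(s) versus log(1/s).
        """
        counts = []
        for s in box_sizes:
            h, w = binary_img.shape
            hh, ww = h - h % s, w - w % s                 # crop to a multiple of s
            blocks = binary_img[:hh, :ww].reshape(hh // s, s, ww // s, s)
            occupied = blocks.any(axis=(1, 3)).sum()
            counts.append(max(occupied, 1))               # guard against log(0)
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope
    ```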

  20. Effects of a proposed quality improvement process in the proportion of the reported ultrasound findings unsupported by stored images.

    PubMed

    Schenone, Mauro; Ziebarth, Sarah; Duncan, Jose; Stokes, Lea; Hernandez, Angela

    2018-02-05

    To investigate the proportion of documented ultrasound findings that were unsupported by stored ultrasound images in the obstetric ultrasound unit, before and after the implementation of a quality improvement process consisting of a checklist and feedback. A quality improvement process was created involving the utilization of a checklist and feedback from physician to sonographer. The feedback was based on the findings of the physician's review of the report and images using a checklist. To assess the impact of this process, two groups were compared. Group 1 consisted of 58 ultrasound reports created prior to initiation of the process. Group 2 included 65 ultrasound reports created after process implementation. Each chart was reviewed by a physician and a sonographer. Findings considered unsupported by stored images by both reviewers were used for analysis, and the proportion of unsupported findings was compared between the two groups. Results are expressed as mean ± standard error. A p value of < .05 was used to determine statistical significance. Univariate analysis of baseline characteristics and potential confounders showed no statistically significant difference between the groups. The mean proportion of unsupported findings in Group 1 was 5.1 ± 0.87, with Group 2 having a significantly lower proportion (2.6 ± 0.62) (p value = .018). Results suggest a significant decrease in the proportion of unsupported findings in ultrasound reports after quality improvement process implementation. Thus, we present a simple yet effective quality improvement process to reduce unsupported ultrasound findings.

  1. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  2. Using Cell-ID 1.4 with R for Microscope-Based Cytometry

    PubMed Central

    Bush, Alan; Chernomoretz, Ariel; Yu, Richard; Gordon, Andrew

    2012-01-01

    This unit describes a method for quantifying various cellular features (e.g., volume, total and subcellular fluorescence localization) from sets of microscope images of individual cells. It includes procedures for tracking cells over time. One purposefully defocused transmission image (sometimes referred to as bright-field or BF) is acquired to segment the image and locate each cell. Fluorescent images (one for each of the color channels to be analyzed) are then acquired by conventional wide-field epifluorescence or confocal microscopy. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007, as updated here) and data analysis by the statistical programming framework R (R-Development-Team, 2008), which we have supplemented with a package of routines for analyzing Cell-ID output. Both Cell-ID and the analysis package are open-source. PMID:23026908

  3. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy.

    PubMed

    Ford, Tim N; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  4. Development of living body information monitoring system

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hidetoshi; Ohbuchi, Yoshifumi; Torigoe, Ippei; Miyagawa, Hidekazu; Murayama, Nobuki; Hayashida, Yuki; Igasaki, Tomohiko

    2010-03-01

    Easy-to-use contact and non-contact monitoring systems of living body information were proposed for preventing Sudden Infant Death Syndrome (SIDS), as an alternative means of monitoring an infant's vital information. In the contact monitoring system, a respiration sensor, ECG electrodes, a thermistor, and an IC signal processor were integrated into a baby's nappy holder. This contact monitoring unit has an RF transmission function, and the obtained data are analyzed in real time by a PC. In the non-contact monitoring system, an infrared thermal camera is used. The surroundings of the infant's mouth and nose are monitored, and the respiration rate is obtained by processing thermal images of the temperature changes caused by expired air. The proposed system and unit for monitoring a sleeping infant's vital information are effective not only for monitoring the infant's condition but also for supporting the caregiver.

  5. Development of position measurement unit for flying inertial fusion energy target

    NASA Astrophysics Data System (ADS)

    Tsuji, R.; Endo, T.; Yoshida, H.; Norimatsu, T.

    2016-03-01

    We report the present status of the development of a position measurement unit (PMU) for a flying inertial fusion energy (IFE) target. The PMU, which uses the Arago spot phenomenon, is designed to have a measurement accuracy better than 1 μm. By employing divergent, pulsed, orthogonal laser beam illumination, we can measure the target position and the time of each illumination pulse. The two-dimensional Arago spot image is compressed into a one-dimensional image by a cylindrical lens for real-time processing. The PMUs are set along the injection path of the flying target. The local positions of the target in each PMU are transferred to the controller and analysed to calculate the target trajectory. Two methods are presented to calculate the arrival time and the arrival position of the target at the reactor centre.

  6. Development of living body information monitoring system

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hidetoshi; Ohbuchi, Yoshifumi; Torigoe, Ippei; Miyagawa, Hidekazu; Murayama, Nobuki; Hayashida, Yuki; Igasaki, Tomohiko

    2009-12-01

    Easy-to-use contact and non-contact monitoring systems of living body information were proposed for preventing Sudden Infant Death Syndrome (SIDS), as an alternative means of monitoring an infant's vital information. In the contact monitoring system, a respiration sensor, ECG electrodes, a thermistor, and an IC signal processor were integrated into a baby's nappy holder. This contact monitoring unit has an RF transmission function, and the obtained data are analyzed in real time by a PC. In the non-contact monitoring system, an infrared thermal camera is used. The surroundings of the infant's mouth and nose are monitored, and the respiration rate is obtained by processing thermal images of the temperature changes caused by expired air. The proposed system and unit for monitoring a sleeping infant's vital information are effective not only for monitoring the infant's condition but also for supporting the caregiver.

  7. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  8. Computer-aided boundary delineation of agricultural lands

    NASA Technical Reports Server (NTRS)

    Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt

    1989-01-01

    The National Agricultural Statistics Service of the United States Department of Agriculture (USDA) presently uses labor-intensive aerial photographic interpretation techniques to divide large geographical areas into manageable-sized units for estimating domestic crop and livestock production. Prototype software, the computer-aided stratification (CAS) system, was developed to automate the procedure, and currently runs on a Sun-based image processing system. With a background display of LANDSAT Thematic Mapper and United States Geological Survey Digital Line Graph data, the operator uses a cursor to delineate agricultural areas, called sampling units, which are assigned to strata of land-use and land-cover types. The resultant stratified sampling units are used as input into subsequent USDA sampling procedures. As a test, three counties in Missouri were chosen for application of the CAS procedures. Subsequent analysis indicates that CAS was five times faster in creating sampling units than the manual techniques were.

  9. Ultra-compact swept-source optical coherence tomography handheld probe with motorized focus adjustment (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    LaRocca, Francesco; Nankivil, Derek; Keller, Brenton; Farsiu, Sina; Izatt, Joseph A.

    2017-02-01

    Handheld optical coherence tomography (OCT) systems facilitate imaging of young children, bedridden subjects, and those with less stable fixation. Smaller and lighter OCT probes allow for more efficient imaging and reduced operator fatigue, which is critical for prolonged use in either the operating room or neonatal intensive care unit. In addition to size and weight, the imaging speed, image quality, field of view, resolution, and focus correction capability are critical parameters that determine the clinical utility of a handheld probe. Here, we describe an ultra-compact swept source (SS) OCT handheld probe weighing only 211 g (half the weight of the next lightest handheld SSOCT probe in the literature) with 20.1 µm lateral resolution, 7 µm axial resolution, 102 dB peak sensitivity, a 27° x 23° field of view, and motorized focus adjustment for refraction correction between -10 to +16 D. A 2D microelectromechanical systems (MEMS) scanner, a converging beam-at-scanner telescope configuration, and an optical design employing 6 different custom optics were used to minimize device size and weight while achieving diffraction limited performance throughout the system's field of view. Custom graphics processing unit (GPU)-accelerated software was used to provide real-time display of OCT B-scans and volumes. Retinal images were acquired from adult volunteers to demonstrate imaging performance.

  10. Multispectral imaging system for contaminant detection

    NASA Technical Reports Server (NTRS)

    Poole, Gavin H. (Inventor)

    2003-01-01

    An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.

  11. Low-frequency Radio Observatory on the Lunar Surface (LROLS)

    NASA Astrophysics Data System (ADS)

    MacDowall, Robert; Network for Exploration and Space Science (NESS)

    2018-06-01

    A radio observatory on the lunar surface will provide the capability to image solar radio bursts and other sources. Radio burst imaging will improve understanding of radio burst mechanisms, particle acceleration, and space weather. Low-frequency observations (less than ~20 MHz) must be made from space, because lower frequencies are blocked by Earth’s ionosphere. Solar radio observations do not mandate an observatory on the farside of the Moon, although such a location would permit study of less intense solar bursts because the Moon occults the terrestrial radio frequency interference. The components of the lunar radio observatory array are: the antenna system consisting of 10 – 100 antennas distributed over a square kilometer or more; the system to transfer the radio signals from the antennas to the central processing unit; electronics to digitize the signals and possibly to calculate correlations; storage for the data until it is down-linked to Earth. Such transmission requires amplification and a high-gain antenna system or possibly laser comm. For observatories on the lunar farside a satellite or other intermediate transfer system is required to direct the signal to Earth. On the ground, the aperture synthesis analysis is completed to display the radio image as a function of time. Other requirements for lunar surface systems include the power supply, utilizing solar arrays with batteries to maintain the system at adequate thermal levels during the lunar night. An alternative would be a radioisotope thermoelectric generator requiring less mass. The individual antennas might be designed with their own solar arrays and electronics to transmit data to the central processing unit, but surviving lunar night would be a challenge. Harnesses for power and data transfer from the central processing unit to the antennas are an alternative, but a harness-based system complicates deployment. The concept of placing the antennas and harnesses on rolls of polyimide and rolling them out may be a solution for solar radio observations, but it probably does not provide a sufficiently-uniform beam for other science targets.

  12. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.

  13. Development of a prototype chest digital tomosynthesis (CDT) R/F system with fast image reconstruction using graphics processing unit (GPU) programming

    NASA Astrophysics Data System (ADS)

    Choi, Sunghoon; Lee, Seungwan; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Seo, Chang-Woo; Kim, Hee-Joung

    2017-03-01

    Digital tomosynthesis offers the advantage of low radiation doses compared to conventional computed tomography (CT) by utilizing a small number of projections (~80) acquired over a limited angular range. It produces 3D volumetric data, although there are artifacts due to incomplete sampling. Based upon these characteristics, we developed a prototype digital tomosynthesis R/F system for applications in chest imaging. Our prototype chest digital tomosynthesis (CDT) R/F system contains an X-ray tube with a high-power R/F pulse generator, a flat-panel detector, an R/F table, electromechanical radiographic subsystems including a precise motor controller, and a reconstruction server. For image reconstruction, users select between analytic and iterative reconstruction methods. Using graphics processing unit (GPU) programming, our reconstructed images of Catphan700 and LUNGMAN phantoms clearly and rapidly depicted the internal structures of the phantoms. Contrast-to-noise ratio (CNR) values of the CTP682 module of Catphan700 were higher in images using a simultaneous algebraic reconstruction technique (SART) than in those using filtered back-projection (FBP) for all materials, by factors of 2.60, 3.78, 5.50, 2.30, 3.70, and 2.52 for air, lung foam, low density polyethylene (LDPE), Delrin® (acetal homopolymer resin), bone 50% (hydroxyapatite), and Teflon, respectively. Total elapsed times for producing a 3D volume were 2.92 s and 86.29 s on average for FBP and SART (20 iterations), respectively. The times required for reconstruction were clinically feasible. Moreover, the total radiation dose from our system (5.68 mGy) was lower than that of a conventional chest CT scan. Consequently, our prototype tomosynthesis R/F system represents an important advance in digital tomosynthesis applications.
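
    As a small worked example of the contrast-to-noise ratio figure of merit quoted above, the sketch below uses one common CNR definition; conventions vary between papers (some pool the noise of both regions), and the ROI masks are hypothetical.

    ```python
    import numpy as np

    def cnr(image, roi_mask, background_mask):
        """Contrast-to-noise ratio of a region of interest against background.

        One common definition: CNR = |mean_ROI - mean_BG| / std_BG.  This is a
        generic figure of merit rather than the exact formula used in the study.
        """
        roi = image[roi_mask]
        bg = image[background_mask]
        return abs(roi.mean() - bg.mean()) / bg.std()

    # Hypothetical usage on a reconstructed slice with boolean masks:
    # print(cnr(slice_img, teflon_mask, water_background_mask))
    ```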

  14. DKIST visible broadband imager data processing pipeline

    NASA Astrophysics Data System (ADS)

    Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

    2014-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.

  15. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  16. Communication in science.

    PubMed

    Deda, H; Yakupoglu, H

    2002-01-01

    Science must have a common language. For centuries, Latin served this role, but progress in computer technology and the internet over the last 20 years has begun to produce a new language for the new century: the computer language. The information areas that need data-language standardization are the following: digital libraries and medical education systems, consumer health informatics, medical education systems, World Wide Web applications, database systems, medical language processing, automatic indexing systems, image processing units, telemedicine, and the New Generation Internet (NGI).

  17. Automated image processing of LANDSAT 2 digital data for watershed runoff prediction

    NASA Technical Reports Server (NTRS)

    Sasso, R. R.; Jensen, J. R.; Estes, J. E.

    1977-01-01

    The U.S. Soil Conservation Service (SCS) model for watershed runoff prediction uses soil and land cover information as its major drivers. Kern County Water Agency is implementing the SCS model to predict runoff for 10,400 sq km of mountainous watershed in Kern County, California. The Remote Sensing Unit, University of California, Santa Barbara, was commissioned by KCWA to conduct a 230 sq km feasibility study in the Lake Isabella, California region to evaluate remote sensing methodologies that could ultimately be extrapolated to the entire 10,400 sq km Kern County watershed. Results indicate that digital image processing of Landsat 2 data will provide the usable land cover information required by KCWA for input to the SCS runoff model.
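
    For reference, the runoff relation at the heart of the SCS model is the standard curve-number equation (US customary units); the sketch below is a textbook form, with the curve number itself assumed to come from tables keyed by hydrologic soil group and the Landsat-derived land cover.

    ```python
    def scs_runoff_inches(rainfall_in, curve_number):
        """Direct runoff from the standard SCS curve-number relation (US units).

        S  = 1000 / CN - 10                (potential maximum retention, inches)
        Ia = 0.2 * S                       (initial abstraction)
        Q  = (P - Ia)**2 / (P - Ia + S)    for P > Ia, else 0
        """
        s = 1000.0 / curve_number - 10.0
        ia = 0.2 * s
        if rainfall_in <= ia:
            return 0.0
        return (rainfall_in - ia) ** 2 / (rainfall_in - ia + s)

    # Example: 3 inches of rain on land with CN = 80 gives 1.25 inches of runoff.
    print(round(scs_runoff_inches(3.0, 80), 2))
    ```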

  18. Counterfeit Electronics Detection Using Image Processing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Asadizanjani, Navid; Tehranipoor, Mark; Forte, Domenic

    2017-01-01

    Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (ICs) infiltrate the global market. There is an ongoing effort in experimental and national labs inside the United States to detect and prevent such counterfeits as quickly and efficiently as possible. However, a piece is still missing: a way to automatically detect counterfeit ICs and properly keep records of them. Here, we introduce a web application that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.

  19. Microfabricated ommatidia using a laser induced self-writing process for high resolution artificial compound eye optical systems.

    PubMed

    Jung, Hyukjin; Jeong, Ki-Hun

    2009-08-17

    A microfabricated compound eye, comparable to a natural compound eye, shows a spherical arrangement of integrated optical units called artificial ommatidia, each consisting of a self-aligned microlens and waveguide. Increasing the waveguide length is imperative to obtain high-resolution images through an artificial compound eye for wide field-of-view imaging as well as fast motion detection. This work presents an effective method for increasing the waveguide length of an artificial ommatidium using a laser-induced self-writing process in a photosensitive polymer resin. The numerical and experimental results show the uniform formation of waveguides and an increase of the waveguide length to over 850 µm. (c) 2009 Optical Society of America

  20. Volumetric Forest Change Detection Through Vhr Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro

    2016-06-01

    Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform called FORSAT (A satellite processing platform for high resolution forest assessment) was developed for the extraction of 3D geometric information from VHR (very high resolution) imagery from satellite optical sensors and for automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting as well as in stereo images and triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach, and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs. The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and discussed.

  1. Planet Four: Terrains - Discovery of araneiforms outside of the South Polar layered deposits

    NASA Astrophysics Data System (ADS)

    Schwamb, Megan E.; Aye, Klaus-Michael; Portyankina, Ganna; Hansen, Candice J.; Allen, Campbell; Allen, Sarah; Calef, Fred J.; Duca, Simone; McMaster, Adam; Miller, Grant R. M.

    2018-07-01

    We present the results of a systematic mapping of seasonally sculpted terrains on the South Polar region of Mars with the Planet Four: Terrains (P4T) online citizen science project. P4T enlists members of the general public to visually identify features in the publicly released Mars Reconnaissance Orbiter Context Camera (CTX) images. In particular, P4T volunteers are asked to identify: (1) araneiforms (including features with a central pit and radiating channels known as 'spiders'); (2) erosional depressions, troughs, mesas, ridges, and quasi-circular pits characteristic of the South Polar Residual Cap (SPRC) which we collectively refer to as 'Swiss cheese terrain', and (3) craters. In this work we present the distributions of our high confidence classic spider araneiforms and Swiss cheese terrain identifications in 90 CTX images covering 11% of the South polar regions at latitudes ≤ -75° N. We find no locations within our high confidence spider sample that also have confident Swiss cheese terrain identifications. Previously spiders were reported as being confined to the South Polar Layered Deposits (SPLD). Our work has provided the first identification of spiders at locations outside of the SPLD, confirmed with high resolution HiRISE (High Resolution Imaging Science Experiment) imaging. We find araneiforms on the Amazonian and Hesperian polar units and the Early Noachian highland units, with 75% of the identified araneiform locations in our high confidence sample residing on the SPLD. With our current coverage, we cannot confirm whether these are the only geologic units conducive to araneiform formation on the Martian South Polar region. Our results are consistent with the current CO2 jet formation scenario with the process exploiting weaknesses in the surface below the seasonal CO2 ice sheet to carve araneiform channels into the regolith over many seasons. These new regions serve as additional probes of the conditions required for channel creation in the CO2 jet process.

  2. Active laser radar (lidar) for measurement of corresponding height and reflectance images

    NASA Astrophysics Data System (ADS)

    Froehlich, Christoph; Mettenleiter, M.; Haertl, F.

    1997-08-01

    For the survey and inspection of environmental objects, non-tactile, robust and precise imaging of height and depth is the basic sensor technology. For visual inspection, surface classification, and documentation purposes, however, additional information concerning the reflectance of measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar, supporting consistent 3D height and 2D reflectance images. The laser radar is an optical-wavelength system, comparable to devices built by ERIM, Odetics, and Perceptron, measuring the range between sensor and target surfaces as well as the reflectance of the target surface, which corresponds to the magnitude of the backscattered laser energy. In contrast to these range sensing devices, the laser radar under consideration is designed for high-speed and precise operation in both indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a laser range measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports on design details of the laser radar for surface inspection tasks. It outlines the performance requirements and introduces the measurement principle. The hardware design is discussed, including the main modules such as the laser head, the high-frequency unit, the laser beam deflection system, and the digital signal processing unit. The signal processing unit consists of dedicated signal processors for real-time sensor data preprocessing as well as a sensor computer for high-level image analysis and feature extraction. The paper focuses on performance data of the system, including noise, drift over time, precision, and accuracy, supported by measurements. It discusses the influence of ambient light, the surface material of the target, and ambient temperature on range accuracy and range precision. Furthermore, experimental results from the inspection of buildings, monuments and industrial environments are presented. The paper concludes by summarizing results achieved in industrial environments and gives a short outlook on future work.

  3. Real-time high-velocity resolution color Doppler OCT

    NASA Astrophysics Data System (ADS)

    Westphal, Volker; Yazdanfar, Siavash; Rollins, Andrew M.; Izatt, Joseph A.

    2001-05-01

    Color Doppler optical coherence tomography (CDOCT), also called optical Doppler tomography, is a noninvasive optical imaging technique which allows micron-scale physiological flow mapping simultaneous with morphological OCT imaging. Current systems for real-time endoscopic optical coherence tomography (EOCT) would be enhanced by the capability to visualize sub-surface blood flow for applications in early cancer diagnosis and the management of bleeding ulcers. Unfortunately, previous implementations of CDOCT have either been too computationally expensive (employing Fourier or Hilbert transform techniques) to allow real-time imaging of flow, or have been restricted to imaging of excessively high flow velocities when used in real time. We have developed a novel Doppler OCT signal-processing strategy capable of imaging physiological flow rates in real time. This strategy employs cross-correlation processing of sequential A-scans in an EOCT image, as opposed to autocorrelation processing as described previously. To measure Doppler shifts in the kHz range using this technique, it was necessary to stabilize the EOCT interferometer center frequency, eliminate parasitic phase noise, and construct a digital cross-correlation unit able to correlate signals of megahertz bandwidth at a fixed lag of up to a few ms. The performance of the color Doppler OCT system was demonstrated in a flow phantom, yielding a minimum detectable flow velocity of ~0.8 mm/s at a data acquisition rate of 8 images/second (with 480 A-scans/image) using a handheld probe. Dynamic flow was imaged, including freehand use of the probe. Flow was also detectable in a phantom in combination with a clinically usable endoscopic probe.
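
    The core idea of measuring flow from sequential A-scans can be illustrated with a phase-based (Kasai-style) estimator: the phase of the lag-1 product of two A-scans acquired one line period apart gives the Doppler frequency and hence the axial velocity. The NumPy sketch below only illustrates that relationship, not the authors' hardware cross-correlation unit; the depth-averaging window and test parameters are assumptions.

        import numpy as np

        def doppler_velocity(a_prev, a_curr, t_line, wavelength, n=1.35):
            """Axial velocity from the phase of the lag-1 product of two
            sequential complex A-scans (Kasai-style estimator)."""
            corr = a_curr * np.conj(a_prev)
            kernel = np.ones(8) / 8.0
            corr = np.convolve(corr, kernel, mode="same")   # average over a depth window
            dphi = np.angle(corr)                           # phase shift per line period [rad]
            f_d = dphi / (2.0 * np.pi * t_line)             # Doppler frequency [Hz]
            return f_d * wavelength / (2.0 * n)             # axial velocity [m/s]

        # synthetic test: a scatterer moving at 1 mm/s, 8 kHz line rate, 1300 nm source
        t, lam, v_true = 1 / 8000.0, 1300e-9, 1e-3
        z = np.arange(512)
        phase_step = 4 * np.pi * 1.35 * v_true * t / lam
        a0 = np.exp(1j * 0.1 * z)
        a1 = a0 * np.exp(1j * phase_step)
        print(doppler_velocity(a0, a1, t, lam)[256])        # ~1e-3 m/s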

  4. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  5. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
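
    One common way to convert an integral-photography frame into a Fourier hologram is to Fourier-transform each elemental image and tile the resulting sub-holograms at the positions of their elemental lenses; the paper's GPU pipeline performs such FFTs with CUFFT. The NumPy sketch below illustrates that tiling idea and one of the two enlargement options (upsampling each elemental image); wavelength and propagation phase factors, the second enlargement method and all GPU specifics are omitted, and the array sizes are toy values.

        import numpy as np

        def ip_to_hologram(elemental_images):
            """Hedged sketch of FFT-based hologram generation from an IP frame:
            Fourier-transform each elemental image and tile the sub-holograms at
            the positions of their elemental lenses (phase factors omitted)."""
            rows, cols, h, w = elemental_images.shape
            hologram = np.zeros((rows * h, cols * w), dtype=complex)
            for r in range(rows):
                for c in range(cols):
                    sub = np.fft.fftshift(np.fft.fft2(elemental_images[r, c]))
                    hologram[r * h:(r + 1) * h, c * w:(c + 1) * w] = sub
            return hologram

        # enlarging the IP frame before conversion: either increase the number of
        # pixels of each elemental image (sketched here) ...
        def enlarge_elemental_pixels(ei):
            return np.repeat(np.repeat(ei, 2, axis=2), 2, axis=3)

        # ... or increase the number of elemental images (not sketched here).
        ei = np.random.default_rng(2).random((30, 40, 64, 96))   # toy elemental-image grid
        holo = ip_to_hologram(enlarge_elemental_pixels(ei))
        print(holo.shape)                                        # (3840, 7680), toy numbers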

  6. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed into the camera body extend the basic image capturing capability of thermal cameras. This happens in order to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers especially from different companies a difficult task (or at least a very time consuming/expensive task - e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR- scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR-camera under test. The same set of thermal test sequences might be presented to every unit under test. For turbulence mitigation tests, this could be e.g. the same turbulence sequence. During system tests, gradual variation of input parameters (e. g. thermal contrast) can be applied. First ideas of test scenes selection and how to assembly an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image forming path is discussed.

  7. A fast image registration approach of neural activities in light-sheet fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie

    2017-03-01

    The ability to image neural activities fast and at single-neuron resolution makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for functional neural connection applications. State-of-the-art LSFM imaging systems can record the neuronal activity of the entire brain of small animals, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements of the animal brain result in inconsistent neuron positions during the recording process, and registering the acquired large-scale images with conventional methods is time consuming. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images, which is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a graphics processing unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. The present image stack was registered to the previous image stack, which was taken as the reference. A fast Fourier transform (FFT) algorithm was used for calculating the shift of the image stack. The calculations for image registration were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that the registration computation can be completed in about 550 ms for a full high-resolution brain image. Our approach also has the potential to be used for other dynamic image registrations in biomedical applications.
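
    The FFT-based shift estimation referred to above is essentially phase correlation: whiten the cross-power spectrum of the reference and the current frame, inverse-transform, and read the translation off the correlation peak. A CPU NumPy sketch of that step (2-D, integer shifts only) is given below; the GPU/CUDA implementation and the mean-stretching preprocessing of the paper are not reproduced.

        import numpy as np

        def phase_correlation_shift(ref, moving):
            """Integer (dy, dx) translation of `moving` relative to `ref`,
            estimated by FFT-based phase correlation."""
            F_ref = np.fft.fft2(ref)
            F_mov = np.fft.fft2(moving)
            cross_power = F_mov * np.conj(F_ref)
            cross_power /= np.abs(cross_power) + 1e-12      # keep phase only (whitening)
            corr = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            shifts = [int(p) for p in peak]
            for ax, size in enumerate(corr.shape):
                if shifts[ax] > size // 2:                  # wrap to signed shifts
                    shifts[ax] -= size
            return tuple(shifts)

        # synthetic check: shift an image by (5, -3) pixels and recover the offset
        rng = np.random.default_rng(1)
        img = rng.random((128, 128))
        moved = np.roll(img, shift=(5, -3), axis=(0, 1))
        print(phase_correlation_shift(img, moved))          # -> (5, -3)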

  8. Intensity Modulated Radiation Treatment of Prostate Cancer Guided by High Field MR Spectroscopic Imaging

    DTIC Science & Technology

    2006-05-01

    [Abstract excerpt garbled during extraction; recoverable fragments:] Histogram analysis of CT10, CT20, and CT30 showed a maximum difference of 250 Hounsfield units (only 0.01% …); a finite-element model with fluid flow is mentioned; factors that cause Hounsfield unit calibration problems do not seem to influence the image registration, but the use of CBCT for dose calculation should…

  9. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks); the vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatically classifying vehicles and recording vehicle number plates with a success rate of around 90 percent over a period of 24 hours.

  10. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
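
    A simplified CPU sketch of the stacking idea, written with OpenCV in Python: FAST corners are detected in the first frame, homologous points in a later frame are found by normalized template matching inside a local search window (in the real system this window is predicted from the IMU attitude, which is omitted here), a homography is estimated with RANSAC, and the resampled frame is accumulated. Patch sizes, thresholds and the commented usage are assumptions, not the IGN camera's actual parameters.

        import cv2
        import numpy as np

        def stack_pair(ref, img, patch=16, search=48, max_pts=200):
            """Register `img` to `ref` (both grayscale uint8) and return it
            warped into the reference geometry for accumulation."""
            fast = cv2.FastFeatureDetector_create(threshold=25)
            kps = sorted(fast.detect(ref, None), key=lambda k: -k.response)[:max_pts]

            src, dst = [], []
            h, w = ref.shape
            for kp in kps:
                x, y = int(kp.pt[0]), int(kp.pt[1])
                if not (search <= x < w - search and search <= y < h - search):
                    continue
                tmpl = ref[y - patch:y + patch, x - patch:x + patch]
                win = img[y - search:y + search, x - search:x + search]
                res = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
                _, score, _, loc = cv2.minMaxLoc(res)
                if score < 0.8:
                    continue                                  # reject weak matches
                src.append([x, y])
                dst.append([x - search + loc[0] + patch, y - search + loc[1] + patch])

            if len(src) < 4:
                raise ValueError("not enough homologous points")
            H, _ = cv2.findHomography(np.float32(dst), np.float32(src), cv2.RANSAC, 3.0)
            return cv2.warpPerspective(img, H, (w, h))

        # accumulate N short-exposure frames into one stacked image (illustrative):
        # frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
        # acc = frames[0].astype(np.float32)
        # for f in frames[1:]:
        #     acc += stack_pair(frames[0], f).astype(np.float32)
        # stacked = (acc / len(frames)).astype(np.uint8)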

  11. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  12. 2dx_automator: implementation of a semiautomatic high-throughput high-resolution cryo-electron crystallography pipeline.

    PubMed

    Scherer, Sebastian; Kowal, Julia; Chami, Mohamed; Dandey, Venkata; Arheit, Marcel; Ringler, Philippe; Stahlberg, Henning

    2014-05-01

    The introduction of direct electron detectors (DED) to cryo-electron microscopy has tremendously increased the signal-to-noise ratio (SNR) and quality of the recorded images. We discuss the optimal use of DEDs for cryo-electron crystallography, introduce a new automatic image processing pipeline, and demonstrate the vast improvement in the resolution achieved by the use of both together, especially for highly tilted samples. The new processing pipeline (now included in the software package 2dx) exploits the high SNR and frame readout frequency of DEDs to automatically correct for beam-induced sample movement, and reliably processes individual crystal images without human interaction as data are being acquired. A new graphical user interface (GUI) condenses all information required for quality assessment in one window, allowing the imaging conditions to be verified and adjusted during the data collection session. With this new pipeline an automatically generated unit cell projection map of each recorded 2D crystal is available less than 5 min after the image was recorded. The entire processing procedure yielded a three-dimensional reconstruction of the 2D-crystallized ion-channel membrane protein MloK1 with a much-improved resolution of 5Å in-plane and 7Å in the z-direction, within 2 days of data acquisition and simultaneous processing. The results obtained are superior to those delivered by conventional photographic film-based methodology of the same sample, and demonstrate the importance of drift-correction. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Image security based on iterative random phase encoding in expanded fractional Fourier transform domains

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Chen, Hang; Blondel, Walter; Shen, Zhenmin; Liu, Shutian

    2018-06-01

    A novel image encryption method is proposed using the expanded fractional Fourier transform, which is implemented with a pair of lenses whose centers are displaced from each other in the plane perpendicular to the optical axis. The encryption system is modeled with Fresnel diffraction and phase modulation for the calculation of information transmission. An iterative process built around the transform unit is utilized for hiding the secret image. The structural parameters of the battery of lenses can be used as additional keys. The performance of the encryption method is analyzed theoretically and numerically. The results show that the security of this algorithm is markedly enhanced by the added keys.
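
    For readers unfamiliar with random-phase encryption, the classic Fourier-domain double random phase encoding (DRPE) scheme below conveys the principle: two random phase masks act as the keys, and only the correct masks recover the image. This is a simplified stand-in for the paper's iterative, expanded fractional Fourier transform method, not an implementation of it.

        import numpy as np

        rng = np.random.default_rng(42)

        def drpe_encrypt(img, phase1, phase2):
            """Classic double random phase encoding: random phase mask in the
            image plane, Fourier transform, second mask in the spectral plane,
            inverse transform."""
            field = img * np.exp(2j * np.pi * phase1)
            spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phase2)
            return np.fft.ifft2(spectrum)

        def drpe_decrypt(cipher, phase1, phase2):
            spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
            field = np.fft.ifft2(spectrum) * np.exp(-2j * np.pi * phase1)
            return np.abs(field)

        img = rng.random((64, 64))                               # stand-in for the secret image
        p1, p2 = rng.random(img.shape), rng.random(img.shape)    # the keys
        cipher = drpe_encrypt(img, p1, p2)
        recovered = drpe_decrypt(cipher, p1, p2)
        print(np.allclose(recovered, img))                       # True: correct keys recover the image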

  14. Observing vegetation phenology through social media.

    PubMed

    Silva, Sam J; Barbieri, Lindsay K; Thomer, Andrea K

    2018-01-01

    The widespread use of social media has created a valuable but underused source of data for the environmental sciences. We demonstrate the potential for images posted to the website Twitter to capture variability in vegetation phenology across United States National Parks. We process a subset of images posted to Twitter within eight U.S. National Parks with the aim of estimating the amount of green vegetation in each image. Analysis of the relative greenness of the images shows statistically significant seasonal cycles across most National Parks at the 95% confidence level, consistent with springtime green-up and fall senescence. Additionally, these social-media-derived greenness indices correlate with monthly mean satellite NDVI (r = 0.62), reinforcing the potential value these data could provide in constraining models and observing regions with limited high quality scientific monitoring.
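
    A relative greenness index of the kind used for uncalibrated photographs can be computed as the mean green chromatic coordinate of each image and then correlated with satellite NDVI. The sketch below illustrates this; the monthly values are placeholders, not the study's data.

        import numpy as np

        def green_chromatic_coordinate(rgb):
            """Relative greenness of an image: mean G / (R + G + B), a common
            phenology index for uncalibrated photographs."""
            rgb = rgb.astype(np.float64)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            return float(np.mean(g / (r + g + b + 1e-9)))

        # monthly mean greenness from social-media images vs. satellite NDVI
        # (values below are placeholders, not the study's data)
        gcc = np.array([0.32, 0.33, 0.36, 0.40, 0.43, 0.44,
                        0.43, 0.42, 0.39, 0.36, 0.33, 0.32])
        ndvi = np.array([0.35, 0.36, 0.42, 0.55, 0.63, 0.67,
                         0.66, 0.63, 0.55, 0.45, 0.38, 0.35])
        r = np.corrcoef(gcc, ndvi)[0, 1]
        print(f"Pearson r = {r:.2f}")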

  15. Lithologic mapping of the Mordor, NT, Australia ultramafic complex by using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)

    USGS Publications Warehouse

    Rowan, L.C.; Mars, J.C.; Simpson, C.J.

    2005-01-01

    Spectral measurements made in the Mordor Pound, NT, Australia study area using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), in the laboratory, and in situ show dominantly Al-OH and ferric-iron VNIR-SWIR absorption features in felsic rock spectra and ferrous-iron and Fe,Mg-OH features in the mafic-ultramafic rock spectra. ASTER ratio images, matched-filter processing, and spectral-angle mapper (SAM) processing were evaluated for mapping the lithologies. Matched-filter processing, in which VNIR + SWIR image spectra were used as references, resulted in 4 felsic classes and 4 mafic-ultramafic classes based on Al-OH or Fe,Mg-OH absorption features and, in some, subtle reflectance differences related to differential weathering and vegetation. These results were similar to those obtained by matched-filter analysis of HyMap data from a previous study, but the units were more clearly demarcated in the HyMap image. ASTER TIR spectral emittance data and laboratory emissivity measurements document a wide wavelength range of Si-O spectral features, which reflect the lithological diversity of the Mordor ultramafic complex and adjacent rocks. SAM processing of the spectral emittance data distinguished 2 classes representing the mafic-ultramafic rocks and 4 classes comprising the quartzose to intermediate composition rocks. Utilization of the complementary attributes of the spectral reflectance and spectral emittance data resulted in discrimination of 4 mafic-ultramafic categories, 3 categories of alluvial-colluvial deposits, and a significantly more completely mapped quartzite unit than could be accomplished by using either data set alone. © 2005 Elsevier Inc. All rights reserved.
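
    The spectral-angle mapper mentioned above assigns each pixel to the reference (endmember) spectrum with which its spectrum subtends the smallest angle. A generic NumPy version is sketched below; the endmember spectra, the angle threshold and the toy cube are illustrative, not the ASTER processing chain used in the study.

        import numpy as np

        def spectral_angle(cube, reference):
            """Angle (radians) between each pixel spectrum in `cube`
            (rows x cols x bands) and a reference spectrum (bands,)."""
            dot = np.einsum("ijk,k->ij", cube, reference)
            norm = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference) + 1e-12
            return np.arccos(np.clip(dot / norm, -1.0, 1.0))

        def sam_classify(cube, endmembers, max_angle=0.10):
            """Assign each pixel to the endmember with the smallest spectral angle;
            pixels whose best angle exceeds `max_angle` stay unclassified (-1)."""
            angles = np.stack([spectral_angle(cube, e) for e in endmembers], axis=0)
            labels = np.argmin(angles, axis=0)
            labels[np.min(angles, axis=0) > max_angle] = -1
            return labels

        # toy example: a 9-band cube built from two endmember spectra
        rng = np.random.default_rng(3)
        em = [np.linspace(0.1, 0.5, 9), np.linspace(0.5, 0.1, 9)]
        cube = rng.choice([0, 1], size=(50, 50))[..., None] * (em[1] - em[0]) + em[0]
        print(np.unique(sam_classify(cube, em)))                 # -> [0 1]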

  16. Sentinel 2: implementation of the means and methods for the CAL/VAL commissioning phase

    NASA Astrophysics Data System (ADS)

    Trémas, Thierry L.; Déchoz, Cécile; Lacherade, Sophie; Nosavan, Julien; Petrucci, Beatrice; Martimort, P.; Isola, Claudia

    2013-10-01

    In partnership with the European Commission and in the frame of the Copernicus program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. Sentinel-2 will offer a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). The first satellite is planned to be launched in late 2014. In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to ensure the cal/val commissioning phase. This paper provides, first, an overview of the Sentinel-2 system and the image products delivered by the ground processing. The paper then presents the ground segment, presently under preparation at CNES, and the various components that compose it: the GPP, in charge of producing the Level 1 files; the "radiometric unit", which processes sensitivity parameters; the "geometric unit", in charge of fitting the images to a reference map; MACCS, which will produce Level 2A files (computing reflectances at the bottom of the atmosphere); and TEC-S2, which will coordinate all the previous software and drive a database in which the incoming Level 0 files and the processed Level 1 files will be gathered.

  17. Playback system designed for X-Band SAR

    NASA Astrophysics Data System (ADS)

    Yuquan, Liu; Changyong, Dou

    2014-03-01

    SAR (Synthetic Aperture Radar) has extensive applications because it is daylight- and weather-independent. In particular, the X-Band SAR strip-map system designed by the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, provides high ground resolution images while also offering a large spatial coverage and a short acquisition time, so it is promising for many applications. When a sudden disaster occurs, emergency response requires radar signal data and images as soon as possible in order to take action quickly to reduce losses and save lives. This paper summarizes a type of X-Band SAR playback processing system designed for disaster response and scientific needs. It describes the SAR data workflow, including the payload data transmission and reception process. The playback processing system performs signal analysis on the original data, providing SAR Level 0 products and quick-look images. A gigabit network ensures efficient radar signal transmission from the recorder to the computation unit, while multi-thread parallel computing and ping-pong buffering ensure computation speed. Together, high-speed data transmission and processing meet the real-time requirement of SAR data playback.

  18. Radiographic trends of dental offices and dental schools.

    PubMed

    Suleiman, O H; Spelic, D C; Conway, B; Hart, J C; Boyce, P R; Antonsen, R G

    1999-07-01

    A survey of private practice facilities in the United States that perform dental radiography was conducted in 1993 and repeated in dental schools in 1995-1996. Both surveys were conducted as part of the Nationwide Evaluation of X-ray Trends, or NEXT, survey program. A representative sample of dental facilities from each participating state was surveyed, and data on patient radiation exposure, radiographic technique, film-image quality, film-processing quality and darkroom fog were collected. The authors found that dental schools use E-speed film more frequently than do private practice facilities. The use of E-speed film and better film processing by dental schools resulted in lower patient radiation exposures without sacrificing image quality. The authors also found that dental school darkrooms had lower ambient fog levels than did those of private practice facilities. The spread of the distributions of radiation exposure, film-processing quality and darkroom fog was greater for the 1993 NEXT survey facilities than for dental schools. Dental schools, in general, had better film quality and lower radiation exposures than did private practice facilities. Facilities need to emphasize better quality processing and the use of E-speed film to reduce patient exposure and improve image quality.

  19. A new scale for the assessment of conjunctival bulbar redness.

    PubMed

    Macchi, Ilaria; Bunya, Vatinee Y; Massaro-Giordano, Mina; Stone, Richard A; Maguire, Maureen G; Zheng, Yuanjie; Chen, Min; Gee, James; Smith, Eli; Daniel, Ebenezer

    2018-06-05

    Current scales for assessment of bulbar conjunctival redness have limitations for evaluating digital images. We developed a scale suited for evaluating digital images and compared it to the Validated Bulbar Redness (VBR) scale. From a digital image database of 4889 color corrected bulbar conjunctival images, we identified 20 images with varied degrees of redness. These images, ten each of nasal and temporal views, constitute the Digital Bulbar Redness (DBR) scale. The chromaticity of these images was assessed with an established image processing algorithm. Using 100 unique, randomly selected images from the database, three trained, non-physician graders applied the DBR scale and printed VBR scale. Agreement was assessed with weighted Kappa statistics (K w ). The DBR scale scores provide linear increments of 10 from 10-100 when redness is measured objectively with an established image processing algorithm. Exact agreement of all graders was 38% and agreement with no more than a difference of ten units between graders was 91%. K w for agreement between any two graders ranged from 0.57 to 0.73 for the DBR scale and from 0.38 to 0.66 for the VBR scale. The DBR scale allowed direct comparison of digital to digital images, could be used in dim lighting, had both temporal and nasal conjunctival reference images, and permitted viewing reference and test images at the same magnification. The novel DBR scale, with its objective linear chromatic steps, demonstrated improved reproducibility, fewer visualization artifacts and improved ease of use over the VBR scale for assessing conjunctival redness. Copyright © 2018. Published by Elsevier Inc.

  20. The Development of a Semiotic of Film.

    ERIC Educational Resources Information Center

    Worth, Sol

    The process of film making can be thought of as beginning with a person's feeling or concern about something. To communicate this feeling to others, the film maker (sender) must develop an organic unit which will provide a vehicle that can embody the feeling/Story-Organism. The film maker selects and orders a series of signs, images, or events…

  1. Magnetic resonance imaging in Mexico

    NASA Astrophysics Data System (ADS)

    Rodriguez, A. O.; Rojas, R.; Barrios, F. A.

    2001-10-01

    MR imaging has experienced important growth worldwide, in particular in the USA and Japan. This imaging technique has also shown an important rise in the number of MR imagers in Mexico. However, the development of MRI has followed a path typical of Latin American countries, which is very different from that of the industrialised countries. Despite the fact that Mexico was one of the very first countries in the world to install and operate MR imagers, it still lacks qualified clinical and technical personnel. Since the first MR scanner started to operate, the number of units has grown at a moderate pace and now sums to approximately 60 systems installed nationwide. Nevertheless, there are no official records of the number of MR units operating, or of the physicians and technicians involved in this imaging modality. The MRI market is dominated by two important companies, General Electric (approximately 51%) and Siemens (approximately 17.5%); the rest is shared by five other companies. According to field intensity, medium-field systems (0.5 Tesla) represent 60%, while a further 35% are 1.0 T or higher. Almost all of these units are in private hospitals and clinics: there are no high-field MR imagers in any public hospital. Because of the political changes in the country, a new public plan for health care is still in process and will be published later this year; this plan will be shaped by the new Congress, the North American Free Trade Agreement (NAFTA) and President Fox. Experience acquired in the past shows that the demand for qualified professionals will grow in the near future. Therefore, systematic training of clinical and technical professionals will be in high demand to meet the needs of this technique. The National University (UNAM) and the Metropolitan University (UAM-Iztapalapa) are collaborating with diverse clinical groups in private facilities to create a systematic training program and carry out research and development in MRI.

  2. Real-Time Digital Bright Field Technology for Rapid Antibiotic Susceptibility Testing.

    PubMed

    Canali, Chiara; Spillum, Erik; Valvik, Martin; Agersnap, Niels; Olesen, Tom

    2018-01-01

    Optical scanning through bacterial samples and image-based analysis may provide a robust method for bacterial identification, fast estimation of growth rates and their modulation due to the presence of antimicrobial agents. Here, we describe an automated digital, time-lapse, bright field imaging system (oCelloScope, BioSense Solutions ApS, Farum, Denmark) for rapid and higher throughput antibiotic susceptibility testing (AST) of up to 96 bacteria-antibiotic combinations at a time. The imaging system consists of a digital camera, an illumination unit and a lens where the optical axis is tilted 6.25° relative to the horizontal plane of the stage. Such tilting grants more freedom of operation at both high and low concentrations of microorganisms. When considering a bacterial suspension in a microwell, the oCelloScope acquires a sequence of 6.25°-tilted images to form an image Z-stack. The stack contains the best-focus image, as well as the adjacent out-of-focus images (which contain progressively more out-of-focus bacteria, the further the distance from the best-focus position). The acquisition process is repeated over time, so that the time-lapse sequence of best-focus images is used to generate a video. The setting of the experiment, image analysis and generation of time-lapse videos can be performed through a dedicated software (UniExplorer, BioSense Solutions ApS). The acquired images can be processed for online and offline quantification of several morphological parameters, microbial growth, and inhibition over time.
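
    Selecting the best-focus frame from such a Z-stack can be done with a simple sharpness score; the sketch below uses the variance of the Laplacian, which is only an illustrative stand-in for whatever criterion the commercial UniExplorer software applies.

        import numpy as np
        from scipy.ndimage import laplace

        def best_focus_index(z_stack):
            """Return the index of the sharpest frame in a Z-stack, scored by
            the variance of the Laplacian (higher = more in focus)."""
            scores = [float(np.var(laplace(img.astype(np.float64)))) for img in z_stack]
            return int(np.argmax(scores)), scores

        # z_stack: list/array of 2-D frames acquired along the tilted optical axis
        # idx, scores = best_focus_index(z_stack)
        # best = z_stack[idx]   # frame used for the time-lapse growth video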

  3. Recalculation of dose for each fraction of treatment on TomoTherapy.

    PubMed

    Thomas, Simon J; Romanchikova, Marina; Harrison, Karl; Parker, Michael A; Bates, Amy M; Scaife, Jessica E; Sutcliffe, Michael P F; Burnet, Neil G

    2016-01-01

    The VoxTox study, linking delivered dose to toxicity, requires recalculation of typically 20-37 fractions per patient for nearly 2000 patients. This requires a non-interactive interface permitting batch calculation with multiple computers. Data are extracted from the TomoTherapy® archive and processed using the computational task-management system GANGA. Doses are calculated for each fraction of radiotherapy using the daily megavoltage (MV) CT images. The calculated dose cube is saved as a Digital Imaging and Communications in Medicine (DICOM) RTDOSE object, which can then be read by utilities that calculate dose-volume histograms or dose surface maps. The rectum is delineated on daily MV images using an implementation of the Chan-Vese algorithm. On a cluster of up to 117 central processing units, dose cubes for all fractions of 151 patients took 12 days to calculate. Outlining the rectum on all slices and fractions of 151 patients took 7 h. We also present results of the Hounsfield unit (HU) calibration of TomoTherapy MV images, measured over an 8-year period, showing that the HU calibration has become less variable over time, with no large changes observed after 2011. We have developed a system for automatic recalculation of TomoTherapy dose distributions. This does not tie up the clinically needed planning system but can be run on a cluster of independent machines, enabling recalculation of delivered dose without user intervention. The use of a task-management system for automation of dose calculation and outlining enables the work to be scaled up to the level required for large studies.
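
    Once a fraction's dose cube and a structure mask are available, the cumulative dose-volume histogram is a simple reduction. The NumPy sketch below shows the idea; the bin width, the synthetic dose cube and the spherical "rectum" are illustrative only and unrelated to the VoxTox pipeline.

        import numpy as np

        def cumulative_dvh(dose_cube, structure_mask, bin_width=0.1):
            """Cumulative dose-volume histogram for one structure.

            dose_cube      : 3-D array of dose values (Gy)
            structure_mask : boolean 3-D array (e.g. the auto-contoured rectum)
            Returns (dose bin edges, % of structure volume receiving >= that dose).
            """
            doses = dose_cube[structure_mask]
            edges = np.arange(0.0, doses.max() + bin_width, bin_width)
            counts, _ = np.histogram(doses, bins=np.append(edges, edges[-1] + bin_width))
            cum = np.cumsum(counts[::-1])[::-1] / doses.size * 100.0
            return edges, cum

        # illustrative use with a synthetic 2 Gy/fraction cube and a spherical "rectum"
        rng = np.random.default_rng(7)
        dose = rng.normal(2.0, 0.3, (60, 60, 60)).clip(min=0)
        zz, yy, xx = np.indices(dose.shape)
        mask = (zz - 30) ** 2 + (yy - 30) ** 2 + (xx - 30) ** 2 < 10 ** 2
        d, v = cumulative_dvh(dose, mask)
        print(f"V(2 Gy) = {v[np.searchsorted(d, 2.0)]:.1f}% of the structure volume")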

  4. Onboard data processing and compression for a four-sensor suite: the SERENA experiment.

    NASA Astrophysics Data System (ADS)

    Mura, A.; Orsini, S.; Di Lellis, A.; Lazzarotto, F.; Barabash, S.; Livi, S.; Torkar, K.; Milillo, A.; De Angelis, E.

    2013-09-01

    SERENA (Search for Exospheric Refilling and Emitted Natural Abundances) is an instrument package that will fly on board the BepiColombo Mercury Planetary Orbiter (MPO). The SERENA instrument includes four units: ELENA (Emitted Low Energy Neutral Atoms), a neutral particle analyzer/imager to detect ion sputtering and backscattering from Mercury's surface; STROFIO (Start from a Rotating FIeld mass spectrometer), a mass spectrometer to identify atomic masses released from the surface; and MIPA (Miniature Ion Precipitation Analyzer) and PICAM (Planetary Ion Camera), two ion spectrometers to monitor the precipitating solar wind and measure the plasma environment around Mercury. The System Control Unit architecture is such that all four sensors are connected to a high-resolution FPGA, which communicates with a dedicated high-performance data processing unit. The unpredictability of the data rate, due to the peculiarities of these investigations, leads to several possible scenarios for data compression and handling. In this study we first discuss the predicted data volume resulting from the optimized operation strategy, and then report on the instrument data processing and compression.

  5. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
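
    To make the voxelized transport idea concrete, the following deliberately simplified CPU sketch samples exponential free paths through a voxel grid and scores absorptions; it ignores scattering, spectra and all PENELOPE physics and has nothing of the CUDA implementation, so it only illustrates the kind of per-photon loop that is parallelized on the GPU.

        import numpy as np

        def transport_photons(mu, n_photons, voxel_size=1.0, seed=0):
            """Toy voxelized photon transport: photons enter along +z at random
            (x, y), travel in straight lines, and are absorbed at a depth sampled
            from the exponential attenuation law using the local voxel mu.
            Returns a map of absorbed-photon counts per voxel."""
            rng = np.random.default_rng(seed)
            nz, ny, nx = mu.shape
            deposited = np.zeros_like(mu)
            for _ in range(n_photons):
                ix, iy = rng.integers(nx), rng.integers(ny)
                z = 0.0
                while True:
                    iz = int(z // voxel_size)
                    if iz >= nz:
                        break                                  # photon exits the volume
                    # sample free path length in the current voxel's material
                    step = -np.log(rng.random()) / mu[iz, iy, ix]
                    boundary = (iz + 1) * voxel_size - z       # distance to next voxel face
                    if step < boundary:
                        deposited[iz, iy, ix] += 1             # interaction (absorption) here
                        break
                    z += boundary                              # no interaction: cross into next voxel
            return deposited

        # water-like slab with a denser insert (toy attenuation coefficients per mm)
        mu = np.full((40, 32, 32), 0.02)
        mu[15:25, 10:22, 10:22] = 0.06
        counts = transport_photons(mu, n_photons=20000, voxel_size=1.0)
        print("absorbed:", int(counts.sum()), "of 20000 photons")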

  6. Signs of Soft-Sediment Deformation at 'Slickrock'

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Geological examination of bedding textures indicates three stratigraphic units in an area called 'Slickrock' located in the martian rock outcrop that NASA's Opportunity examined for several weeks. This is an image Opportunity took from a distance of 2.1 meters (6.9 feet) during the rover's 45th sol on Mars (March 10, 2004) and shows a scour surface or ripple trough lamination. These features are consistent with sedimentation on a moist surface where wind-driven processes may also have occurred.

    [figure removed for brevity, see original site] Figure 1

    In Figure 1, interpretive blue lines indicate boundaries between the units. The upper blue line may coincide with a scour surface. The lower and upper units have features suggestive of ripples or early soft-sediment deformation. The central unit is dominated by fine, parallel stratification, which could have been produced by wind-blown ripples.

    [figure removed for brevity, see original site] Figure 2

    In Figure 2, features labeled with red letters are shown in an enlargement of portions of the image. 'A' is a scour surface characterized by truncation of the underlying fine layers, or laminae. 'B' is a possible soft-sediment buckling characterized by a 'teepee' shaped structure. 'C' shows a possible ripple beneath the arrow and a possible ripple cross-lamination to the left of the arrow, along the surface the arrow tip touches. 'D' is a scour surface or ripple trough lamination. These features are consistent with sedimentation on a moist surface where wind-driven processes may also have occurred.

  7. Geology of the Icy Galilean Satellites: Understanding Crustal Processes and Geologic Histories Through the JIMO Mission

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Tanaka, K.; Senske, D.; Greeley, R.

    2003-01-01

    Knowledge of the geology, style and time history of crustal processes on the icy Galilean satellites is necessary to understanding how these bodies formed and evolved. Data from the Galileo mission have provided a basis for detailed geologic and geophysical analysis. Due to constrained downlink, Galileo Solid State Imaging (SSI) data consisted of global coverage at ~1 km/pixel ground sampling and representative, widely spaced regional maps at ~200 m/pixel. These two data sets provide a general means to extrapolate units identified at higher resolution to lower resolution data. A sampling of key sites at much higher resolution (10s of m/pixel) allows evaluation of processes on local scales. We are currently producing the first global geological map of Europa using Galileo global and regional-scale data. This work is demonstrating the necessity and utility of planet-wide contiguous image coverage at global, regional, and local scales.

  8. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which could benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.

  9. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of the spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can also work together with other cupping correction algorithms or in a calibration manner.
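
    The cost being minimized is the joint entropy of the image and its gradient, searched over polynomial coefficients with a simplex (Nelder-Mead) method. The sketch below illustrates this with SciPy, but as a simplification it applies the polynomial directly to image values rather than to forward-projected raw data, and the phantom and optimizer settings are assumptions.

        import numpy as np
        from scipy.ndimage import sobel
        from scipy.optimize import minimize

        def joint_entropy(image, bins=64):
            """Joint entropy of an image and its gradient magnitude; cupping adds
            low-frequency shading, which spreads the joint histogram and raises
            this entropy."""
            grad = np.hypot(sobel(image, 0), sobel(image, 1))
            hist, _, _ = np.histogram2d(image.ravel(), grad.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def correct_cupping(image, order=4):
            """Search the higher-order polynomial coefficients (the linear term is
            fixed to 1 to avoid the trivial constant solution) so that the joint
            entropy of the corrected image is minimal (simplex search)."""
            basis = [image ** k for k in range(1, order + 1)]   # basis images

            def cost(coeffs):
                corrected = basis[0] + sum(c * b for c, b in zip(coeffs, basis[1:]))
                return joint_entropy(corrected)

            res = minimize(cost, np.zeros(order - 1), method="Nelder-Mead",
                           options={"maxiter": 200, "xatol": 1e-4})
            return basis[0] + sum(c * b for c, b in zip(res.x, basis[1:]))

        # toy usage: a flat disc with simulated cupping shading
        yy, xx = np.indices((128, 128))
        r = np.hypot(yy - 64, xx - 64) / 64.0
        phantom = np.where(r < 0.8, 1.0 - 0.3 * (1 - r ** 2), 0.0)
        corrected = correct_cupping(phantom)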

  10. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction is often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.

  11. The research of knitting needle status monitoring setup

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Liao, Xiao-qing; Zhu, Yong-kang; Yang, Wei; Zhang, Pei; Zhao, Yong-kai; Huang, Hui-jie

    2013-09-01

    In textile production, quality control and testing are key to ensuring the process and improving efficiency. Defects of the knitting needles are the main factor affecting the appearance quality of textiles. Defect detection methods based on machine vision and image processing technology are widely used, but this approach does not effectively identify the defects generated by damaged knitting needles or raise an alarm. We developed a knitting needle status monitoring setup using optical imaging, photoelectric detection and weak signal processing technology to achieve real-time monitoring of the weaving needles' positions. Depending on the shape of the knitting needle, we designed a kind of glass optical fiber (GOF) light guide with a rectangular port used for transmission of the signal light. To capture the signal of the knitting needles accurately, we adopted an optical 4F system, which has good imaging quality and a simple structure and forms a rectangular image on the focal plane behind the system. When a knitting needle passes through the position of the rectangular image, the light reflected from the needle surface travels back to the GOF light guide along the same optical system. According to the intensity of the signals, the computer control unit distinguishes whether the knitting needle is broken or curved. The experimental results show that this system can accurately detect broken needles and curved needles on the knitting machine in operating condition.

  12. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  13. 4D megahertz optical coherence tomography (OCT): imaging and live display beyond 1 gigavoxel/sec (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huber, Robert A.; Draxinger, Wolfgang; Wieser, Wolfgang; Kolb, Jan Philip; Pfeiffer, Tom; Karpf, Sebastian N.; Eibl, Matthias; Klein, Thomas

    2016-03-01

    Over the last 20 years, optical coherence tomography (OCT) has become a valuable diagnostic tool in ophthalmology, with several tens of thousands of devices sold to date. Other applications, like intravascular OCT in cardiology and gastro-intestinal imaging, will follow. OCT provides 3-dimensional image data of biological tissue in vivo with microscopic resolution. In most applications, off-line processing of the acquired OCT data is sufficient. However, for OCT applications like OCT-aided surgical microscopes, functional OCT imaging of tissue after a stimulus, or interactive endoscopy, an OCT engine capable of acquiring, processing and displaying large, high quality 3D OCT data sets at video rate is highly desired. We developed such a prototype OCT engine and demonstrate live OCT with 25 volumes per second at a size of 320x320x320 pixels. The computational load of more than 1.5 TFLOPS was handled by a GTX 690 graphics processing unit with more than 3000 stream processors operating in parallel. In the talk, we will describe the optics and electronics hardware as well as the software of the system in detail and analyze current limitations. The talk also focuses on new OCT applications where such a system improves diagnosis and the monitoring of medical procedures. The additional acquisition of hyperspectral stimulated Raman signals with the system will be discussed.

  14. The CUBLAS and CULA based GPU acceleration of adaptive finite element framework for bioluminescence tomography.

    PubMed

    Zhang, Bo; Yang, Xiang; Yang, Fei; Yang, Xin; Qin, Chenghu; Han, Dong; Ma, Xibo; Liu, Kai; Tian, Jie

    2010-09-13

    In molecular imaging (MI), and especially optical molecular imaging, bioluminescence tomography (BLT) has emerged as an effective imaging modality for small animal imaging. The finite element methods (FEMs), and especially the adaptive finite element (AFE) framework, play an important role in BLT. The processing speed of the FEMs and the AFE framework still needs to be improved, even though multi-thread CPU technology and multi-CPU technology have already been applied. In this paper, we introduce for the first time a new kind of acceleration technology, the graphics processing unit (GPU), to accelerate the AFE framework for BLT. Besides processing speed, GPU technology offers a balance between cost and performance. CUBLAS and CULA are two important and powerful libraries for programming on NVIDIA GPUs. With the help of CUBLAS and CULA, it is easy to code on NVIDIA GPUs and there is no need to worry about the details of the hardware environment of a specific GPU. Numerical experiments are designed to show the necessity, effect and application of the proposed CUBLAS- and CULA-based GPU acceleration. From the results of the experiments, we can conclude that the proposed CUBLAS- and CULA-based GPU acceleration method can greatly improve the processing speed of the AFE framework while maintaining a balance between cost and performance.

  15. Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Li, R.; Zhang, T.; Geng, R.; Wang, L.

    2018-04-01

    In order to classify high spatial resolution images more accurately, in this research, a hierarchical rule-based object-based classification framework was developed based on a high-resolution image with airborne Light Detection and Ranging (LiDAR) data. The eCognition software is employed to conduct the whole process. In detail, firstly, the FBSP optimizer (Fuzzy-based Segmentation Parameter) is used to obtain the optimal scale parameters for different land cover types. Then, using the segmented regions as basic units, the classification rules for various land cover types are established according to the spectral, morphological and texture features extracted from the optical images, and the height feature from LiDAR respectively. Thirdly, the object classification results are evaluated by using the confusion matrix, overall accuracy and Kappa coefficients. As a result, a method using the combination of an aerial image and the airborne Lidar data shows higher accuracy.

  16. Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.

    2015-03-01

    We present a system for registering the coordinate frame of an endoscope to pre- or intra- operatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes account of physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.

  17. Automatic classification of spectral units in the Aristarchus plateau

    NASA Astrophysics Data System (ADS)

    Erard, S.; Le Mouelic, S.; Langevin, Y.

    1999-09-01

    A reduction scheme has recently been proposed for the NIR images of Clementine (Le Mouelic et al., JGR 1999). This reduction has been used to build an integrated UV-Vis-NIR image cube of the Aristarchus region, from which compositional and maturity variations can be studied (Pinet et al., LPSC 1999). We will present an analysis of this image cube, providing a classification into spectral types and spectral units. The image cube is processed with G-mode analysis using three different data sets. Normalized spectra provide a classification based mainly on spectral slope variations (i.e. maturity and volcanic glasses); this analysis discriminates between craters plus ejecta, mare basalts, and DMD, and olivine-rich areas and the Aristarchus central peak are also recognized. Continuum-removed spectra provide a classification more related to compositional variations, which correctly identifies olivine- and pyroxene-rich areas (in Aristarchus, Krieger, Schiaparelli…). A third analysis uses spectral parameters related to maturity and Fe composition (reflectance, 1 μm band depth, and spectral slope) rather than intensities; it provides the most spatially consistent picture, but fails to detect Vallis Schroeteri and the DMDs. A supplementary unit, younger and rich in pyroxene, is found on the south rim of Aristarchus. In conclusion, G-mode analysis can discriminate between the different spectral types already identified with more classic methods (PCA, linear mixing…). No previous assumption is made on the data structure, such as the number and nature of endmembers, or a linear relationship between input variables. The variability of the spectral types is intrinsically accounted for, so that the level of analysis is always restricted to meaningful limits. A complete classification should integrate several analyses based on different sets of parameters. G-mode is therefore a powerful, lightweight tool to perform first-look analysis of spectral imaging data. This research has been partly funded by the French Programme National de Planetologie.

  18. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    PubMed

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with a variety of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in the model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has an appealing statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Since the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of the deep architecture.
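
    The sketch below is a simplified NumPy illustration of the random per-unit choice Shakeout makes at training time: each input unit's contribution to the next layer is either reversed toward -c*sign(W) or enhanced so that the expected contribution equals the plain product. It is a schematic reading of the rule described above, not the authors' implementation; tau and c are hypothetical hyperparameters.

      # Schematic Shakeout-style forward pass (illustrative, not the authors'
      # code): with probability tau a unit's contribution is reversed toward
      # -c*sign(W); otherwise it is enhanced so the expectation stays at x @ W.
      import numpy as np

      def shakeout_forward(x, W, tau=0.3, c=0.1, rng=np.random):
          # x: (batch, n_in); W: (n_in, n_out). Returns pre-activations.
          reverse = rng.random(x.shape) < tau        # one draw per unit and sample
          s = np.sign(W)
          W_keep = (W + c * tau * s) / (1.0 - tau)   # enhanced contribution
          return (x * ~reverse) @ W_keep + (x * reverse) @ (-c * s)

      x = np.random.randn(4, 8)
      W = np.random.randn(8, 5)
      y = shakeout_forward(x, W)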

  19. CMOS image sensor with contour enhancement

    NASA Astrophysics Data System (ADS)

    Meng, Liya; Lai, Xiaofeng; Chen, Kun; Yuan, Xianghui

    2010-10-01

    Imitating the signal acquisition and processing of the vertebrate retina, a CMOS image sensor with a bionic pre-processing circuit is designed. Integrating the signal-processing circuit on-chip can reduce the bandwidth and precision requirements of the subsequent interface circuit, and simplify the design of the computer-vision system. This signal pre-processing circuit consists of an adaptive photoreceptor, a spatial filtering resistive network and an Op-Amp calculation circuit. The adaptive photoreceptor unit, with a dynamic range of approximately 100 dB, adapts well to transient changes in light intensity rather than to the intensity level itself. The spatial low-pass filtering resistive network, used to mimic the function of horizontal cells, is composed of the horizontal resistor (HRES) circuit and an OTA (Operational Transconductance Amplifier) circuit. The HRES circuit, imitating the dendrite of the neuron cell, comprises two series MOS transistors operated in the weak inversion region. Appending two diode-connected n-channel transistors to a simple transconductance amplifier forms the OTA Op-Amp circuit, which provides a stable bias voltage for the gates of the MOS transistors in the HRES circuit while serving as an OTA voltage follower to provide the input voltage for the network nodes. The Op-Amp calculation circuit, with a simple two-stage Op-Amp, achieves the image contour enhancement. By adjusting the bias voltage of the resistive network, the smoothing effect can be tuned to change the degree of contour enhancement. Simulations of the cell circuit and a 16×16 2D circuit array are implemented using the CSMC 0.5μm DPTM CMOS process.
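
    A software analogue of this contour enhancement is sketched below: the Gaussian smoothing stands in for the resistive network (horizontal cells), and adding back the difference between the photoreceptor signal and its smoothed version enhances contours, with the smoothing strength playing the role of the network bias voltage. Parameters are illustrative.

      # Software analogue of the retina-like contour enhancement: smooth the
      # image (resistive network / horizontal cells) and add the difference
      # back to the input. 'sigma' and 'gain' are illustrative knobs.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def contour_enhance(image, sigma=2.0, gain=1.5):
          smoothed = gaussian_filter(image.astype(float), sigma=sigma)
          detail = image - smoothed          # edge / contour component
          return image + gain * detail

      img = np.random.rand(64, 64)           # placeholder image
      enhanced = contour_enhance(img)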

  20. VerifEYE: a real-time meat inspection system for the beef processing industry

    NASA Astrophysics Data System (ADS)

    Kocak, Donna M.; Caimi, Frank M.; Flick, Rick L.; Elharti, Abdelmoula

    2003-02-01

    Described is a real-time meat inspection system developed for the beef processing industry by eMerge Interactive. Designed to detect and localize trace amounts of contamination on cattle carcasses in the packing process, the system affords the beef industry an accurate, high-speed, passive optical method of inspection. Using a method patented by the United States Department of Agriculture and Iowa State University, the system takes advantage of fluorescing chlorophyll found in the animal's diet, and therefore the digestive tract, to allow detection and imaging of contaminated areas that may harbor potentially dangerous microbial pathogens. Featuring real-time image processing and documentation of performance, the system can be easily integrated into a processing facility's Hazard Analysis and Critical Control Point quality assurance program. This paper describes the VerifEYE carcass inspection and removal verification system. Results indicating the feasibility of the method, as well as field data collected using a prototype system during four university trials conducted in 2001, are presented. Two successful demonstrations using the prototype system were held at a major U.S. meat processing facility in early 2002.

  1. A new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Ma, Yayun; Han, Shaokun; Wang, Yulin; Liu, Fei; Zhai, Yu

    2018-06-01

    One of the most important goals of research on three-dimensional nonscanning laser imaging systems is the improvement of the illumination system. In this paper, a new three-dimensional nonscanning laser imaging system based on the illumination pattern of a point-light-source array is proposed. This array is obtained using a fiber array connected to a laser array with each unit laser having independent control circuits. This system uses a point-to-point imaging process, which is realized using the exact corresponding optical relationship between the point-light-source array and a linear-mode avalanche photodiode array detector. The complete working process of this system is explained in detail, and the mathematical model of this system containing four equations is established. A simulated contrast experiment and two real contrast experiments which use the simplified setup without a laser array are performed. The final results demonstrate that unlike a conventional three-dimensional nonscanning laser imaging system, the proposed system meets all the requirements of an eligible illumination system. Finally, the imaging performance of this system is analyzed under defocusing situations, and analytical results show that the system has good defocusing robustness and can be easily adjusted in real applications.

  2. Smartphone Spectrometers

    PubMed Central

    Willmott, Jon R.; Mims, Forrest M.; Parisi, Alfio V.

    2018-01-01

    Smartphones are playing an increasing role in the sciences, owing to the ubiquitous proliferation of these devices, their relatively low cost, increasing processing power and their suitability for integrated data acquisition and processing in a ‘lab in a phone’ capacity. There is furthermore the potential to deploy these units as nodes within Internet of Things architectures, enabling massive networked data capture. Hitherto, considerable attention has been focused on imaging applications of these devices. However, within just the last few years, another possibility has emerged: to use smartphones as a means of capturing spectra, mostly by coupling various classes of fore-optics to these units with data capture achieved using the smartphone camera. These highly novel approaches have the potential to become widely adopted across a broad range of scientific e.g., biomedical, chemical and agricultural application areas. In this review, we detail the exciting recent development of smartphone spectrometer hardware, in addition to covering the applications to which these units have been deployed to date. The paper also points forward to the potentially highly influential impacts that such units could have on the sciences in the coming decades. PMID:29342899

  3. A Complete OCR System for Tamil Magazine Documents

    NASA Astrophysics Data System (ADS)

    Kokku, Aparna; Chakravarthy, Srinivasa

    We present a complete optical character recognition (OCR) system for Tamil magazines/documents. All the standard elements of OCR process like de-skewing, preprocessing, segmentation, character recognition, and reconstruction are implemented. Experience with OCR problems teaches that for most subtasks of OCR, there is no single technique that gives perfect results for every type of document image. We exploit the ability of neural networks to learn from experience in solving the problems of segmentation and character recognition. Text segmentation of Tamil newsprint poses a new challenge owing to its italic-like font type; problems that arise in recognition of touching and close characters are discussed. Character recognition efficiency varied from 94 to 97% for this type of font. The grouping of blocks into logical units and the determination of reading order within each logical unit helped us in reconstructing automatically the document image in an editable format.
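
    De-skewing is the most self-contained of the stages listed above. A common approach, sketched below with OpenCV, estimates the dominant text-line angle from a Hough transform of the edge map and rotates the page accordingly; this is a generic illustration, not necessarily the method used in this OCR system, and all thresholds are illustrative.

      # Generic page de-skewing: estimate the dominant near-horizontal line
      # angle with a Hough transform and rotate by the estimated skew.
      # Not necessarily the method of the cited OCR system.
      import cv2
      import numpy as np

      def deskew(gray_u8):
          edges = cv2.Canny(gray_u8, 50, 150)
          lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=150)
          if lines is None:
              return gray_u8
          thetas = lines[:, 0, 1]
          near_horiz = thetas[np.abs(thetas - np.pi / 2) < np.deg2rad(20)]
          if near_horiz.size == 0:
              return gray_u8
          skew_deg = np.degrees(np.median(near_horiz) - np.pi / 2)
          h, w = gray_u8.shape
          M = cv2.getRotationMatrix2D((w / 2, h / 2), skew_deg, 1.0)
          return cv2.warpAffine(gray_u8, M, (w, h), borderValue=255)

      page = (np.random.rand(400, 300) * 255).astype(np.uint8)  # placeholder
      straightened = deskew(page)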

  4. Exogenic and endogenic albedo and color patterns on Europa

    NASA Technical Reports Server (NTRS)

    Mcewen, A. S.

    1986-01-01

    New global and high-resolution multispectral mosaics of Europa have been produced from the Voyager imaging data. Photometric normalizations are based on multiple-image techniques that explicitly account for intrinsic albedo variations through pixel-by-pixel solutions. The exogenic color and albedo pattern on Europa is described by a second-order function of the cosine of the angular distance from the apex of orbital motion. On the basis of this second-order function and of color trends that are different on the leading and trailing hemispheres, the exogenic pattern is interpreted as being due to equilibrium between two dominant processes: (1) impact gardening and (2) magnetospheric interactions, including sulfur-ion implantation and sputtering redistribution. Removal of the model exogenic pattern in the mosaics reveals the endogenic variations, consisting of only two major units: darker (redder) and bright materials. Therefore Europa's visual spectral reflectivity is simple, having one continuous exogenic pattern and two discrete endogenic units.
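
    The "second-order function of the cosine of the angular distance from the apex of orbital motion" presumably takes the polynomial form below; the notation and coefficients are ours for illustration, not taken from the paper:

      A(\psi) = a_0 + a_1 \cos\psi + a_2 \cos^2\psi

    where \psi is the angular distance from the apex of orbital motion and a_0, a_1, a_2 are fitted coefficients describing the exogenic albedo or color pattern.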

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, L; Lambert, C; Nyiri, B

    Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre and was adopted as part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.

  6. Data Processing for the Space-Based Desis Hyperspectral Sensor

    NASA Astrophysics Data System (ADS)

    Carmona, E.; Avbelj, J.; Alonso, K.; Bachmann, M.; Cerra, D.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Kerr, G.; Knodt, U.; Krutz, D.; Krawcyk, H.; Makarau, A.; Miller, R.; Müller, R.; Perkins, R.; Walter, I.

    2017-05-01

    The German Aerospace Center (DLR) and Teledyne Brown Engineering (TBE) have established a collaboration to develop and operate a new space-based hyperspectral sensor, the DLR Earth Sensing Imaging Spectrometer (DESIS). DESIS will provide space-based hyperspectral data in the VNIR with high spectral resolution and near-global coverage. While TBE provides the platform and infrastructure for operation of the DESIS instrument on the International Space Station, DLR is responsible for providing the instrument and the processing software. The DESIS instrument is equipped with characteristics that are novel for an imaging spectrometer, such as high spectral resolution (2.55 nm), a mirror pointing unit and a CMOS sensor operated in rolling shutter mode. We present here an overview of the DESIS instrument and its processing chain, emphasizing the effect of these novel characteristics on the data processing and the final data products. Furthermore, we analyse in more detail the effect of the rolling shutter on the DESIS data and possible mitigation/correction strategies.

  7. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  8. Enriching student concept images: Teaching and learning fractions through a multiple-embodiment approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofen; Clements, M. A. (Ken); Ellerton, Nerida F.

    2015-06-01

    This study investigated how fifth-grade children's concept images of unit fractions changed as a result of their participation in an instructional intervention based on multiple embodiments of fraction concepts. The participants' concept images were examined through pre- and post-teaching written questions and pre- and post-teaching one-to-one verbal interview questions. Results showed that at the pre-teaching stage, the students' concept images of unit fractions were very narrow and mainly linked to area models. However, after the instructional intervention, the fifth graders were able to select and apply a variety of models in response to unit fraction tasks, and their concept images of unit fractions were enriched and linked to capacity, perimeter, linear and discrete models, as well as to area models. Their performances on tests had improved, and their conceptual understandings of unit fractions had developed.

  9. Real-time two-dimensional temperature imaging using ultrasound.

    PubMed

    Liu, Dalong; Ebbini, Emad S

    2009-01-01

    We present a system for real-time 2D imaging of temperature change in tissue media using pulse-echo ultrasound. The frontend of the system is a SonixRP ultrasound scanner with a research interface giving us the capability of controlling the beam sequence and accessing radio frequency (RF) data in real time. The beamformed RF data is streamed to the backend of the system, where the data is processed using a two-dimensional temperature estimation algorithm running on the graphics processing unit (GPU). The estimated temperature is displayed in real time, providing feedback that can be used for real-time control of the heating source. We have verified our system with an elastography tissue-mimicking phantom and in vitro porcine heart tissue; excellent repeatability and sensitivity were demonstrated.

  10. Microscope-integrated optical coherence tomography for image-aided positioning of glaucoma surgery

    NASA Astrophysics Data System (ADS)

    Li, Xiqi; Wei, Ling; Dong, Xuechuan; Huang, Ping; Zhang, Chun; He, Yi; Shi, Guohua; Zhang, Yudong

    2015-07-01

    Most glaucoma surgeries involve creating new aqueous outflow pathways with the use of a small surgical instrument. This article reports a microscope-integrated, real-time, high-speed, swept-source optical coherence tomography (SS-OCT) system with a 1310-nm light source for glaucoma surgery. A special mechanism was designed to produce an adjustable system suitable for use in surgery. A two-graphics-processing-unit architecture was used to speed up the data processing and real-time volumetric rendering. The position of the surgical instrument can be monitored and measured using the microscope and a grid-inserted image from the SS-OCT. Finally, simulation experiments were conducted to assess the effectiveness of this integrated system. Experimental results show that this system is a suitable positioning tool for glaucoma surgery.

  11. Efficient and automatic image reduction framework for space debris detection based on GPU technology

    NASA Astrophysics Data System (ADS)

    Diprima, Francesco; Santoni, Fabio; Piergentili, Fabrizio; Fortunato, Vito; Abbattista, Cristoforo; Amoruso, Leonardo

    2018-04-01

    In recent years, the increasing number of space debris objects has triggered the need for a distributed monitoring system for the prevention of possible space collisions. Space surveillance based on ground telescopes allows monitoring of the traffic of Resident Space Objects (RSOs) in Earth orbit. This space debris surveillance has several applications, such as orbit prediction and conjunction assessment. In this paper, an optimized and performance-oriented pipeline for source extraction intended for the automatic detection of space debris in optical data is proposed. The detection method is based on morphological operations and the Hough Transform for lines. Near real-time detection is obtained using General Purpose computing on Graphics Processing Units (GPGPU). The high degree of processing parallelism provided by GPGPU allows data analysis to be split over thousands of threads in order to process big datasets within a limited computational time. The implementation has been tested on a large and heterogeneous image data set, containing satellites from different orbit ranges imaged in multiple observation modes (i.e. sidereal and object tracking). These images were taken during an observation campaign performed from the EQUO (EQUatorial Observatory) observatory located at the Broglio Space Center (BSC) in Kenya, which is part of the ASI-Sapienza Agreement.
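
    A minimal CPU sketch of the detection chain named above (background suppression, morphological operations, then a Hough transform for lines) is given below using OpenCV; the GPU offload is not shown, and the threshold, kernel size and Hough parameters are illustrative assumptions.

      # Debris-streak detection sketch: background suppression, morphological
      # closing, then a probabilistic Hough transform for line segments.
      # Parameters are illustrative; the GPGPU acceleration is not shown.
      import cv2
      import numpy as np

      def detect_streaks(image_u8):
          bg = cv2.medianBlur(image_u8, 21)            # coarse background
          fg = cv2.subtract(image_u8, bg)
          _, mask = cv2.threshold(fg, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
          mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
          lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=50,
                                  minLineLength=30, maxLineGap=5)
          return [] if lines is None else [l[0] for l in lines]

      frame = (np.random.rand(512, 512) * 255).astype(np.uint8)  # placeholder
      streaks = detect_streaks(frame)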

  12. Stratigraphy and Surface Ages of Dwarf Planet (1) Ceres: Results from Geologic and Topographic Mapping in Survey, HAMO and LAMO Data of the Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

    Wagner, R. J.; Schmedemann, N.; Stephan, K.; Jaumann, R.; Neesemann, A.; Preusker, F.; Kersten, E.; Roatsch, T.; Hiesinger, H.; Williams, D. A.; Yingst, R. A.; Crown, D. A.; Mest, S. C.; Raymond, C. A.; Russell, C. T.

    2017-12-01

    Since March 6, 2015, the surface of dwarf planet (1) Ceres is being imaged by the FC framing camera aboard the Dawn spacecraft from orbit at various altitudes [1]. For this study we focus on images from the Survey orbit phase (4424 km altitude) with spatial resolutions of 400 m/pxl and use images and topographic data from DTMs (digital terrain models) for global geologic mapping. On Ceres' surface cratered plains are ubiquitous, with variations in superimposed crater frequency indicating different ages and processes. Here, we take the topography into account for geologic mapping and discriminate cratered plains units according to their topographic level - high-standing, medium, or low-lying - in order to examine a possible correlation between topography and surface age. Absolute model ages (AMAs) are derived from two impact cratering chronology models discussed in detail by [2] (henceforth termed LDM: lunar-derived model, and ADM: asteroid-derived model). We also apply an improved method to obtain relative ages and AMAs from crater frequency measurements termed Poisson timing analysis [3]. Our ongoing analysis shows no trend that the topographic level has an influence on the age of the geologic units. Both high-standing and low-lying cratered plains have AMAs ranging from 3.5 to 1.5 Ga (LDM), versus 4.2 to 0.5 Ga (ADM). Some areas of measurement within these units, however, show effects of resurfacing processes in their crater distributions and feature an older and a younger age. We use LAMO data (altitude: 375 km; resolution 30 m/pxl) and/or HAMO data (altitude: 1475 km; resolution 140 m/pxl) to study local geologic units and their ages, e.g., smaller impact craters, especially those not dated so far with crater measurements and/or those with specific spectral properties [4], deposits of mass wasting (e.g., landslides), and mountains, such as Ahuna Mons. Crater frequencies are used to set these geologic units into the context of Ceres' time-stratigraphic system and chronologic periods [5]. References: [1] Russell C. T., et al. (2016), Science 353, doi:10.1126/science.aaf4219. [2] Hiesinger H. H. et al. (2016), Science 353, doi:10.1126/science.aaf4759. [3] Michael G. G. et al. (2016), Icarus 277, 279-285. [4] Stephan K. et al. (2017), submitted to Icarus. [5] Mest S. C. et al. (2017), LPSC XLVIII, abstr. No. 2512.

  13. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation Technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such a method, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation are then calculated for each region; finally the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to the properties of each group; three levels of information fusion (data-level, feature-level and decision-level fusion) are applied to HRS image classification. An Artificial Neural Network (ANN) can also perform well in RS image classification. To promote the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
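
    A minimal scikit-learn sketch of the BPNN variant is given below: each pixel's spectrum is the feature vector and a multilayer perceptron trained with back-propagation predicts its class. The data shapes, network size and random data are illustrative placeholders, not the OMIS data used in the paper.

      # BPNN-style hyperspectral pixel classification: an MLP trained with
      # back-propagation on per-pixel spectra. All data here are placeholders.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      n_pixels, n_bands, n_classes = 2000, 64, 5
      X = np.random.rand(n_pixels, n_bands)           # placeholder spectra
      y = np.random.randint(0, n_classes, n_pixels)   # placeholder labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
      bpnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                           random_state=0)
      bpnn.fit(X_tr, y_tr)
      print("overall accuracy:", bpnn.score(X_te, y_te))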

  14. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

    For cone-beam computed tomography (CBCT), transversal shifts of the rotation center inevitably exist, which results in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a prescribed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that the textures were well preserved. The study results also support the feasibility of applying the proposed method to other imaging modalities.
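
    The sketch below illustrates the second calibration step: candidate transversal shifts are scored by an approximate L0 norm of the gradient of the reconstructed image, and the shift with the smallest cost is kept. The reconstruction call is a placeholder for an FDK reconstruction with the shifted geometry, and the step size and search range are illustrative.

      # Score candidate transversal shifts by the (approximate) L0 norm of the
      # gradient of the reconstructed image. 'reconstruct' is a placeholder
      # for an FDK reconstruction using the shifted geometry.
      import numpy as np

      def gradient_l0(image, eps=1e-3):
          gy, gx = np.gradient(image.astype(float))
          return int(np.count_nonzero(np.hypot(gx, gy) > eps))

      def calibrate_shift(projections, reconstruct, search_range, step=0.1):
          best_shift, best_cost = None, np.inf
          for shift in np.arange(search_range[0], search_range[1] + step, step):
              cost = gradient_l0(reconstruct(projections, shift))
              if cost < best_cost:
                  best_shift, best_cost = shift, cost
          return best_shift

      # Usage (hypothetical): shift = calibrate_shift(projs, fdk, (-2.0, 2.0))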

  15. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.

    The Defense Meteorological Satellite Program’s Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, the existence of incompatible digital number (DN) values and geometric errors severely limit application of nighttime light image data on multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (summed DN value of pixels in a nighttime light image) maintains apparent increase trends with relatively large GDP growth rates but does not increase or decrease with relatively small GDP growth rates. As nighttime light is a sensitive indicator for economic activity, the temporally consistent trends between sum light and GDP growth rate imply that brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, through analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008 respectively and the US suffered from nighttime lights decay in large areas after 2001.
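
    For reference, "sum light" is simply the summed DN value of the pixels of a nighttime-lights image within a region, as sketched below; the mask and raster are placeholders, and DMSP-OLS DN values are 6-bit integers in the range 0-63.

      # "Sum light" = summed DN value of the pixels of a nighttime-lights
      # image within a region of interest. Raster and mask are placeholders.
      import numpy as np

      def sum_light(dn_image, region_mask=None):
          dn = dn_image.astype(np.int64)
          if region_mask is not None:
              dn = dn * region_mask                 # keep only the region
          return int(dn.sum())

      dn_image = np.random.randint(0, 64, size=(400, 600))   # placeholder raster
      mask = np.ones_like(dn_image)                          # whole-scene region
      print(sum_light(dn_image, mask))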

  17. Geologic Map of the MTM-85000 Quadrangle, Planum Australe Region of Mars

    USGS Publications Warehouse

    Herkenhoff, Ken E.

    2001-01-01

    Introduction The polar deposits on Mars probably record martian climate history over the last 10^7 to 10^9 years (for example, Thomas and others, 1992). The area shown on this map includes layered polar deposits and residual polar ice, as well as some exposures of older terrain. Howard and others (1982) noted that an area (at lat 84.8 S., long 356 W.) near a 23-km diameter impact crater (Plaut and others, 1988) appears to have undergone recent deposition, as evidenced by the partial burial of secondary craters. Herkenhoff and Murray (1990a) mapped this area as a mixture of frost and defrosted ground and suggested that the presence of frost throughout the year stabilizes dust deposited in this area. This quadrangle was mapped using high-resolution Mariner 9 (table 1) and Viking Orbiter images in order to study the relations among erosional, cratering, and depositional processes on the polar layered deposits and to search for further evidence of recent deposition. Published geologic maps of the south polar region of Mars are based on images acquired by Mariner 9 (Condit and Soderblom, 1978; Scott and Carr, 1978) and the Viking Orbiters (Tanaka and Scott, 1987). The extent of the layered deposits mapped previously from Mariner 9 data is different from that mapped using Viking Orbiter images, and the present map agrees with the map by Tanaka and Scott (1987): the layered deposits extend to the northern boundary of the map area. However, the oldest unit in this area is mapped as undivided material (unit HNu) rather than the hilly unit in the plateau sequence (unit Nplh; Tanaka and Scott, 1987). The residual polar ice cap, areas of partial frost cover, the layered deposits, and two nonvolatile surface units (the dust mantle and the dark material) were mapped by Herkenhoff and Murray (1990a) at 1:2,000,000 scale using a color mosaic of Viking Orbiter images. This mosaic was used to confirm the identification of the non-volatile Amazonian units for this map and to test hypotheses for their origin and evolution. The colors and albedos of these units, as measured in places both within and outside of this map area, are presented in table 2 and figure 1. The red/violet ratio image was particularly useful in distinguishing the various low-albedo materials, as brightness variations due to topography are essentially removed in such ratio images and color variations are easily seen. Because the resolution of the color mosaics is not sufficient to map these units in detail at 1:500,000 scale, contacts between them were recognized and mapped using higher resolution black and white Viking and Mariner 9 images. The largest impact crater in the layered deposits, 23 km in diameter at lat 84.5 S., long 359 W., now named 'McMurdo,' was recognized by Plaut and others (1988). The northern rim of this crater is missing, perhaps due to erosion of the layered deposits in which it was formed (fig. 2). Secondary craters from this impact are not observed north of the crater but are abundant to the south. Although the crater statistics are poor (only 16 likely impact craters found in Viking Orbiter images of the south polar layered deposits), these observations generally support the conclusions that the south polar layered deposits are Late Amazonian in age and that some areas have been exposed for about 120 million years (Plaut and others, 1988; Herkenhoff and Murray, 1992, 1994; Herkenhoff, 1998). However, the recent cratering flux on Mars is poorly constrained, so inferred ages of surface units are uncertain.
The Viking Orbiter 2 images used to construct the base were taken during the southern summer of 1977, with resolutions no better than 130 m/pixel. A digital mosaic of Mariner 9 images also was constructed to aid in mapping. The Mariner 9 images were taken during the southern summer of 1971 and 1972 and have resolutions as high as 85 m/pixel (table 1). However, the usefulness of the Mariner 9 mosaic is limited by incomplete coverage.

  18. Forensic imaging tools for law enforcement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SMITHPETER,COLIN L.; SANDISON,DAVID R.; VARGO,TIMOTHY D.

    2000-01-01

    Conventional methods of gathering forensic evidence at crime scenes are encumbered by difficulties that limit local law enforcement efforts to apprehend offenders and bring them to justice. Working with a local law-enforcement agency, Sandia National Laboratories has developed a prototype multispectral imaging system that can speed up the investigative search task and provide additional and more accurate evidence. The system, called the Criminalistics Light-imaging Unit (CLU), has demonstrated the capabilities of locating fluorescing evidence at crime scenes under normal lighting conditions and of imaging other types of evidence, such as untreated fingerprints, by direct white-light reflectance. CLU employs state of the art technology that provides for viewing and recording of the entire search process on videotape. This report describes the work performed by Sandia to design, build, evaluate, and commercialize CLU.

  19. INFINITY harvest

    NASA Image and Video Library

    2012-05-07

    Janice Hueschen of Innovative Imaging & Research Corp. at Stennis Space Center helps students from Benjamin E. Mays Preparatory School in New Orleans harvest lettuce at the INFINITY at NASA Stennis Space Center facility May 7, 2012. The Louisiana students assisted in the first harvest of lettuce from the Controlled Environment Agriculture unit, which uses an aeroponic process that involves no soil and advanced LED lighting techniques.

  20. Rapid measurement of cotton fiber maturity and fineness by image analysis microscopy using the Cottonscope®

    USDA-ARS?s Scientific Manuscript database

    Two of the important cotton fiber quality and processing parameters are fiber maturity and fineness. Fiber maturity is the degree of development of the fiber’s secondary wall, and fiber fineness is a measure of the fiber’s linear density and can be expressed as mass per unit length. A well-known m...

  1. The "c" Equivalence Principle and the Correct form of Writing Maxwell's Equations

    ERIC Educational Resources Information Center

    Heras, Jose A.

    2010-01-01

    It is well known that the speed c_u is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with…

  2. Autofocus system and autofocus method for focusing on a surface

    DOEpatents

    O'Neill, Mary Morabito

    2017-05-23

    An autofocus system includes an imaging device, a lens system and a focus control actuator that is configured to change a focus position of the imaging device in relation to a stage. An electronic control unit is configured to control the focus control actuator to a plurality of predetermined focus positions, and to activate the imaging device to obtain an image at the predetermined positions and then apply a spatial filter to the obtained images, generating a filtered image for each of the obtained images. The control unit determines a focus score for the filtered images such that the focus score corresponds to a degree of focus in the obtained images. The control unit identifies a best focus position by comparing the focus scores of the filtered images, and controls the focus control actuator to the best focus position corresponding to the highest focus score.

  3. Monte Carlo-based fluorescence molecular tomography reconstruction method accelerated by a cluster of graphic processing units.

    PubMed

    Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming

    2011-02-01

    High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffusive optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and the different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-unmatched boundaries from the MC simulation, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.

  4. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  5. SSUSI-Lite: a far-ultraviolet hyper-spectral imager for space weather remote sensing

    NASA Astrophysics Data System (ADS)

    Ogorzalek, Bernard; Osterman, Steven; Carlsson, Uno; Grey, Matthew; Hicks, John; Hourani, Ramsey; Kerem, Samuel; Marcotte, Kathryn; Parker, Charles; Paxton, Larry J.

    2015-09-01

    SSUSI-Lite is a far-ultraviolet (115-180 nm) hyperspectral imager for monitoring space weather. The SSUSI and GUVI sensors, its predecessors, have demonstrated their value as space weather monitors. SSUSI-Lite is a refresh of the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) design that has flown on the Defense Meteorological Satellite Program (DMSP) spacecraft F16 through F19. The refresh updates the 25-year-old design and ensures that the next generation of SSUSI/GUVI sensors can be accommodated on any number of potential platforms. SSUSI-Lite maintains the same optical layout as SSUSI, includes updates to key functional elements, and reduces the sensor volume, mass, and power requirements. SSUSI-Lite contains an improved scanner design that results in precise mirror pointing and allows for variable scan profiles. The detector electronics have been redesigned to employ all-digital pulse processing. The largest decrease in volume, mass, and power has been obtained by consolidating all control and power electronics into one data processing unit.

  6. Detection and Length Estimation of Linear Scratch on Solid Surfaces Using an Angle Constrained Ant Colony Technique

    NASA Astrophysics Data System (ADS)

    Pal, Siddharth; Basak, Aniruddha; Das, Swagatam

    In many manufacturing areas the detection of surface defects is one of the most important processes in quality control. Currently, in order to detect small scratches on solid surfaces, most material manufacturing industries rely primarily on visual inspection. In this article we propose a hybrid computational intelligence technique to automatically detect a linear scratch on a solid surface and simultaneously estimate its length (in pixel units). The approach is based on a swarm intelligence algorithm called Ant Colony Optimization (ACO) and on image preprocessing with Wiener and Sobel filters as well as the Canny edge detector. The ACO algorithm is mostly used to compensate for the broken parts of the scratch. Our experimental results confirm that the proposed technique can be used for detecting scratches in noisy and degraded images, even when it is very difficult for conventional image processing to distinguish the scratch area from its background.
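
    A minimal sketch of the preprocessing chain named above (Wiener denoising, Sobel gradients, Canny edges) is given below; the ACO stage that bridges broken scratch segments is not shown, and all filter parameters are illustrative.

      # Preprocessing for scratch detection: Wiener denoising, Sobel gradient
      # magnitude, Canny edge map. The ACO bridging step is not shown.
      import cv2
      import numpy as np
      from scipy.signal import wiener

      def preprocess(gray_u8):
          denoised = wiener(gray_u8.astype(float), mysize=(5, 5))
          denoised = np.nan_to_num(denoised).clip(0, 255).astype(np.uint8)
          gx = cv2.Sobel(denoised, cv2.CV_32F, 1, 0, ksize=3)
          gy = cv2.Sobel(denoised, cv2.CV_32F, 0, 1, ksize=3)
          gradient_mag = cv2.magnitude(gx, gy)
          edges = cv2.Canny(denoised, 50, 150)   # candidate scratch fragments
          return gradient_mag, edges

      surface = (np.random.rand(256, 256) * 255).astype(np.uint8)  # placeholder
      grad, edges = preprocess(surface)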

  7. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking.

    PubMed

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J; Jian, Yifan; Sarunic, Marinko V

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  8. Practical Considerations for Optic Nerve Estimation in Telemedicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnowski, Thomas Paul; Aykac, Deniz; Chaum, Edward

    The projected increase in diabetes in the United States and worldwide has created a need for broad-based, inexpensive screening for diabetic retinopathy (DR), an eye disease which can lead to vision impairment. A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion / anomaly detection is a low-cost way of achieving broad-based screening. In this work we report on the effect of quality estimation on an optic nerve (ON) detection method with a confidence metric. We report on an improvement of the fusion technique using a data set from an ophthalmologist's practice, then show the results of the method as a function of image quality on a set of images from an on-line telemedicine network collected in Spring 2009 and another broad-based screening program. We show that the fusion method, combined with quality estimation processing, can improve detection performance and also provide a method for utilizing a physician-in-the-loop for images that may exceed the capabilities of automated processing.

  9. FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.

    PubMed

    Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan

    2017-07-01

    In ultrasound image analysis, the speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains as a challenge for the speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values, when the tissue deformation is large. The major drawback of these methods is the high computational complexity. Even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs on handling different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on the NVIDIA graphic card (GeForce GTX 580).

  10. National perspective on in-hospital emergency units in Iraq

    PubMed Central

    Lafta, Riyadh K.; Al-Nuaimi, Maha A.

    2013-01-01

    Background: Hospitals play a crucial role in providing communities with essential medical care during times of disasters. The emergency department is the most vital component of hospitals' inpatient business. In Iraq, at present, there are many casualties that cause a burden of work and the need for structural assessment, equipment updating and evaluation of process. Objective: To examine the current pragmatic functioning of the existing set-up of services of in-hospital emergency departments within some general hospitals in Baghdad and Mosul in order to establish a mechanism for future evaluation for the health services in our community. Methods: A cross-sectional study was employed to evaluate the structure, process and function of six major hospitals with emergency units: four major hospitals in Baghdad and two in Mosul. Results: The six surveyed emergency units are distinct units within general hospitals that serve (collectively) one quarter of the total population. More than one third of these units feature observation unit beds, laboratory services, imaging facilities, pharmacies with safe storage, and ambulatory entrance. Operation room was found only in one hospital's reception and waiting area. Consultation/track area, cubicles for infection control, and discrete tutorial rooms were not available. Patient assessment was performed (although without adequate privacy). The emergency specialist, family medicine specialist and interested general practitioner exist in one-third of the surveyed units. Psychiatrist, physiotherapists, occupational therapists, and social work links are not available. The shortage in medication, urgent vaccines and vital facilities is an obvious problem. Conclusions: Our emergency unit's level and standards of care are underdeveloped. The inconsistent process and inappropriate environments need to be reconstructed. The lack of drugs, commodities, communication infrastructure, audit and training all require effective build up. PMID:25003053

  11. KSC-04PD-1812

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. In the Orbiter Processing Facility, United Space Alliance worker Craig Meyer fits an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle's Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  12. KSC-04pd1812

    NASA Image and Video Library

    2004-09-17

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, United Space Alliance worker Craig Meyer fits an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle’s Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  13. Span graphics display utilities handbook, first edition

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Green, J. L.; Newman, R.

    1985-01-01

    The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether the graphic images are representative of satellite observations or theoretical modeling, and whether they are of device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.

  14. Parallel ptychographic reconstruction

    DOE PAGES

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; ...

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  15. MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.

    PubMed

    Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris

    2017-05-01

    Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.

  16. Sensing system for detection and control of deposition on pendant tubes in recovery and power boilers

    DOEpatents

    Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E

    2005-06-21

    A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95), in which a 2-D image formed by the video data input is captured, includes a low-pass filter for performing noise filtering of said video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of said boiler. It derives the 3-D structure of the deposition on the pendant tubes in the boiler and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant pendant tube cleaning and operating systems.

  17. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
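
    The standard four-phase variant of this idea recovers per-pixel phase (and hence range) and intensity from correlations with the reference at 0, 90, 180 and 270 degrees, as sketched below. This is the textbook formulation, offered only to illustrate the principle; it is not necessarily the exact processing claimed in the patent, and the modulation frequency and arrays are illustrative.

      # Textbook four-phase demodulation for a phase-sensitive 3-D camera.
      # Illustrative only; not the patent's exact processing.
      import numpy as np

      C_LIGHT = 3.0e8        # speed of light, m/s
      F_MOD = 20.0e6         # modulation frequency (illustrative), Hz

      def demodulate(q0, q90, q180, q270):
          # q*: per-pixel correlation samples at the four reference phases.
          phase = np.mod(np.arctan2(q270 - q90, q0 - q180), 2 * np.pi)
          amplitude = 0.5 * np.hypot(q270 - q90, q0 - q180)
          rng = C_LIGHT * phase / (4 * np.pi * F_MOD)   # range within ambiguity
          return rng, amplitude

      q = [np.random.rand(64, 64) for _ in range(4)]    # placeholder samples
      range_map, intensity_map = demodulate(*q)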

  18. First results from the PROTEIN experiment on board the International Space Station

    NASA Astrophysics Data System (ADS)

    Decanniere, Klaas; Potthast, Lothar; Pletser, Vladimir; Maes, Dominique; Otalora, Fermin; Gavira, Jose A.; Pati, Luis David; Lautenschlager, Peter; Bosch, Robert

    On March 15 2009 Space Shuttle Discovery was launched, carrying the Process Unit of the Protein Crystallization Diagnostics Facility (PCDF) to the International Space Station. It contained the PROTEIN experiment, aiming at the in-situ observation of nucleation and crystal growth behaviour of proteins. After installation in the European Drawer Rack (EDR) and connection to the PCDF Electronics Unit, experiment runs were performed continuously for 4 months. It was the first time that protein crystallization experiments could be modified on-orbit in near real-time, based on data received on ground. The data included pseudo-dark field microscope images, interferograms, and Dynamic Light Scattering data. The Process Unit with space grown crystals was returned to ground on July 31 2009. Results for the model protein glucose isomerase (Glucy) from Streptomyces rubiginosus crystallized with ammonium sulfate will be reported concerning nucleation and the growth from Protein and Impurities Depletion Zones (PDZs). In addition, results of x-ray analyses for space-grown crystals will be given.

  19. Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2011-06-01

With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, a deconvolution algorithm is needed to recover the retinal images. In this paper, a blind deconvolution technique called the Incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is used only as an initial value of our algorithm. We also implement the Incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the Incremental Wiener filter reduces noise and improves image quality.
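
    As background for the restoration step, a classical single-pass Wiener deconvolution in the Fourier domain is sketched below. The paper's Incremental Wiener filter is a blind variant that also refines the PSF across iterations and runs on the GPU; here the PSF is held fixed, the computation is CPU/NumPy, and the noise-to-signal ratio is a hand-picked assumption.

```python
# Classical single-step Wiener deconvolution (CPU/NumPy sketch). The paper's
# method is an *incremental*, blind variant that also updates the PSF; here
# the PSF is fixed and nsr (noise-to-signal ratio) is an assumption.
import numpy as np

def wiener_deconvolve(image: np.ndarray, psf: np.ndarray, nsr: float = 0.01):
    """Restore `image` blurred by `psf` (both 2-D, PSF smaller than image)."""
    # Pad the PSF to image size and centre it at the origin for the FFT.
    psf_pad = np.zeros_like(image, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    # Wiener filter: H* / (|H|^2 + NSR)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blurred = rng.random((128, 128))           # stand-in for an AOSLO frame
    psf = np.outer(np.hanning(9), np.hanning(9))
    restored = wiener_deconvolve(blurred, psf, nsr=0.05)
    print(restored.shape, float(restored.mean()))
```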

  20. A color-corrected strategy for information multiplexed Fourier ptychographic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Mingqun; Zhang, Yuzhen; Chen, Qian; Sun, Jiasong; Fan, Yao; Zuo, Chao

    2017-12-01

Fourier ptychography (FP) is a novel computational imaging technique that provides both a wide field of view (FoV) and high-resolution (HR) imaging capacity for biomedical imaging. Combined with information multiplexing technology, wavelength-multiplexed (or color-multiplexed) FP imaging can be implemented by lighting up R/G/B LED units simultaneously. Furthermore, an HR image can be recovered at each wavelength from the multiplexed dataset. This enhances the efficiency of data acquisition. However, since the same dataset of intensity measurements is used to recover the HR image at each wavelength, the mean value in each channel would converge to the same value. In this paper, a color correction strategy embedded in the multiplexing FP scheme is demonstrated, which is termed color-corrected wavelength multiplexed Fourier ptychography (CWMFP). Three images captured by turning on the LED array in R/G/B are required as a priori knowledge to improve the accuracy of reconstruction in the recovery process. Using the reported technique, the redundancy requirement of information multiplexed FP is reduced. Moreover, the accuracy of reconstruction at each channel is improved with correct color reproduction of the specimen.
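
    One simple way to picture the role of the three prior captures is a per-channel mean correction: each recovered channel is rescaled so that its mean matches the corresponding full-array R, G, or B reference image. This is only an illustration of the idea behind the prior, not the CWMFP reconstruction itself.

```python
# Illustrative channel-mean correction: rescale each recovered HR channel so
# its mean matches a reference capture taken with the full LED array on in
# that color. This sketches the role of the three prior images only.
import numpy as np

def correct_channel_means(recovered: dict, references: dict) -> dict:
    """recovered/references: {'R': 2-D array, 'G': ..., 'B': ...}."""
    corrected = {}
    for band, img in recovered.items():
        scale = references[band].mean() / max(img.mean(), 1e-12)
        corrected[band] = img * scale
    return corrected

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    rec = {b: rng.random((64, 64)) for b in "RGB"}   # channels with identical means
    ref = {"R": rng.random((64, 64)) * 0.9,
           "G": rng.random((64, 64)) * 0.6,
           "B": rng.random((64, 64)) * 0.3}
    out = correct_channel_means(rec, ref)
    print({b: round(float(v.mean()), 3) for b, v in out.items()})
```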

  1. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, which significantly reduces ghosting and blurring artifacts due to subject motion, was proposed. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
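
    The objective driving the search is a sharpness metric, the entropy of spatial gradients; a minimal version is sketched below (the paper's exact gradient operator and normalization may differ).

```python
# Sharpness metric used as the search objective: entropy of the spatial
# gradient magnitude (lower entropy = sharper image). This is a minimal
# sketch; the paper's normalization may differ.
import numpy as np

def gradient_entropy(image: np.ndarray, eps: float = 1e-12) -> float:
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    p = mag / (mag.sum() + eps)                # normalise to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

if __name__ == "__main__":
    sharp = np.zeros((64, 64)); sharp[20:40, 20:40] = 1.0
    blurred = sharp.copy()
    for _ in range(10):                        # crude box blur as a motion stand-in
        blurred = (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0) +
                   np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1) + blurred) / 5
    print(gradient_entropy(sharp), gradient_entropy(blurred))  # sharp < blurred
```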

  2. An Efficient Computational Framework for the Analysis of Whole Slide Images: Application to Follicular Lymphoma Immunohistochemistry

    PubMed Central

    Samsi, Siddharth; Krishnamurthy, Ashok K.; Gurcan, Metin N.

    2012-01-01

Follicular Lymphoma (FL) is one of the most common non-Hodgkin lymphomas in the United States. Diagnosis and grading of FL is based on the review of histopathological tissue sections under a microscope and is influenced by human factors such as fatigue and reader bias. Computer-aided image analysis tools can help improve the accuracy of diagnosis and grading and act as another tool at the pathologist’s disposal. Our group has been developing algorithms for identifying follicles in immunohistochemical images. These algorithms have been tested and validated on small images extracted from whole slide images. However, the use of these algorithms for analyzing the entire whole slide image requires significant changes to the processing methodology since the images are relatively large (on the order of 100k × 100k pixels). In this paper we discuss the challenges involved in analyzing whole slide images and propose potential computational methodologies for addressing these challenges. We discuss the use of parallel computing tools on commodity clusters and compare performance of the serial and parallel implementations of our approach. PMID:22962572
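
    A minimal illustration of the decomposition needed for whole slide images is to split the slide into tiles and farm the tiles out to a worker pool; on a commodity cluster the same decomposition maps onto separate jobs or MPI ranks. The per-tile function below is a placeholder, not the follicle-identification algorithm, and the tile size is an assumption.

```python
# Minimal tiling + multiprocessing sketch for a whole-slide image. The
# per-tile `analyze_tile` is a placeholder, not the follicle-detection
# algorithm; on a cluster the same decomposition maps onto separate jobs.
import numpy as np
from multiprocessing import Pool

TILE = 1024                                    # tile edge length in pixels (assumption)

def analyze_tile(args):
    """Placeholder per-tile analysis: fraction of 'stained' pixels."""
    tile, row, col = args
    return row, col, float((tile > 0.5).mean())

def tile_generator(slide, tile=TILE):
    for r in range(0, slide.shape[0], tile):
        for c in range(0, slide.shape[1], tile):
            yield slide[r:r + tile, c:c + tile], r, c

if __name__ == "__main__":
    slide = np.random.default_rng(4).random((4096, 4096))   # stand-in for a WSI region
    with Pool(processes=4) as pool:
        results = pool.map(analyze_tile, tile_generator(slide))
    print(len(results), results[0])
```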

  3. IR sensors and imagers in networked operations

    NASA Astrophysics Data System (ADS)

    Breiter, Rainer; Cabanski, Wolfgang

    2005-05-01

    "Network-centric Warfare" is a common slogan describing an overall concept of networked operation of sensors, information and weapons to gain command and control superiority. Referring to IR sensors, integration and fusion of different channels like day/night or SAR images or the ability to spread image data among various users are typical requirements. Looking for concrete implementations the German Army future infantryman IdZ is an example where a group of ten soldiers build a unit with every soldier equipped with a personal digital assistant (PDA) for information display, day photo camera and a high performance thermal imager for every unit. The challenge to allow networked operation among such a unit is bringing information together and distribution over a capable network. So also AIM's thermal reconnaissance and targeting sight HuntIR which was selected for the IdZ program provides this capabilities by an optional wireless interface. Besides the global approach of Network-centric Warfare network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA replacing analog video networks or specific point to point interfaces. The resulting architecture can provide capabilities of data fusion from e.g. IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study Ethernet technology was chosen for network implementation and a set of electronics was developed for capturing video data of IR and day imagers and Gigabit Ethernet video distribution. The demonstrator setup follows the requirements of current and future vehicles having a set of day and night imager cameras and a crew station with several members. Replacing the analog video path by a digital video network also makes it easy to implement embedded training by simply feeding the network with simulation data. The paper addresses the special capabilities, requirements and design considerations of IR sensors and imagers in applications like thermal weapon sights and UAVs for networked operating infantry forces.

  4. A Detailed Geomorphological Sketch Map of Titan's Afekan Crater Region

    NASA Astrophysics Data System (ADS)

    Schoenfeld, A.; Malaska, M. J.; Lopes, R. M. C.; Le Gall, A. A.; Birch, S. P.; Hayes, A.

    2014-12-01

Due to Titan's uniquely thick atmosphere and organic haze layers, the most detailed images (with a resolution of 300 meters per pixel) of the Saturnian moon's surface exist as Synthetic Aperture Radar (SAR) images taken by Cassini's RADAR instrument. Using the SAR data, we have been putting together detailed geomorphological sketch maps of various Titan regions in an effort to piece together its geologic history. We initially examined the Afekan region of Titan due to its extensive SAR coverage. Features described in the Afekan region fall under the categories (in order of geologic age, extrapolated from their relative emplacement) of hummocky, labyrinthic, plains, and dunes. During our mapping effort, we also divided each terrain category into several different subclasses on a local level. Our map offers a chance to present and analyze the distribution, relationships, and potential formation hypotheses of the different terrains. In bulk, we find evidence for both aeolian and fluvial processes. A particularly important unit found in the Afekan region is the unit designated "undifferentiated plains", or the "Blandlands" of Titan, a mid-latitude terrain unit comprising 25% of the moon's surface. The undifferentiated plains are notable for their relative featurelessness in radar and infrared. Our interpretation is that they form a fill unit in and around Afekan crater and other hummocky/mountainous units. The plains suggest that Titan's geomorphology is tied to ongoing erosional forces and sediment deposition. Other datasets used in characterizing Titan's various geomorphological units include information obtained from radiometry, infrared (ISS), and spectrometry (VIMS). We will present the detailed geomorphological sketch map with all the terrain units assigned and labeled.

  5. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

Assessing disease activity is a prerequisite for an adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission harbors the risk of complications due to the acquisition of biopsies and results in a delay of diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging might serve as an unparalleled technique that allows the real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF) and second-harmonic generation (SHG). After the measurement, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images. An optimized feature set was utilized to predict histological index levels based on a linear classifier. Based on the automated prediction, the diagnosis time interval is decreased. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.

  6. Good reasons to implement quality assurance in nationwide breast cancer screening programs in Croatia and Serbia: results from a pilot study.

    PubMed

    Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje

    2011-04-01

The purpose of this study is to investigate the need for and the possible achievements of a comprehensive QA programme and to look at the effects of simple corrective actions on image quality in Croatia and in Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was the initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in a selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented, and the same parameters were then re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as these were related to working habits in mammography units, such as film processing and darkroom conditions. It has been demonstrated how simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters such as OD, gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to a larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  7. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    NASA Astrophysics Data System (ADS)

    Khan, M. R.; de Bie, C. A. J. M.; van Keulen, H.; Smaling, E. M. A.; Real, R.

    2010-02-01

Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-Daily satellite image time series of Andalucia, Spain, acquired since 1998 by the SPOT Vegetation Instrument, in combination with reported crop area statistics, were used to produce the required crop maps. Firstly, the 10-daily (1998-2006) 1-km resolution SPOT-Vegetation NDVI-images were used to stratify the study area into 45 map units through an iterative unsupervised classification process. Each unit represents an NDVI-profile showing changes in vegetation greenness over time, which is assumed to relate to the types of land cover and land use present. Secondly, the areas of the NDVI-units and the reported cropped areas by municipality were used to disaggregate the crop statistics. Adjusted R-squares were 98.8% for rainfed wheat, 97.5% for rainfed sunflower, and 76.5% for barley. Relating statistical data on areas cropped by municipality with the NDVI-based unit map showed that the selected crops were significantly related to specific NDVI-based map units. Other NDVI-profiles did not relate to the studied crops and represented other types of land use or land cover. The results were validated using primary field data. These data were collected by the Spanish government from 2001 to 2005 through grid sampling within agricultural areas; each grid (block) contains three 700 m × 700 m segments. The validation showed 68%, 31% and 23% variability explained (adjusted R-squares) between the three produced maps and the thousands of segment data. The relatively low values are mainly caused by variability within the delineated NDVI-units, which are internally heterogeneous; variability between units is properly captured. The maps must accordingly be considered "small scale maps". Because they are based on hypertemporal images, these maps can be used to monitor crop performance of specific cropped areas, and early warning thus becomes more location- and crop-specific.
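
    The disaggregation step can be posed as a constrained regression: the reported crop area of each municipality is modeled as the sum over NDVI units of the unit's area within that municipality times an unknown per-unit crop fraction. The sketch below solves this with non-negative least squares; the paper's actual regression set-up and diagnostics may differ.

```python
# Disaggregation posed as least squares: reported crop area of municipality m
# ≈ sum over NDVI units u of (area of unit u inside m) * (crop fraction of u).
# Solving for the per-unit fractions locates the crop spatially. Only the core
# idea is shown, not the paper's exact regression.
import numpy as np
from scipy.optimize import nnls

def crop_fractions(unit_areas: np.ndarray, reported_crop_area: np.ndarray):
    """unit_areas: (n_municipalities, n_units) km² of each unit per municipality.
    reported_crop_area: (n_municipalities,) km² of the crop per municipality."""
    fractions, _ = nnls(unit_areas, reported_crop_area)
    return np.clip(fractions, 0.0, 1.0)        # fractions must lie in [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    A = rng.random((40, 6)) * 100              # 40 municipalities, 6 NDVI units
    true_frac = np.array([0.8, 0.1, 0.0, 0.4, 0.0, 0.05])
    y = A @ true_frac + rng.normal(0, 1, 40)   # noisy reported statistics
    print(np.round(crop_fractions(A, y), 2))   # should be close to true_frac
```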

  8. Time scales of erosion and deposition recorded in the residual south polar cap of Mars

    NASA Astrophysics Data System (ADS)

    Thomas, P. C.; Calvin, W. M.; Gierasch, P.; Haberle, R.; James, P. B.; Sholes, S.

    2013-08-01

    The residual south polar cap (RSPC) of Mars has been subject to competing processes during recent Mars years of high resolution image coverage: continuing erosion of scarps while the maximum extent grows as well as shrinks (Piqueux, S., Christensen, P.R. [2008]. J. Geophys. Res. (Planets) 113, 2006; James, P.B., Thomas, P.C., Malin, M.C. [2010]. Icarus 208, 82-85). Additionally, the cap has a variety of morphologies and erosion (scarp retreat) rates (Thomas, P.C., James, P.B., Calvin, W.M., Haberle, R., Malin, M.C. [2009]. Icarus 203, 352-375). Do these different forms and competing processes indicate an aging and possibly disappearing cap, a growing cap, or a fluctuating cap, and is it possible to infer the timescales of the processes acting on the RSPC? Here we use the latest imaging data from Mars' southern summer in Mars year 30 (Calendar year 2011) to evaluate erosion rates of forms in the RSPC over 6 Mars years, and to map more fully features whose sizes can be used to predict deposit ages. Data through Mars year 30 show that scarp retreat rates in the RSPC have remained approximately the same for at least 6 Mars years and that these rates of erosion also apply approximately over the past 21 Mars years. The thicker units appear to have undergone changes in the locations of new pit formation about 30-50 Mars years ago. The thinner units have some areas that are possibly 80 Mars years old, with some younger materials having accumulated more than a meter in thickness since Mars year 9. Formation of the thicker units probably required over 100 Mars years. The upper surfaces of most areas, especially the thicker units, show little change at the few-cm level over the last 2 Mars years. This observation suggests that current conditions are substantially different from those when the thicker units were deposited. A prime characteristic of the evolution of the RSPC is that some changes are progressive, such as those involving scarp retreat, while others, such as the geography of initiation of new pits or the areal coverage of ice, appear to be more episodic.

  9. A New Look at the American West: Lessons for Secondary History and Literature Classes.

    ERIC Educational Resources Information Center

    Eastman, Gloria, Ed.; Miller, Barbara, Ed.

    This curriculum unit analyzes the common cultural images people have about the western United States and how incomplete those images are. The lessons are divided into five sections. The first section, "Investigating Images and Assumptions," presents four lessons to engage students in beginning the examination of their images and…

  10. Fiber-optic fringe projection with crosstalk reduction by adaptive pattern masking

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2017-02-01

    To enable in-process inspection of industrial manufacturing processes, measuring devices need to fulfill time and space constraints, while also being robust to environmental conditions, such as high temperatures and electromagnetic fields. A new fringe projection profilometry system is being developed, which is capable of performing the inspection of filigree tool geometries, e.g. gearing elements with tip radii of 0.2 mm, inside forming machines of the sheet-bulk metal forming process. Compact gradient-index rod lenses with a diameter of 2 mm allow for a compact design of the sensor head, which is connected to a base unit via flexible high-resolution image fibers with a diameter of 1.7 mm. The base unit houses a flexible DMD based LED projector optimized for fiber coupling and a CMOS camera sensor. The system is capable of capturing up to 150 gray-scale patterns per second as well as high dynamic range images from multiple exposures. Owing to fiber crosstalk and light leakage in the image fiber, signal quality suffers especially when capturing 3-D data of technical surfaces with highly varying reflectance or surface angles. An algorithm is presented, which adaptively masks parts of the pattern to reduce these effects via multiple exposures. The masks for valid surface areas are automatically defined according to different parameters from an initial capture, such as intensity and surface gradient. In a second step, the masks are re-projected to projector coordinates using the mathematical model of the system. This approach is capable of reducing both inter-pixel crosstalk and inter-object reflections on concave objects while maintaining measurement durations of less than 5 s.

  11. Image acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Reardon, Frank J.; Salutz, James R.

    1991-07-01

The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate with the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.

  12. Video rate morphological processor based on a redundant number representation

    NASA Astrophysics Data System (ADS)

    Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.

    1992-03-01

This paper presents a video rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology--the umbra transform and threshold decomposition--has prompted us to propose a novel technique which applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with the base of 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional twos complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of a digit-level systolic array. Individual processing units and small memory elements create a pipeline. The memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field programmable gate arrays by Xilinx. This paper justifies a new approach to logic design: the decomposition of Boolean functions instead of Boolean minimization.
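
    Functionally, the operations the systolic hardware implements are ordinary gray-scale erosions and dilations with a structuring element; a defect-highlighting top-hat residue built from them is sketched below. The SDNR/MSDF arithmetic is an implementation detail of the hardware and is not reproduced; the structuring-element size and the synthetic defect are assumptions.

```python
# Functional sketch of the gray-scale morphology that the systolic hardware
# implements (erosion, dilation, and a white top-hat residue that flags small
# bright defects). Numerically these are the usual min/max filters.
import numpy as np
from scipy import ndimage

def top_hat(image: np.ndarray, size: int = 5) -> np.ndarray:
    """White top-hat: image minus its morphological opening."""
    opened = ndimage.grey_dilation(ndimage.grey_erosion(image, size=(size, size)),
                                   size=(size, size))
    return image - opened

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    board = rng.normal(100, 3, (256, 256))     # stand-in for a PCB image
    board[100:103, 100:103] += 50              # small bright defect
    residue = top_hat(board)
    peak = int(residue.argmax())
    print(float(residue.max()), peak // 256, peak % 256)  # defect location stands out
```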

  13. Cortical Merging in S1 as a Substrate for Tactile Input Grouping

    PubMed Central

    Zennou-Azogui, Yoh’I; Xerri, Christian

    2018-01-01

    Abstract Perception is a reconstruction process guided by rules based on knowledge about the world. Little is known about the neural implementation of the rules of object formation in the tactile sensory system. When two close tactile stimuli are delivered simultaneously on the skin, subjects feel a unique sensation, spatially centered between the two stimuli. Voltage-sensitive dye imaging (VSDi) and electrophysiological recordings [local field potentials (LFPs) and single units] were used to extract the cortical representation of two-point tactile stimuli in the primary somatosensory cortex of anesthetized Long-Evans rats. Although layer 4 LFP responses to brief costimulation of the distal region of two digits resembled the sum of individual responses, approximately one-third of single units demonstrated merging-compatible changes. In contrast to previous intrinsic optical imaging studies, VSD activations reflecting layer 2/3 activity were centered between the representations of the digits stimulated alone. This merging was found for every tested distance between the stimulated digits. We discuss this laminar difference as evidence that merging occurs through a buildup stream and depends on the superposition of inputs, which increases with successive stages of sensory processing. These findings show that layers 2/3 are involved in the grouping of sensory inputs. This process that could be inscribed in the cortical computing routine and network organization is likely to promote object formation and implement perception rules. PMID:29354679

  14. Single-chip microcomputer for image processing in the photonic measuring system

    NASA Astrophysics Data System (ADS)

    Smoleva, Olga S.; Ljul, Natalia Y.

    2002-04-01

The non-contact measuring system has been designed for rail-track parameter control on the Moscow Metro. It detects several significant parameters: rail-track width, rail-track height, gage, rail-slums, crosslevel, pickets, and car speed. The system consists of three subsystems: a non-contact system for rail-track width, height, and gage inspection, a non-contact system for rail-slums inspection, and a subsystem for crosslevel, speed, and picket detection. Data from the subsystems are transferred to a pre-processing unit. In order to process the data received from the subsystems, the single-chip signal processor ADSP-2185 is used because it provides the required processing speed. After the data are processed, they are sent to a PC, which processes them further and outputs them in readable form.

  15. Disaster-hardened imaging POD for PACS

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice; Frost, Meryll

    2005-04-01

    Purpose: To provide imaging capabilities in the event of a natural or man-made disaster with infrastructure failure. Method: After the events of 9/11, many people questioned their ability to keep critical services operational in the face of massive infrastructure failure. Hospitals increased their backup and recovery power, made plans for emergency water and food, and operated with heightened alert awareness and more frequent disaster drills. In a film-based radiology department, if a portable X-ray unit, a CT unit, an ultrasound unit, and a film processor could be operated on emergency power, a limited but effective number of studies could be performed. In a digital department, however, there is a reliance on the network infrastructure to deliver images to viewing locations. The system developed for our institution uses several imaging PODs, a name we chose because it implied to us a safe, contained environment. Each POD is a stand-alone, emergency-powered network capable of generating images and displaying them in the POD or printing them to a DICOM printer. Each POD is on both the standard and the emergency power systems. All the vendor equipment that produces images is on a private, stand-alone network controlled by either a simple or a managed switch. Included in each POD is a dry-process DICOM printer that is rarely used during normal operations and a display workstation. One node on the private network is a PACS application processor (AP) with two network interface cards, one for the private network and one for the standard PACS network. During ordinary daily operations, all acquired images pass through this AP and are routed to the PACS archives, web servers, and workstations. If the power and network to much of the hospital were to fail, however, the stand-alone POD could still function: images are routed to the AP and, although they cannot be forwarded to the main network, they can be routed to the printer and display in the POD. They are also stored on the AP so that normal routing can continue when the infrastructure is restored. Results: The imaging PODs have been tested in disaster exercises in which the infrastructure was intentionally removed, and they worked as designed, producing images with equipment the technologists are comfortable using and with very few emergency switch-over tasks. To date, we have not had to use them in a real-life scenario and we hope we never do, but we feel we have a reasonable level of emergency imaging capability if we ever need it. Conclusions: Our testing indicates our PODs are a viable way to continue medical imaging in the face of an emergency with a major part of our network and electrical infrastructure destroyed.

  16. An FPGA-Based Rapid Wheezing Detection System

    PubMed Central

    Lin, Bor-Shing; Yen, Tian-Shiue

    2014-01-01

Wheezing is often treated as a crucial indicator in the diagnosis of obstructive pulmonary diseases. A rapid wheezing detection system may help physicians to monitor patients over the long term. In this study, a portable wheezing detection system based on a field-programmable gate array (FPGA) is proposed. This system accelerates wheezing detection and can be used either as a single-process system or as an integrated part of another biomedical signal detection system. The system segments sound signals into 2-second units. A short-time Fourier transform was used to determine the relationship between the time and frequency components of wheezing sound data. A spectrogram was processed using 2D bilateral filtering, edge detection, multithreshold image segmentation, morphological image processing, and image labeling, to extract wheezing features according to computerized respiratory sound analysis (CORSA) standards. These features were then used to train the support vector machine (SVM) and build the classification models. The trained model was used to analyze sound data to detect wheezing. The system runs on a Xilinx Virtex-6 FPGA ML605 platform. The experimental results revealed that the system offered excellent wheezing recognition performance (0.912). The detection process runs at a clock frequency of 51.97 MHz and is able to perform rapid wheezing classification. PMID:24481034
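
    The front end of such a pipeline (2-second segments, a short-time Fourier transform, and a crude narrowband-ridge feature) can be sketched in a few lines. The CORSA feature set, the bilateral filtering and morphological steps, the SVM, and the FPGA implementation are not reproduced; the sampling rate, band limits, and peak-to-mean threshold below are assumptions.

```python
# Minimal front-end sketch: 2-second segments -> spectrogram -> a crude
# "continuous narrowband ridge" score. Thresholds and band limits are assumptions.
import numpy as np
from scipy.signal import spectrogram

FS = 8000                                      # assumed sampling rate, Hz

def wheeze_score(segment: np.ndarray, fs: int = FS) -> float:
    """Fraction of time frames containing a narrowband peak in 100-2500 Hz."""
    f, t, sxx = spectrogram(segment, fs=fs, nperseg=256, noverlap=128)
    band = (f >= 100) & (f <= 2500)
    sxx = sxx[band]
    peaky = sxx.max(axis=0) > 10 * sxx.mean(axis=0)   # narrowband energy per frame
    return float(peaky.mean())

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1 / FS)              # one 2-second unit
    wheeze = np.sin(2 * np.pi * 400 * t) + 0.2 * np.random.default_rng(7).normal(size=t.size)
    noise = np.random.default_rng(8).normal(size=t.size)
    print(wheeze_score(wheeze), wheeze_score(noise))   # wheeze score >> noise score
```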

  17. Evaluation of area strain response of dielectric elastomer actuator using image processing technique

    NASA Astrophysics Data System (ADS)

    Sahu, Raj K.; Sudarshan, Koyya; Patra, Karali; Bhaumik, Shovan

    2014-03-01

Dielectric elastomer actuators (DEAs) are a kind of soft actuator that can produce significantly large electric-field-induced actuation strain and may serve as a basic unit of artificial muscles and robotic elements. Understanding strain development on a pre-stretched sample at different regimes of the electric field is essential for potential applications. In this paper, we report ongoing work on the determination of area strain using a digital camera and an image processing technique. The setup, developed in-house, consists of a low-cost digital camera, data acquisition, and an image processing algorithm. Samples have been prepared from biaxially stretched acrylic tape supported between two cardboard frames. Carbon grease has been pasted on both sides of the sample to act as compliant electrodes during the large electric-field-induced deformation. Images have been grabbed before and after the application of high voltage. From the incremental image area, strain has been calculated as a function of applied voltage on a pre-stretched dielectric elastomer (DE) sample. Area strain has been plotted against the applied voltage for different pre-stretched samples. Our study shows that the area strain exhibits a nonlinear relationship with applied voltage. For the same voltage, higher area strain is generated on a sample with a higher pre-stretch value. Our characterization also matches well with previously published results obtained with a costly video extensometer. The study may be helpful for designers fabricating biaxially pre-stretched planar actuators from similar kinds of materials.
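
    The area-strain computation itself reduces to segmenting the dark carbon-grease electrode in the images taken before and after the voltage is applied, counting pixels, and taking the relative area change. A minimal sketch follows; the in-house algorithm's threshold choice and any optical calibration are not known here and are treated as assumptions.

```python
# Area-strain estimate from two images of the carbon-grease electrode:
# threshold the dark electrode, count pixels, take the relative area change.
import numpy as np

def electrode_area_px(gray: np.ndarray, threshold: float = 0.4) -> int:
    """Pixels darker than `threshold` are counted as electrode (assumption)."""
    return int((gray < threshold).sum())

def area_strain(gray_0V: np.ndarray, gray_V: np.ndarray) -> float:
    a0 = electrode_area_px(gray_0V)
    av = electrode_area_px(gray_V)
    return (av - a0) / a0                      # e.g. 0.12 means 12 % area strain

if __name__ == "__main__":
    before = np.ones((200, 200)); before[50:150, 50:150] = 0.1   # 100x100 px electrode
    after = np.ones((200, 200)); after[45:155, 45:155] = 0.1     # expanded to 110x110 px
    print(round(area_strain(before, after), 3))                  # ~0.21
```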

  18. Modeling the functional repair of nervous tissue in spinal cord injury

    NASA Astrophysics Data System (ADS)

    Mantila, Sara M.; Camp, Jon J.; Krych, Aaron J.; Robb, Richard A.

    2004-05-01

Functional repair of traumatic spinal cord injury (SCI) is one of the most challenging goals in modern medicine. The annual incidence of SCI in the United States is approximately 11,000 new cases. The prevalence of people in the U.S. currently living with SCI is approximately 200,000. Exploring and understanding nerve regeneration in the central nervous system (CNS) is a critical first step in attempting to reverse the devastating consequences of SCI. At Mayo Clinic, a preliminary study of implants in the transected rat spinal cord model demonstrates potential for promoting axon regeneration. In collaborative research between neuroscientists and bioengineers, this procedure holds promise for solving two critical aspects of axon repair: providing a resorbable structural scaffold to direct focused axon repair, and delivering relevant signaling molecules necessary to facilitate regeneration. In our preliminary study, regeneration in the rat's spinal cord was modeled in three dimensions utilizing an image processing software system developed in the Biomedical Imaging Resource at Mayo Clinic. Advanced methods for image registration, segmentation, and rendering were used. The raw images were collected at three different magnifications. After image processing, the individual channels in the scaffold, axon bundles, and macrophages could be identified. Several axon bundles could be visualized and traced through the entire volume, suggesting axonal growth throughout the length of the scaffold. Such information could potentially allow researchers and physicians to better understand and improve the nerve regeneration process for individuals with SCI.

  19. Satellite Imagery Production and Processing Using Apache Hadoop

    NASA Astrophysics Data System (ADS)

    Hill, D. V.; Werpy, J.

    2011-12-01

    The United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center Land Science Research and Development (LSRD) project has devised a method to fulfill its processing needs for Essential Climate Variable (ECV) production from the Landsat archive using Apache Hadoop. Apache Hadoop is the distributed processing technology at the heart of many large-scale, processing solutions implemented at well-known companies such as Yahoo, Amazon, and Facebook. It is a proven framework and can be used to process petabytes of data on thousands of processors concurrently. It is a natural fit for producing satellite imagery and requires only a few simple modifications to serve the needs of science data processing. This presentation provides an invaluable learning opportunity and should be heard by anyone doing large scale image processing today. The session will cover a description of the problem space, evaluation of alternatives, feature set overview, configuration of Hadoop for satellite image processing, real-world performance results, tuning recommendations and finally challenges and ongoing activities. It will also present how the LSRD project built a 102 core processing cluster with no financial hardware investment and achieved ten times the initial daily throughput requirements with a full time staff of only one engineer. Satellite Imagery Production and Processing Using Apache Hadoop is presented by David V. Hill, Principal Software Architect for USGS LSRD.

  20. An embedded point-of-care malaria screening device for low-resource regions (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Das, Sayantan; Mandal, Subhamoy; Das, Debnath; Malviya, Richa; Garud, Hrushikesh T.; Ray, Ajoy K.

    2016-03-01

In this article we propose a point-of-care screening device for the detection and identification of the malaria parasites Plasmodium vivax, Plasmodium malariae, Plasmodium ovale, and Plasmodium falciparum within a time frame of 15-20 minutes. Our device can provide 97-98% sensitivity for each species because we use traditional staining methods for detecting the parasites. In addition, as we also quantify the parasites, it is possible to provide an accurate estimate of the malarial stage of the patient. The image processing approach increases the total number of samples screened by reducing interventions of trained pathologists. This helps in reducing the delays in the screening process arising from the increased number of potential cases due to seasonal and local variations, and in turn reduces the mortality rate through faster diagnosis and fewer false negative detections (i.e., increased sensitivity). The system can also be integrated with a telemedicine platform to obtain inputs from medical practitioners at tertiary healthcare units for diagnostic decision making. Through this paper, we present the functional prototype of this device containing all the integrated parts. The prototype incorporates image acquisition, image processing, storage, multimedia transmission, and a reporting environment for a low-cost PDA device. It is a portable device capable of scanning slides. The acquired image is preprocessed and processed to obtain the desired output. The device is capable of transmitting and storing pathological information to a database placed in a distant pathology center for further consultation.

  1. A Review of High-Performance Computational Strategies for Modeling and Imaging of Electromagnetic Induction Data

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.

    2014-01-01

Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. To treat such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, ranging from graphics processing units to multicore CPUs with a fast interconnect, along with effective parallel solvers and associated solver libraries for inductive EM modeling and imaging.

  2. GPU real-time processing in NA62 trigger system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-01-01

A commercial Graphics Processing Unit (GPU) is used to build a fast Level 0 (L0) trigger system tested parasitically with the TDAQ (Trigger and Data Acquisition systems) of the NA62 experiment at CERN. In particular, the parallel computing power of the GPU is exploited to perform real-time ring fitting in the Ring Imaging CHerenkov (RICH) detector. Direct GPU communication using an FPGA-based board has been used to reduce the data transmission latency. The performance of the system for multi-ring reconstruction obtained during the NA62 physics run will be presented.
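
    The per-ring computation that the GPU parallelizes over many PMT hit sets can be illustrated with an algebraic (Kasa-style) least-squares circle fit; hit-to-ring association and the CUDA kernel organization used in the trigger are not shown.

```python
# Algebraic (Kasa) least-squares circle fit, the per-ring computation that a
# GPU would run over many candidate hit sets in parallel. Hit-to-ring
# association and the actual CUDA kernels are not reproduced here.
import numpy as np

def fit_ring(x: np.ndarray, y: np.ndarray):
    """Return (xc, yc, radius) of the best-fit circle through the hits."""
    # Linearized model: x^2 + y^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + xc ** 2 + yc ** 2)
    return xc, yc, radius

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    theta = rng.uniform(0, 2 * np.pi, 20)              # 20 Cherenkov photon hits
    x = 3.0 + 1.5 * np.cos(theta) + rng.normal(0, 0.02, 20)
    y = -1.0 + 1.5 * np.sin(theta) + rng.normal(0, 0.02, 20)
    print([round(float(v), 3) for v in fit_ring(x, y)])  # ~ (3.0, -1.0, 1.5)
```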

  3. Correlation of morphological and molecular parameters for colon cancer

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Roney, Celeste A.; Li, Qian; Jiang, James; Cable, Alex; Summers, Ronald M.; Chen, Yu

    2010-02-01

Colorectal cancer (CRC) is the second leading cause of cancer death in the United States. There is great interest in studying the relationship among the microstructures and molecular processes of colorectal cancer during its progression at early stages. In this study, we use our multi-modality optical system, which obtains co-registered optical coherence tomography (OCT) and fluorescence molecular imaging (FMI) images simultaneously, to study CRC. The overexpressed carbohydrate α-L-fucose on the surfaces of polyps facilitates the binding of adenomatous polyps with UEA-1 and is used as a biomarker. The tissue scattering coefficient derived from the OCT axial scan is used as a quantitative measure of structural information. Both the structural images from OCT and the molecular images show spatial heterogeneity of tumors. Correlations between these values were analyzed and demonstrate that scattering coefficients are positively correlated with FMI signals. In UEA-1-conjugated samples (8 polyps and 8 control regions), the correlation coefficient ranged from 0.45 to 0.99. These findings indicate that the microstructure of polyps changes gradually during cancer progression and that the change is well correlated with certain molecular processes. Our study demonstrates that multi-parametric imaging is able to simultaneously detect morphological and molecular information and can enable spatially and temporally correlated studies of structure-function relationships during tumor progression.

  4. Real-time speckle reduction in optical coherence tomography using the dual window method.

    PubMed

    Zhao, Yang; Chu, Kengyeh K; Eldridge, Will J; Jelly, Evan T; Crose, Michael; Wax, Adam

    2018-02-01

    Speckle is an intrinsic noise of interferometric signals which reduces contrast and degrades the quality of optical coherence tomography (OCT) images. Here, we present a frequency compounding speckle reduction technique using the dual window (DW) method. Using the DW method, speckle noise is reduced without the need to acquire multiple frames. A ~25% improvement in the contrast-to-noise ratio (CNR) was achieved using the DW speckle reduction method with only minimal loss (~17%) in axial resolution. We also demonstrate that real-time speckle reduction can be achieved at a B-scan rate of ~21 frames per second using a graphic processing unit (GPU). The DW speckle reduction technique can work on any existing OCT instrument without further system modification or extra components. This makes it applicable both in real-time imaging systems and during post-processing.
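
    For background, conventional frequency compounding (splitting the spectral interferogram into overlapping sub-bands, reconstructing each, and averaging magnitudes) is sketched below for one A-line. The DW method itself differs, using two differently sized windows to avoid the usual resolution penalty, so this is only an illustration of the compounding idea, with the band count and overlap chosen arbitrarily.

```python
# Generic frequency-compounding sketch for one OCT A-line: split the spectrum
# into overlapping sub-bands, reconstruct each, and average magnitudes
# incoherently. The dual-window method itself is different; this only shows
# the compounding idea that it improves upon.
import numpy as np

def frequency_compound(spectrum: np.ndarray, n_bands: int = 4, overlap: float = 0.5):
    n = spectrum.size
    band_len = int(n / (1 + (n_bands - 1) * (1 - overlap)))
    step = max(1, int(band_len * (1 - overlap)))
    window = np.hanning(band_len)
    acc = np.zeros(n)
    for i in range(n_bands):
        start = i * step
        seg = spectrum[start:start + band_len]
        sub = np.zeros(n, dtype=complex)
        sub[start:start + seg.size] = seg * window[:seg.size]
        acc += np.abs(np.fft.ifft(sub))        # incoherent (magnitude) averaging
    return acc / n_bands

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    spectrum = rng.normal(size=1024) + 1j * rng.normal(size=1024)  # stand-in interferogram
    aline = frequency_compound(spectrum)
    print(aline.shape, float(aline.mean()))
```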

  5. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  6. System Description and First Application of an FPGA-Based Simultaneous Multi-Frequency Electrical Impedance Tomography

    PubMed Central

    Aguiar Santos, Susana; Robens, Anne; Boehm, Anna; Leonhardt, Steffen; Teichmann, Daniel

    2016-01-01

A new prototype of a multi-frequency electrical impedance tomography system is presented. The system uses a field-programmable gate array as a main controller and is configured to measure at different frequencies simultaneously through a composite waveform. Both real and imaginary components of the data are computed for each frequency and sent to the personal computer over an ethernet connection, where both time-difference and frequency-difference images are reconstructed and visualized. The system has been tested for both time-difference and frequency-difference imaging for diverse sets of frequency pairs in a resistive/capacitive test unit and in self-experiments. To our knowledge, this is the first work that shows preliminary frequency-difference images of in-vivo experiments. Results of time-difference imaging were compared with simulation results and showed that the new prototype performs well at all frequencies in the tested range of 60 kHz–960 kHz. For frequency-difference images, further development of algorithms and an improved normalization process is required to correctly reconstruct and interpret the resulting images. PMID:27463715
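
    The composite-waveform idea can be sketched in software: several frequencies are driven at once and the real and imaginary components at each frequency are recovered by a single-bin DFT (a digital lock-in). The frequencies, amplitudes, and sampling rate below are illustrative assumptions, not the prototype's parameters, and the FPGA implementation is not reproduced.

```python
# Software sketch of simultaneous multi-frequency excitation: drive several
# tones at once and recover the complex component at each tone by single-bin
# DFT (digital lock-in). All numeric parameters are assumptions.
import numpy as np

FS = 10_000_000                                # 10 MS/s (assumed)
FREQS = [62_500, 125_000, 250_000, 500_000]    # Hz, all dividing FS evenly

def composite(n_samples: int) -> np.ndarray:
    t = np.arange(n_samples) / FS
    return sum(np.sin(2 * np.pi * f * t) for f in FREQS)

def demodulate(signal: np.ndarray):
    """Return {f: complex amplitude of the tone at frequency f}."""
    n = signal.size
    t = np.arange(n) / FS
    out = {}
    for f in FREQS:
        ref = np.exp(-2j * np.pi * f * t)
        out[f] = 2.0 * np.mean(signal * ref)   # complex amplitude at f
    return out

if __name__ == "__main__":
    n = FS // 12_500                           # integer number of periods of every tone
    measured = composite(n) * 0.8              # pretend the load attenuated the drive
    for f, z in demodulate(measured).items():
        print(f, round(z.real, 3), round(z.imag, 3))
```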

  7. Facilitating medical information search using Google Glass connected to a content-based medical image retrieval system.

    PubMed

    Widmer, Antoine; Schaer, Roger; Markonis, Dimitrios; Muller, Henning

    2014-01-01

    Wearable computing devices are starting to change the way users interact with computers and the Internet. Among them, Google Glass includes a small screen located in front of the right eye, a camera filming in front of the user and a small computing unit. Google Glass has the advantage to provide online services while allowing the user to perform tasks with his/her hands. These augmented glasses uncover many useful applications, also in the medical domain. For example, Google Glass can easily provide video conference between medical doctors to discuss a live case. Using these glasses can also facilitate medical information search by allowing the access of a large amount of annotated medical cases during a consultation in a non-disruptive fashion for medical staff. In this paper, we developed a Google Glass application able to take a photo and send it to a medical image retrieval system along with keywords in order to retrieve similar cases. As a preliminary assessment of the usability of the application, we tested the application under three conditions (images of the skin; printed CT scans and MRI images; and CT and MRI images acquired directly from an LCD screen) to explore whether using Google Glass affects the accuracy of the results returned by the medical image retrieval system. The preliminary results show that despite minor problems due to the relative stability of the Google Glass, images can be sent to and processed by the medical image retrieval system and similar images are returned to the user, potentially helping in the decision making process.

  8. A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK)

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Jenkins, Brad; Sze, Raymond W.; Yaniv, Ziv

    2014-03-01

    The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low cost, high fidelity, fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images, corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable machine specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom's pose. The camera is mounted on the fluoroscope's intensifier and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition the operator moves the fluoroscope as in normal operation mode. Control of zoom, collimation and image save is done using a keypad mounted alongside the device's control panel. Implementation is based on the Image-Guided Surgery Toolkit (IGSTK), and the use of the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
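
    A minimal, parallel-beam version of DRR generation (summing attenuation through the CT volume and applying Beer-Lambert attenuation) is sketched below; the simulator itself performs perspective ray-casting with machine-specific imaging parameters on the GPU. The HU-to-attenuation conversion and voxel size are rough assumptions.

```python
# Minimal DRR sketch: parallel-beam projection of a CT volume along one axis
# with Beer-Lambert attenuation. The simulator uses perspective ray-casting
# with machine-specific parameters on the GPU; the constants here are rough.
import numpy as np

MU_WATER = 0.02                                # 1/mm at ~60 keV (approximate)

def drr_parallel(ct_hu: np.ndarray, voxel_mm: float = 1.0, axis: int = 0) -> np.ndarray:
    """Return a 2-D simulated radiograph from a 3-D CT volume in Hounsfield units."""
    mu = MU_WATER * (1.0 + ct_hu / 1000.0)     # HU -> linear attenuation coefficient
    mu = np.clip(mu, 0.0, None)
    path_integral = mu.sum(axis=axis) * voxel_mm
    return np.exp(-path_integral)              # transmitted intensity in [0, 1]

if __name__ == "__main__":
    ct = np.full((64, 128, 128), -1000.0)      # air
    ct[:, 40:90, 40:90] = 0.0                  # water-equivalent block
    ct[:, 60:70, 60:70] = 1000.0               # bone-like insert
    image = drr_parallel(ct, voxel_mm=1.0, axis=0)
    print(image.shape, float(image.min()), float(image.max()))
```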

  9. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  10. Distributed parameterization of complex terrain

    NASA Astrophysics Data System (ADS)

    Band, Lawrence E.

    1991-03-01

This paper addresses the incorporation of high resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter intensive nature of current land surface models in ACM limit the number of independent model runs and parameterizations that are feasible to accomplish for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear the parameterization process for this approach must be automated such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between unit variance while minimizing within unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.

  11. Lithosphere-asthenosphere interaction beneath the western United States from the joint inversion of body-wave traveltimes and surface-wave phase velocities

    USGS Publications Warehouse

    Obrebski, M.; Allen, R.M.; Pollitz, F.; Hung, S.-H.

    2011-01-01

The relation between the complex geological history of the western margin of the North American plate and the processes in the mantle is still not fully documented and understood. Several pre-USArray local seismic studies showed how the characteristics of key geological features such as the Colorado Plateau and the Yellowstone Snake River Plains are linked to their deep mantle structure. Recent body-wave models based on the deployment of the high density, large aperture USArray have provided far more details on the mantle structure, while surface-wave tomography (ballistic waves and noise correlations) informs us about the shallow structure. Here we combine constraints from these two data sets to image and study the link between the geology of the western United States, the shallow structure of the Earth and the convective processes in the mantle. Our multiphase DNA10-S model provides new constraints on the extent of the Archean lithosphere, imaged as a large, deeply rooted fast body that encompasses the stable Great Plains and a large portion of the Northern and Central Rocky Mountains. Widespread slow anomalies are found in the lower crust and upper mantle, suggesting that low-density rocks isostatically sustain part of the high topography of the western United States. The Yellowstone anomaly is imaged as a large slow body rising from the lower mantle, intruding the overlying lithosphere and locally controlling the seismicity and the topography. The large E-W extent of the USArray used in this study allows imaging the 'slab graveyard', a sequence of Farallon fragments aligned with the currently subducting Juan de Fuca Slab, north of the Mendocino Triple Junction. The lithospheric root of the Colorado Plateau has apparently been weakened and partly removed through dripping. The distribution of the slower regions around the Colorado Plateau and other rigid blocks follows closely the trend of Cenozoic volcanic fields and ancient lithospheric sutures, suggesting that the latter exert a control on the locus of magmato-tectonic activity today. The DNA velocity models are available for download and slicing at http://dna.berkeley.edu. © 2011 The Authors, Geophysical Journal International © 2011 RAS.

  12. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field that combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at the Visual Sensor Nodes (VSN) and the communication from the VSN to the server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce the data volume efficiently and hence are effective in reducing the communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images with a computational complexity suited to the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in a WVSN.
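
    The six bi-level compression methods compared in the paper are not named in this abstract. As a hedged illustration of why bi-level images compress well at very low computational cost on a sensor node, the sketch below run-length encodes a binary image; it is an example of the general technique class, not necessarily one of the methods evaluated.

    # Sketch: run-length encoding of a bi-level (0/1) image, illustrating the
    # trade-off between compressed size and computational cost on a sensor node.
    import numpy as np

    def rle_encode(bilevel):
        """Encode a 2-D 0/1 array as (first_value, run_lengths)."""
        flat = np.asarray(bilevel, dtype=np.uint8).ravel()
        change = np.flatnonzero(np.diff(flat)) + 1          # run boundaries
        boundaries = np.concatenate(([0], change, [flat.size]))
        return int(flat[0]), np.diff(boundaries)

    def rle_decode(first_value, runs, shape):
        values = (first_value + np.arange(len(runs))) % 2   # runs alternate 0/1
        return np.repeat(values, runs).reshape(shape).astype(np.uint8)

    if __name__ == "__main__":
        img = (np.random.default_rng(1).random((64, 64)) > 0.8).astype(np.uint8)
        first, runs = rle_encode(img)
        assert np.array_equal(rle_decode(first, runs, img.shape), img)
        print("pixels:", img.size, "runs stored:", runs.size)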

  13. Detector motion method to increase spatial resolution in photon-counting detectors

    NASA Astrophysics Data System (ADS)

    Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong

    2017-03-01

    Medical imaging requires high spatial resolution to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon-counting image is determined by the pixel size; therefore, the smaller the pixel size, the higher the spatial resolution that can be obtained. However, reducing the pixel size requires a detector redesign, and an expensive fine fabrication process is required to integrate a signal processing unit within the reduced pixel size. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large-pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at a maximum X-ray tube voltage of 80 kVp. Applying the proposed method to a 110-μm-pixel detector achieved a spatial resolution similar to that of a 55-μm-pixel image, with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels suffer severely from charge sharing as pixel size is reduced.
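
    A minimal sketch of the idea behind a detector motion method follows, assuming four acquisitions taken at half-pixel offsets (no shift, half pixel in x, half pixel in y, and both); the shift pattern and function name are illustrative assumptions, and a practical reconstruction would additionally have to account for the large-pixel aperture (e.g., by deconvolution or weighting).

    # Sketch: interleave four half-pixel-shifted acquisitions from a large-pixel
    # detector onto a grid with twice the sampling density in each direction.
    import numpy as np

    def interleave_shifted(i00, i10, i01, i11):
        """i00: no shift, i10: +x half pixel, i01: +y half pixel, i11: both."""
        h, w = i00.shape
        fine = np.empty((2 * h, 2 * w), dtype=np.float64)
        fine[0::2, 0::2] = i00    # original sampling positions
        fine[0::2, 1::2] = i10    # columns offset by half a pixel
        fine[1::2, 0::2] = i01    # rows offset by half a pixel
        fine[1::2, 1::2] = i11    # both offsets
        return fine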

  14. On-board landmark navigation and attitude reference parallel processor system

    NASA Technical Reports Server (NTRS)

    Gilbert, L. E.; Mahajan, D. T.

    1978-01-01

    An approach to autonomous navigation and attitude reference for earth-observing spacecraft is described, along with a landmark identification technique based on a sequential similarity detection algorithm (SSDA). Laboratory experiments undertaken to determine whether better-than-one-pixel registration accuracy can be achieved, consistent with onboard processor timing and capacity constraints, are included. The SSDA is implemented using a multi-microprocessor system including synchronization logic and a chip library. The data are processed in parallel stages, effectively reducing the time required to match the small known image within the larger image seen by the onboard imaging system. Shared memory is incorporated in the system to help communicate intermediate results among microprocessors. The functions include finding mean values and summing absolute differences over the image search area. The hardware is a low-power, compact unit suitable for onboard application, with the flexibility to accommodate different parameters depending upon the environment.
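
    A sequential similarity detection algorithm accumulates absolute differences between the template and each candidate window and abandons a candidate as soon as its running sum exceeds a threshold (or the best sum found so far). The serial sketch below, with an assumed threshold parameter, illustrates the computation that the multi-microprocessor system distributes across parallel stages.

    # Sketch: SSDA-style template matching with a sum of absolute differences
    # and early abandonment.  Threshold handling is an illustrative assumption.
    import numpy as np

    def ssda_match(search, template, threshold=np.inf):
        """Return ((row, col), sum) of the best match of `template` in `search`."""
        search = np.asarray(search, dtype=np.float64)
        template = np.asarray(template, dtype=np.float64)
        sh, sw = search.shape
        th, tw = template.shape
        best_pos, best_sum = None, np.inf
        for r in range(sh - th + 1):
            for c in range(sw - tw + 1):
                total = 0.0
                for i in range(th):
                    # Accumulate |difference| row by row and abandon this
                    # candidate once it can no longer beat the best so far.
                    total += np.abs(search[r + i, c:c + tw] - template[i]).sum()
                    if total > threshold or total >= best_sum:
                        break
                else:
                    best_sum, best_pos = total, (r, c)
        return best_pos, best_sum

    Consistent with the abstract's description, the candidate offsets (or rows of the accumulation) could be assigned to different microprocessors, with shared memory holding the intermediate partial sums.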

  15. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in the visual perception of facial attractiveness and age. However, there have been few studies quantitatively analyzing the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then to characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop the image evaluation methods. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, we characterized the skin color heterogeneity of Asian and Caucasian faces. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in both Asian and Caucasian skin. Furthermore, the heterogeneity indexes of hemoglobin were found to be significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for a better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
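
    The abstract does not give the spatial-frequency bands or threshold settings used, so the sketch below shows only the general shape of a spatial-frequency-plus-threshold heterogeneity measure: a difference-of-Gaussians band-pass applied to a chromophore index image (e.g., a melanin index map), followed by a threshold, with the above-threshold area fraction reported as a heterogeneity index. The filter scales and the factor k are illustrative assumptions.

    # Sketch: band-pass + threshold heterogeneity measure on a chromophore
    # index image.  Sigma values and k are illustrative assumptions only.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def heterogeneity_index(index_map, sigma_small=2.0, sigma_large=10.0, k=2.0):
        """Fraction of pixels whose band-pass response exceeds k standard deviations."""
        index_map = np.asarray(index_map, dtype=np.float64)
        # Difference of Gaussians isolates unevenness in a chosen size range.
        bandpass = (gaussian_filter(index_map, sigma_small)
                    - gaussian_filter(index_map, sigma_large))
        mask = np.abs(bandpass) > k * bandpass.std()
        return mask.mean()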

  16. Electro-optical imaging systems integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, R.

    1987-01-01

    Since the advent of high-resolution, high-data-rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and has often required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third-generation TLBR was designed and two units delivered to rapidly produce high-quality wet-process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a 'Scan FIX' capability, which corrects for scanner fault errors, and a 'Scan LOC' system, which provides complete phase-synchronism isolation between the scanner and the digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for reconnaissance/tactical applications.

  17. Developing a New North American Land Cover Product at 30m Resolution: Methods, Results and Future Plans

    NASA Astrophysics Data System (ADS)

    Homer, C.; Colditz, R. R.; Latifovic, R.; Llamas, R. M.; Pouliot, D.; Danielson, P.; Meneses, C.; Victoria, A.; Ressl, R.; Richardson, K.; Vulpescu, M.

    2017-12-01

    Land cover and land cover change information at regional and continental scales has become fundamental for studying and understanding the terrestrial environment. With recent advances in computer science and freely available image archives, continental land cover mapping has been advancing to higher-spatial-resolution products. The North American Land Change Monitoring System (NALCMS) remains the principal provider of seamless land cover maps of North America. Founded in 2006, this collaboration among the governments of Canada, Mexico and the United States has released two previous products based on 250m MODIS images: a 2005 land cover product and a 2005-2010 land cover change product. NALCMS has recently completed the next-generation North American land cover product, based on 30m Landsat images. This is the first 30m land cover product produced for the North American continent, providing 19 classes of seamless land cover. This presentation provides an overview of the country-specific image classification processes, describes the continental map production process, presents results for the North American continent and discusses future plans. NALCMS is coordinated by the Commission for Environmental Cooperation (CEC) and all products can be obtained at their website - www.cec.org.

  18. Visual based laser speckle pattern recognition method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Park, Kyeongtaek; Torbol, Marco

    2017-04-01

    This study performed system identification of a target structure by analyzing the laser speckle pattern recorded by a camera. The laser speckle pattern is generated by the diffuse reflection of the laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time, and the raw speckle image pixel data are fed to the graphics processing unit (GPU) in the system. The laser speckle contrast analysis (LASCA) algorithm computes the laser speckle contrast images and the laser speckle flow images. A k-means clustering algorithm is used to classify the pixels in each frame, and the clusters' centroids, which function as virtual sensors, track the displacement between frames in the time domain. The fast Fourier transform (FFT) and the frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large-scale computational capability of the GPU. The algorithm is written in Compute Unified Device Architecture (CUDA C), which allows the processing of speckle images in real time.
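
    The core LASCA quantity is the local speckle contrast K = sigma/mu computed over a sliding window; flow maps are then derived from K. The sketch below computes K with NumPy/SciPy as a CPU stand-in for the CUDA C implementation described above; the window size is an illustrative choice.

    # Sketch: laser speckle contrast K = std / mean over a sliding window,
    # the core quantity of LASCA.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(frame, window=7):
        frame = np.asarray(frame, dtype=np.float64)
        mean = uniform_filter(frame, size=window)
        mean_sq = uniform_filter(frame * frame, size=window)
        variance = np.maximum(mean_sq - mean * mean, 0.0)   # guard round-off
        return np.sqrt(variance) / np.maximum(mean, 1e-12)  # avoid divide-by-zero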

  19. Residential scene classification for gridded population sampling in developing countries using deep convolutional neural networks on satellite imagery.

    PubMed

    Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark

    2018-05-09

    Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks: it is labor intensive, prone to human error, and creates the need for simplifying assumptions during the calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model that predicts whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost. Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce the annotation burden with quality comparable to that of human analysts.
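
    A minimal sketch of a binary residential/nonresidential tile classifier follows; the architecture, tile size (64 × 64 RGB) and all hyperparameters are illustrative assumptions, not the model that produced the reported accuracies.

    # Sketch: small CNN that outputs a "residential" logit per aerial image tile.
    import torch
    import torch.nn as nn

    class TileClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # 64 -> 32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                              # 32 -> 16
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, 1)                # one logit per tile

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    if __name__ == "__main__":
        model = TileClassifier()
        tiles = torch.randn(8, 3, 64, 64)        # dummy batch of image tiles
        probs = torch.sigmoid(model(tiles))      # P(residential) per tile
        print(probs.shape)                       # torch.Size([8, 1])

    Training such a model with a binary cross-entropy loss against human-labeled tiles, then thresholding the predicted probability, would reproduce the residential/nonresidential decision used to exclude units from sampling.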

  20. Integrating research and clinical neuroimaging for the evaluation of traumatic brain injury recovery

    NASA Astrophysics Data System (ADS)

    Senseney, Justin; Ollinger, John; Graner, John; Lui, Wei; Oakes, Terry; Riedy, Gerard

    2015-03-01

    Advanced MRI research and other imaging modalities may serve as biomarkers for the evaluation of traumatic brain injury (TBI) recovery. However, these advanced modalities typically require off-line processing, which creates images that are incompatible with the radiologist viewing software sold commercially. AGFA Impax is an example of such a picture archiving and communication system (PACS) that is used by many radiology departments in the United States Military Health System. By taking advantage of Impax's use of the Digital Imaging and Communications in Medicine (DICOM) standard, we developed a system that allows advanced medical imaging to be incorporated into a clinical PACS. Radiology research can now be conducted using existing clinical imaging display platform resources in combination with image processing techniques that are only available outside of the clinical scanning environment. We extracted the spatial and identification elements of the DICOM standard that are necessary to allow research images to be incorporated into a clinical radiology system, and developed a tool that annotates research images with the proper tags. This allows for the evaluation of imaging representations of biological markers that may be useful in the evaluation of TBI and TBI recovery.
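
    A hedged sketch of the kind of tag-annotation tool described above, using pydicom: identification and spatial elements are copied from a clinical reference image onto a research-derived image so the PACS files it with the correct study and geometry. The specific tag list, function name and series description are assumptions for illustration, not the tool's actual implementation.

    # Sketch: copy DICOM identification and spatial elements from a clinical
    # reference onto a research image so a clinical PACS can file and display it.
    import pydicom
    from pydicom.uid import generate_uid

    IDENTIFICATION_TAGS = ["PatientID", "PatientName", "StudyInstanceUID",
                           "StudyDate", "StudyTime", "AccessionNumber"]
    SPATIAL_TAGS = ["FrameOfReferenceUID", "ImagePositionPatient",
                    "ImageOrientationPatient", "PixelSpacing", "SliceThickness"]

    def annotate_research_image(research_path, clinical_path, out_path,
                                series_description="Research-derived map"):
        research = pydicom.dcmread(research_path)
        clinical = pydicom.dcmread(clinical_path)
        for keyword in IDENTIFICATION_TAGS + SPATIAL_TAGS:
            if keyword in clinical:
                setattr(research, keyword, clinical.data_element(keyword).value)
        # Fresh series/instance UIDs keep the research images distinct in the PACS.
        research.SeriesInstanceUID = generate_uid()
        research.SOPInstanceUID = generate_uid()
        research.SeriesDescription = series_description
        research.save_as(out_path)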
