Sample records for additional image processing

  1. Automated detection using natural language processing of radiologists' recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
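
The detection task described above lends itself to a rule-based baseline. The sketch below is a minimal illustration of that idea in Python, with an invented phrase list; it is not the authors' validated three-cycle algorithm, whose lexicon and logic are far more developed.

```python
import re

# Illustrative phrase patterns; the published algorithm's actual lexicon
# and logic are more sophisticated (three iterative training cycles).
RAI_PATTERNS = [
    r"recommend(?:ed|s)?\s+(?:a\s+)?(?:follow[- ]?up|additional|repeat)\s+(?:imaging|ct|mri|ultrasound)",
    r"(?:follow[- ]?up|additional)\s+(?:imaging|ct|mri)\s+(?:is|may be|should be)\s+(?:recommended|considered|obtained)",
]

def flags_recommendation(report_text: str) -> bool:
    """Return True if a radiology report appears to contain a
    recommendation for additional imaging (RAI)."""
    text = report_text.lower()
    return any(re.search(p, text) for p in RAI_PATTERNS)

print(flags_recommendation(
    "8 mm lung nodule. Follow-up CT is recommended in 6 months."))  # True
```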

  2. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
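
The core claim, that a pointwise logarithm turns multiplicative, signal-dependent speckle into additive noise of constant variance, is easy to verify numerically. A minimal sketch, assuming gamma-distributed multi-look speckle:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(10.0, 200.0, 100_000)          # noise-free intensities
speckle = rng.gamma(shape=4.0, scale=0.25, size=signal.size)  # mean 1, 4-look
noisy = signal * speckle                             # multiplicative speckle

# After the pointwise log, the noise term separates additively and its
# variance no longer depends on the signal level.
log_noisy = np.log(noisy)                            # = log(signal) + log(speckle)
residual = log_noisy - np.log(signal)

for lo, hi in [(10, 50), (50, 120), (120, 200)]:
    mask = (signal >= lo) & (signal < hi)
    print(f"{lo:>3}-{hi:<3}: residual std = {residual[mask].std():.4f}")
# The printed standard deviations are nearly identical across brightness bands.
```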

  3. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  4. Radiology image orientation processing for workstation display

    NASA Astrophysics Data System (ADS)

    Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

    1998-06-01

    Radiology images are acquired electronically using phosphor plates that are read in Computed Radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and of abdomen images has been devised. In addition, the chest images are differentiated as front (AP or PA) or side (lateral). Using the processing scheme outlined, hospitals will improve the efficiency of quality assurance (QA) technicians who orient images and prepare them for presentation to the radiologists.

  5. Automation of Cassini Support Imaging Uplink Command Development

    NASA Technical Reports Server (NTRS)

    Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert

    2010-01-01

    "Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.

  6. SU-F-P-06: Moving From Computed Radiography to Digital Radiography: A Collaborative Approach to Improve Image Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, D; Mlady, G; Selwyn, R

    Purpose: To bring together radiologists, technologists, and physicists to utilize post-processing techniques in digital radiography (DR) in order to optimize image acquisition and improve image quality. Methods: Sub-optimal images acquired on a new General Electric (GE) DR system were flagged for follow-up by radiologists and reviewed by technologists and medical physicists. Various exam types from adult musculoskeletal (n=35), adult chest (n=4), and pediatric (n=7) were chosen for review. 673 total images were reviewed. These images were processed using five customized algorithms provided by GE. An image score sheet was created allowing the radiologist to assign a numeric score to each of the processed images, allowing objective comparison to the original images. Each image was scored based on seven properties: 1) overall image look, 2) soft tissue contrast, 3) high contrast, 4) latitude, 5) tissue equalization, 6) edge enhancement, 7) visualization of structures. Additional space allowed for comments not captured in the scoring categories. Radiologists scored the images from 1 to 10, with 1 being non-diagnostic quality and 10 being superior diagnostic quality. Scores for each custom algorithm for each image set were summed. The algorithm with the highest score for each image set was then set as the default processing. Results: Images placed into the PACS “QC folder” for image processing reasons decreased. Overall feedback from radiologists was that image quality for these studies had improved. All default processing for these image types was changed to the new algorithm. Conclusion: This work is an example of the collaboration between radiologists, technologists, and physicists at the University of New Mexico to add value to the radiology department. The significant amount of work required to prepare the processing algorithms and to reprocess and score the images was eagerly taken on by all team members in order to produce better quality images and improve patient care.

  7. Pixel Perfect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.

    2005-09-01

    Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
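
The registration step described above (a cubic warp evaluated per pixel, with bilinear sampling) can be sketched in software. The sketch below is a hedged stand-in: the coefficient layout and the scipy-based evaluation are illustrative assumptions, whereas the actual system evaluates the warp by forward differencing inside the FPGA.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_cubic(image, coeff_x, coeff_y):
    """Warp `image` with a bivariate cubic polynomial and bilinear sampling.

    coeff_x, coeff_y: (10,) arrays of polynomial coefficients for the terms
    [1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3] mapping output pixel
    coordinates to source coordinates (a software stand-in for the FPGA's
    forward-differencing evaluation).
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    terms = np.stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                      x**3, x*x*y, x*y*y, y**3])
    src_x = np.tensordot(coeff_x, terms, axes=1)
    src_y = np.tensordot(coeff_y, terms, axes=1)
    # order=1 selects bilinear interpolation, which suppresses pixelation.
    return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')

# Identity warp plus a small sub-pixel shift as a smoke test.
img = np.random.default_rng(1).random((64, 64))
cx = np.zeros(10); cx[1] = 1.0; cx[0] = 0.5   # x' = x + 0.5
cy = np.zeros(10); cy[2] = 1.0                # y' = y
shifted = warp_cubic(img, cx, cy)
```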

  8. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  9. [Imaging center - optimization of the imaging process].

    PubMed

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a large part of the success, but also of the costs, of treatment. In routine work, an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will become more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new organizational structures (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be shifted from the gratification of performed exams to the gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Assessing the use of an infrared spectrum hyperpixel array imager to measure temperature during additive and subtractive manufacturing

    NASA Astrophysics Data System (ADS)

    Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn

    2016-05-01

    Accurate non-contact temperature measurement is important to optimize manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Performing accurate single-wavelength thermography suffers numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues involved such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm that converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or may follow a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, as motion blur occurs in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, producing a motion-blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal for developing and validating simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of a measured target.
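
The best-fit emission curve idea can be illustrated with a generic least-squares fit of a greybody model to a measured spectrum. This is a sketch with synthetic data and scipy's stock solver, not the authors' customized multistep non-linear equation solver:

```python
import numpy as np
from scipy.optimize import curve_fit

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m), temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def greybody(lam, T, eps):
    # Greybody assumption: emissivity is constant over the measured band.
    return eps * planck(lam, T)

# Synthetic "measured" spectrum over 1.0-2.5 um with 2% multiplicative noise.
lam = np.linspace(1.0e-6, 2.5e-6, 64)
rng = np.random.default_rng(2)
measured = greybody(lam, 1700.0, 0.35) * (1 + 0.02 * rng.standard_normal(lam.size))

(T_fit, eps_fit), _ = curve_fit(greybody, lam, measured, p0=(1500.0, 0.5))
print(f"T = {T_fit:.0f} K, emissivity = {eps_fit:.3f}")
```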

  11. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

  12. Generalized Newton Method for Energy Formulation in Image Processing

    DTIC Science & Technology

    2008-04-01

    A. Brook, N. Sochen, and N. Kiryati. Deblurring of color images corrupted by impulsive noise. IEEE Transactions on Image Processing, 16(4):1101–1111 ...tive functionals: variational image deblurring and geodesic active contours for image segmentation. We show that in addition to the fast convergence ... inner product, active contours, deblurring. AMS subject classifications: 35A15, 65K10, 90C53. 1. Introduction. Optimization of a cost functional is a ...

  13. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
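
VIP's reference techniques build on classical ADI PSF subtraction. The sketch below shows that baseline (median PSF subtraction, derotation, median combination) in plain numpy/scipy; it is not VIP's API, and VIP's PCA-based flavors are considerably more capable.

```python
import numpy as np
from scipy.ndimage import rotate

def median_adi(cube, parallactic_angles):
    """Classical ADI: subtract the median PSF, derotate, and combine.

    cube: (n_frames, ny, nx) array of coronagraphic frames.
    parallactic_angles: rotation (degrees) of each frame relative to north.
    """
    ref_psf = np.median(cube, axis=0)            # quasi-static stellar PSF
    residuals = cube - ref_psf                   # planets rotate, the PSF does not
    derotated = np.stack([
        rotate(frame, -ang, reshape=False, order=1)
        for frame, ang in zip(residuals, parallactic_angles)
    ])
    return np.median(derotated, axis=0)          # final combined image
```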

  14. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize ... Subject terms: Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor ... information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers ...

  15. Forensic detection of noise addition in digital images

    NASA Astrophysics Data System (ADS)

    Cao, Gang; Zhao, Yao; Ni, Rongrong; Ou, Bo; Wang, Yongbin

    2014-03-01

    We proposed a technique to detect the global addition of noise to a digital image. As an anti-forensics tool, noise addition is typically used to disguise the visual traces of image tampering or to remove the statistical artifacts left behind by other operations. As such, the blind detection of noise addition has become imperative as well as beneficial to authenticate the image content and recover the image processing history, which is the goal of general forensics techniques. Specifically, the special image blocks, including constant and strip ones, are used to construct the features for identifying noise addition manipulation. The influence of noising on blockwise pixel value distribution is formulated and analyzed formally. The methodology of detectability recognition followed by binary decision is proposed to ensure the applicability and reliability of noising detection. Extensive experimental results demonstrate the efficacy of our proposed noising detector.

  16. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  17. Analyses of requirements for computer control and data processing experiment subsystems. Volume 1: ATM experiment S-056 image data processing system techniques development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The solar imaging X-ray telescope experiment (designated the S-056 experiment) is described. It will photograph the sun in the far ultraviolet or soft X-ray region. Because of the imaging characteristics of this telescope and the necessity of using special techniques for capturing images on film at these wavelengths, methods were developed for computer processing of the photographs. The problems of image restoration were addressed to develop and test digital computer techniques for applying a deconvolution process to restore overall S-056 image quality. Additional techniques for reducing or eliminating the effects of noise and nonlinearity in S-056 photographs were developed.
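
The report does not spell out the deconvolution used, but the idea of restoring image quality degraded by a known blur can be illustrated with a standard Wiener deconvolution, sketched below under the assumption of a known PSF and noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution.

    blurred: degraded image.
    psf: point-spread function, same shape as `blurred`, with its peak at
         index [0, 0] (apply np.fft.ifftshift to a centered kernel first).
    nsr: assumed noise-to-signal power ratio (regularizes the inversion).
    """
    G = np.fft.fft2(blurred)
    Hf = np.fft.fft2(psf)
    # Wiener filter: conj(H) / (|H|^2 + NSR)
    W = np.conj(Hf) / (np.abs(Hf) ** 2 + nsr)
    return np.real(np.fft.ifft2(G * W))
```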

  18. Image analysis for maintenance of coating quality in nickel electroplating baths--real time control.

    PubMed

    Vidal, M; Amigo, J M; Bro, R; van den Berg, F; Ostra, M; Ubide, C

    2011-11-07

    The aim of this paper is to show how it is possible to extract analytical information from images acquired with a flatbed scanner and to make use of this information for real-time control of a nickel plating process. Digital images of steel sheets plated in a nickel bath are used to follow the process under degradation of specific additives. Dedicated software has been developed to make the obtained results accessible to process operators. This includes obtaining the RGB image, selecting the red channel data exclusively, calculating the histogram of the red channel data, and calculating the mean colour value (MCV) and the standard deviation of the red channel data. MCV is then used by the software to determine the concentration of the additives Supreme Plus Brightner (SPB) and SA-1 (for confidentiality reasons, the chemical contents cannot be further detailed) present in the bath (these two additives degrade and their concentration changes during the process). Finally, the software informs the operator when the bath is generating plating of unsuitable quality and suggests the amount of SPB and SA-1 to be added in order to recover the original plating quality. Copyright © 2011 Elsevier B.V. All rights reserved.
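
The red-channel measurement chain described above is simple to prototype. A minimal sketch, assuming imageio as the image loader; the MCV-to-concentration calibration itself is proprietary to the paper and is not reproduced:

```python
import numpy as np
from imageio.v3 import imread

def red_channel_stats(path):
    """Mean colour value (MCV) and spread of the red channel of an RGB scan."""
    rgb = imread(path)                 # (ny, nx, 3) uint8 image
    red = rgb[..., 0].astype(float)    # keep the red channel only
    hist, _ = np.histogram(red, bins=256, range=(0, 256))
    mcv = red.mean()
    return mcv, red.std(), hist

# A calibration curve mapping MCV to additive concentration would then be
# fitted from sheets plated at known SPB / SA-1 levels (hypothetical step;
# the paper's calibration is confidential).
```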

  19. In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-01-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.

  20. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and shows algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by adapting Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
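
Sauvola's adaptive method, which the paper adapts, is available off the shelf. A minimal sketch using scikit-image's stock implementation (the paper's modification is not reproduced here):

```python
from skimage.filters import threshold_sauvola

def binarize_qr(gray, window_size=25, k=0.2):
    """Adaptive binarization of a QR code image with Sauvola's method.

    gray: 2-D float or uint8 image. The window size and k would be tuned
    to the module size of the codes being read (values here are defaults).
    """
    thresh = threshold_sauvola(gray, window_size=window_size, k=k)
    return gray > thresh     # True = background, False = dark modules
```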

  1. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. PMID:22755704

  2. Effect of image quality on calcification detection in digital mammography.

    PubMed

    Warren, Lucy M; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M; Wallis, Matthew G; Chakraborty, Dev P; Dance, David R; Bosmans, Hilde; Young, Kenneth C

    2012-06-01

    This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. © 2012 American Association of Physicists in Medicine.

  3. Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance

    NASA Astrophysics Data System (ADS)

    Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario

    2018-01-01

    Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to the images being subject to low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, which limits their use to a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images only have a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.

  4. New Directions in the Digital Signal Processing of Image Data.

    DTIC Science & Technology

    1987-05-01

    ... Subject terms: object detection and identification; restoration of photon-noise-limited imagery; image ... from incomplete information; restoration of blurred images in additive and multiplicative noise; motion analysis with fast hierarchical algorithms ... different resolutions. As is well known, the solution to the matched filter problem under additive white noise conditions is the correlation receiver ...

  5. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality, thus making them ideal for media-based applications. Using MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates, thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  6. ImagingSIMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-11-06

    ImagingSIMS is an open source application for loading, processing, manipulating and visualizing secondary ion mass spectrometry (SIMS) data. At PNNL, a separate branch has been further developed to incorporate application specific features for dynamic SIMS data sets. These include loading CAMECA IMS-1280, NanoSIMS and modified IMS-4f raw data, creating isotopic ratio images and stitching together images from adjacent interrogation regions. In addition to other modifications of the parent open source version, this version is equipped with a point-by-point image registration tool to assist with streamlining the image fusion process.

  7. Chromaticity based smoke removal in endoscopic images

    NASA Astrophysics Data System (ADS)

    Tchaka, Kevin; Pawar, Vijay M.; Stoyanov, Danail

    2017-02-01

    In minimally invasive surgery, image quality is a critical pre-requisite to ensure a surgeon's ability to perform a procedure. In endoscopic procedures, image quality can deteriorate for a number of reasons, such as fogging due to the temperature gradient after intra-corporeal insertion, lack of focus, and smoke generated when using electro-cautery to dissect tissues without bleeding. In this paper we investigate the use of vision processing techniques to remove surgical smoke and improve the clarity of the image. We model the image formation process by introducing a haze medium to account for the degradation of visibility. For simplicity and computational efficiency we use an adapted dark-channel prior method combined with histogram equalization to remove smoke artifacts, recover the radiance image, and enhance the contrast and brightness of the final result. Our initial results on images from robotic assisted procedures are promising and show that the proposed approach may be used to enhance image quality during surgery without additional suction devices. In addition, the processing pipeline may be used as an important part of a robust surgical vision pipeline that can continue working in the presence of smoke.
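
The standard dark-channel prior recovery (He et al.) that the paper adapts can be sketched as follows; the paper's specific adaptation and the histogram-equalization step are omitted, and the parameters shown are conventional defaults, not the authors' values:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Single-image smoke/haze removal with a dark-channel prior.

    img: float RGB image in [0, 1]. Returns the estimated radiance image.
    """
    # Dark channel: per-pixel minimum over colour channels and a local patch.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and radiance recovery: J = (I - A) / t + A.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```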

  8. Effect of image quality on calcification detection in digital mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection.

  9. A user's guide for the signal processing software for image and speech compression developed in the Communications and Signal Processing Laboratory (CSPL), version 1

    NASA Technical Reports Server (NTRS)

    Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.

    1986-01-01

    A complete documentation of the software developed in the Communication and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech type signals are included. Also, programs for zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.

  10. Multiscale image processing and antiscatter grids in digital radiography.

    PubMed

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had slightly lower quality than those made at full dose, but the difference was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  11. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the design of digital image processing algorithms and the design of image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information-theoretic system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
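
The information rate at the heart of such an assessment is the Shannon rate of a linear channel with given signal and noise power spectra. The sketch below shows the flavor of the computation on a discrete spatial-frequency grid; the transfer function and spectra are invented stand-ins, not Huck et al.'s models:

```python
import numpy as np

def information_rate(signal_psd, transfer_fn, noise_psd, df):
    """Shannon information rate of a linear image-gathering channel.

    All arguments are sampled on the same 2-D spatial-frequency grid;
    df is the grid cell area in cycles^2 per unit area.
    Rate = 0.5 * integral of log2(1 + |tau|^2 * Phi_s / Phi_n).
    """
    snr = np.abs(transfer_fn) ** 2 * signal_psd / noise_psd
    return 0.5 * np.sum(np.log2(1.0 + snr)) * df

# Example grid: Gaussian optics MTF, power-law scene spectrum, white noise.
fx, fy = np.meshgrid(np.linspace(-0.5, 0.5, 128), np.linspace(-0.5, 0.5, 128))
f2 = fx**2 + fy**2
rate = information_rate(signal_psd=1.0 / (1.0 + 100.0 * f2),
                        transfer_fn=np.exp(-8.0 * f2),
                        noise_psd=np.full_like(f2, 1e-3),
                        df=(1.0 / 128) ** 2)
print(f"{rate:.2f} bits per sample area")
```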

  12. Current status on the application of image processing of digital intraoral radiographs amongst general dental practitioners.

    PubMed

    Tohidast, Parisa; Shi, Xie-Qi

    2016-01-01

    The objectives of this study were to assess the subjective knowledge level and the use of image processing on digital intraoral radiographs amongst general dental practitioners at Distriktstandvården AB, Stockholm. A questionnaire consisting of 12 questions was sent to 12 dental practices in Stockholm. Additionally, 2000 radiographs were randomly selected from these clinics for evaluation of applied image processing and its effect on image quality. Descriptive and analytical statistical methods were applied to present the current status of the use of image processing alternatives in the dentists' daily clinical work. 50 out of 53 dentists participated in the survey. The survey showed that most of the dentists in this study had received education on image processing at some stage of their career. No correlations were found between the application of image processing on one side and education received with regard to image processing, previous working experience, age and gender on the other. Image processing in terms of adjusting brightness and contrast was frequently used. Overall, 24.5% of the 200 reviewed images had actually been image processed in practice, in terms of adjusting brightness and/or contrast, and in 90% of these the image quality was improved or maintained. According to our survey, image processing is frequently used by the dentists at Distriktstandvården AB for diagnosing anatomical and pathological changes using intraoral radiographs. In the present study we did not find that the dentists' age, gender, previous working experience or education in image processing influenced their viewpoint towards the application of image processing.

  13. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review

    PubMed Central

    Sheridan, Heather; Reingold, Eyal M.

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise. PMID:29033865

  14. The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-01-01

    In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise.

  15. Techniques for Microwave Imaging.

    DTIC Science & Technology

    1981-01-18

    reduce cross-range sidelobes in the subsequent FFT and the array was padded with 64 additional rows containing zeros. The configuration of the array is ... of microwave imagery obtained by synthetic aperture processing described in reference 1-2. This type of image, generated by processing radar data ... 1,000 wavelengths. Although these are the intended applications, the imaging methods considered have general applicability to environments outside ...

  16. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12

    DTIC Science & Technology

    2015-09-03

    the Geostationary Ocean Color Imager (GOCI) sensor, aboard the Communication, Ocean and Meteorological Satellite (COMS). Additionally, this ... this capability works in conjunction with AOPS • Improvements to the AOPS mosaicking capability • Prepare the NRT Geostationary Ocean Color Imager ... Acronyms: ... Warfare (EXW), Geostationary Ocean Color Imager (GOCI), Gulf of Mexico (GOM), Hierarchical Data Format (HDF), Integrated Data Processing System (IDPS) ...

  17. Integrating digital topology in image-processing libraries.

    PubMed

    Lamy, Julien

    2007-01-01

    This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms respecting topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The obtained filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the method can be adapted with only minor modifications to other image-processing libraries.
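
Homotopic thinning of the kind mentioned above rests on a simple-point test: a pixel may be deleted only if its removal preserves topology. A minimal 2-D version (foreground 8-connected, background 4-connected), assuming interior pixels; ITK's actual infrastructure is more general and covers higher dimensions:

```python
import numpy as np
from scipy.ndimage import label

N8 = np.ones((3, 3), dtype=int)                        # 8-connectivity
N4 = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])       # 4-connectivity

def is_simple_2d(img, y, x):
    """Test whether foreground pixel (y, x) is a simple point, i.e. whether
    deleting it preserves the topology of a binary 2-D image.
    (y, x) must be an interior pixel: 1 <= y < h-1, 1 <= x < w-1."""
    patch = img[y-1:y+2, x-1:x+2].astype(bool)
    patch[1, 1] = False                 # examine the punctured neighborhood
    fg_labels, n_fg = label(patch, structure=N8)
    bg = ~patch
    bg[1, 1] = False                    # exclude the center pixel itself
    bg_labels, _ = label(bg, structure=N4)
    # Background components must touch a 4-neighbor of the center pixel.
    touching = {bg_labels[0, 1], bg_labels[1, 0], bg_labels[1, 2], bg_labels[2, 1]}
    n_bg = len(touching - {0})
    return n_fg == 1 and n_bg == 1

img = np.pad(np.array([[1, 1, 0],
                       [0, 1, 1],
                       [0, 0, 1]]), 1)
print(is_simple_2d(img, 2, 2))          # True: the center can be thinned away
```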

  18. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waveforms, improve image resolution, and highlight defects in an image. Since some similarity exists among all waveform-based nondestructive evaluation (NDE) methods, a common software platform containing multiple signal- and image-processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. It offers the user hundreds of basic and advanced signal- and image-processing capabilities, including specialized 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data, such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data, such as that from acoustic emission, vibration, or earthquake measurements.
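
The joint time-frequency analysis mentioned above can be illustrated with a stock spectrogram; the sketch below uses scipy on a synthetic acoustic-emission-like burst and is not the commercial tool's implementation:

```python
import numpy as np
from scipy.signal import spectrogram

# A transient burst buried in noise, of the kind joint time-frequency
# analysis is designed to localize.
fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)
sig = 0.05 * rng.standard_normal(t.size)
burst = (t > 0.4) & (t < 0.45)
sig[burst] += np.sin(2 * np.pi * 2000.0 * t[burst])

f, tau, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)
peak = np.unravel_index(np.argmax(Sxx), Sxx.shape)
print(f"peak energy at {f[peak[0]]:.0f} Hz, t = {tau[peak[1]]:.3f} s")
```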

  19. Complex noise suppression using a sparse representation and 3D filtering of images

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.

    2017-08-01

    A novel method for the filtering of images corrupted by complex noise, composed of randomly distributed impulses and additive Gaussian noise, is substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise; subsequent image processing to suppress the additive noise, based on 3D filtering and a sparse representation of signals in a wavelet basis; and a concluding image processing procedure to clean the final image of errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
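
A heavily simplified sketch of the two-stage idea, substituting a median-based impulse detector and 2-D wavelet soft-thresholding (via PyWavelets) for the paper's detector and 3D filtering stage:

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def suppress_complex_noise(img, impulse_thresh=40.0, wavelet="db4", level=3):
    """Two-stage denoising sketch: detect and replace impulses, then shrink
    wavelet coefficients to attenuate the remaining additive Gaussian noise."""
    # Stage 1: pixels far from their local median are treated as impulses.
    med = median_filter(img, size=3)
    impulses = np.abs(img - med) > impulse_thresh
    cleaned = np.where(impulses, med, img).astype(float)

    # Stage 2: soft-threshold the detail coefficients (sparse representation).
    coeffs = pywt.wavedec2(cleaned, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(cleaned.size))    # universal threshold
    den = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(den, wavelet)[: img.shape[0], : img.shape[1]]
```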

  20. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  1. Software for Verifying Image-Correlation Tie Points

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Yagi, Gary

    2008-01-01

    A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes imperfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-right correlation map in addition to the usual right-left correlation map. The additional map must be generated, which doubles the processing time; this increased time can now be afforded in the data-processing pipeline because the parallelization discussed in the previous article reduced the time for map generation from about 60 minutes to 3 minutes. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x'',y'') in the left image. If (x,y) and (x'',y'') are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that allows for round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
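
    A minimal sketch of the forward-backward consistency check described above, assuming the two correlation maps are stored as arrays of floating-point pixel coordinates:

        import numpy as np

        def verify_tie_points(lr_map, rl_map, tol=1.0):
            # lr_map[y, x]   -> (x', y') in the right image
            # rl_map[y', x'] -> (x'', y'') back in the left image
            h, w = lr_map.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]
            xp = np.clip(np.round(lr_map[..., 0]).astype(int), 0, rl_map.shape[1] - 1)
            yp = np.clip(np.round(lr_map[..., 1]).astype(int), 0, rl_map.shape[0] - 1)
            back = rl_map[yp, xp]                     # (x'', y'') for every left pixel
            err = np.hypot(back[..., 0] - xs, back[..., 1] - ys)
            # True where the round trip closes within the error window.
            return err <= tol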

  2. Real-time co-registered ultrasound and photoacoustic imaging system based on FPGA and DSP architecture

    NASA Astrophysics Data System (ADS)

    Alqasemi, Umar; Li, Hai; Aguirre, Andres; Zhu, Quing

    2011-03-01

    Co-registering ultrasound (US) and photoacoustic (PA) imaging is a logical extension to conventional ultrasound because the two modalities provide complementary information on tumor morphology, tumor vasculature, and hypoxia for cancer detection and characterization. In addition, both modalities are capable of providing real-time images for clinical applications. In this paper, a Field Programmable Gate Array (FPGA) and Digital Signal Processor (DSP) module-based real-time US/PA imaging system is presented. The system provides real-time US/PA data acquisition and image display at up to 5 frames/s with the currently implemented DSP board, and can be upgraded to 15 frames/s, the maximum pulse repetition rate of the laser used, by implementing an advanced DSP module. Additionally, the photoacoustic RF data for each frame are saved for further off-line processing. The system frontend consists of eight 16-channel modules made of commercial and customized circuits. Each 16-channel module consists of two commercial 8-channel receiving boards and one FPGA board from Analog Devices. Each receiving board contains an IC that combines 8-channel low-noise amplifiers, variable-gain amplifiers, anti-aliasing filters, and ADCs in a single chip with a sampling frequency of 40 MHz. The FPGA board captures the LVDS Double Data Rate (DDR) digital output of the receiving board and performs data conditioning and sub-beamforming. A customized 16-channel transmission circuit is connected to the two receiving boards for US pulse-echo (PE) mode data acquisition. A DSP module uses an External Memory Interface (EMIF) to communicate with the eight 16-channel modules through a customized adaptor board. The DSP transfers either sub-beamformed data (US pulse-echo mode or PA imaging mode) or raw data from the FPGA boards to its DDR-2 memory through the EMIF link, performs additional processing, and then transfers the data to the PC for further image processing. The PC code performs image processing including demodulation, beam envelope detection, and scan conversion. Additionally, the PC code pre-calculates the delay coefficients used for transmission focusing and dynamic receive focusing for different types of transducers to speed up the imaging process. To further accelerate imaging, a multi-threading technique allows formation of the previous image frame and acquisition of the next one to proceed simultaneously. The system is also capable of semi-real-time automated SO2 imaging at 10 seconds per frame by turning the wavelength knob of the laser automatically with a stepper motor controlled by the system. Initial in vivo experiments were performed on animal tumors to map their vasculature and hypoxia levels, which were superimposed on co-registered US images. The real-time system captures co-registered US/PA images free of motion artifacts and also provides dynamic information when contrast agents are used.
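
    The abstract does not give the delay formula; conventionally, dynamic receive-focusing delays for a linear array follow from element-to-focus path lengths. A sketch under that assumption, using the system's 16-channel sub-aperture and 40 MHz sampling rate (the element pitch and focus geometry below are illustrative):

        import numpy as np

        def receive_focus_delays(elem_x, focus_x, focus_z, c=1540.0):
            # Path from the focal point back to each element; delays are
            # referenced to the shortest path so all values are >= 0.
            path = np.sqrt(focus_z**2 + (elem_x - focus_x)**2)
            return (path - path.min()) / c

        pitch = 0.3e-3                                  # hypothetical element pitch (m)
        elem_x = (np.arange(16) - 7.5) * pitch          # 16-element sub-aperture
        delays = receive_focus_delays(elem_x, focus_x=0.0, focus_z=20e-3)
        samples = np.round(delays * 40e6).astype(int)   # delays at the 40 MHz ADC rate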

  3. Automatic Assessment and Reduction of Noise using Edge Pattern Analysis in Non-Linear Image Enhancement

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.

    2004-01-01

    Noise is the primary visibility limit in non-linear image enhancement, and in the post-enhancement image it is no longer a statistically stable additive noise. Therefore, novel approaches are needed both to assess and to reduce spatially variable noise at this stage of overall image processing. Here we examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise-reduction methods.

  4. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as by strong winds, turbulence, or sudden operator inputs. Such blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time-consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably, to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on how humans detect blur: by comparing an image with other images in order to establish whether it is blurred or not. The developed algorithm simulates this procedure by internally creating an image for comparison using image processing, which makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), does not on its own provide an absolute number for judging whether an image is blurred; to achieve a reliable judgement of image sharpness, the SIEDS value has to be compared with other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets, two of which are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are optically correct, making it applicable to UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications; the results show that it is, which significantly extends the field of application for the algorithm.
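
    The exact SIEDS computation is not spelled out in the abstract; the sketch below only follows the stated idea of internally creating a comparison image (here by re-blurring) and judging sharpness relative to other scores from the same dataset. The function name, parameters, file paths, and cutoff rule are all illustrative:

        import cv2
        import numpy as np

        def blur_score(gray):
            # Create the internal comparison image by re-blurring the input;
            # a sharp image loses far more edge energy than an already
            # blurred one, so a *small* score indicates a blurred input.
            reblurred = cv2.GaussianBlur(gray, (9, 9), 0)
            edges = cv2.Sobel(gray.astype(float), cv2.CV_64F, 1, 0)
            edges_rb = cv2.Sobel(reblurred.astype(float), cv2.CV_64F, 1, 0)
            return np.std(np.abs(edges) - np.abs(edges_rb))

        # Judge sharpness only relative to scores from the same dataset,
        # as the paper does for SIEDS values.
        paths = ["IMG_0001.JPG", "IMG_0002.JPG"]        # hypothetical UAV frames
        scores = {p: blur_score(cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in paths}
        cutoff = 0.5 * np.median(list(scores.values())) # illustrative rule
        blurred = [p for p, s in scores.items() if s < cutoff]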

  5. SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, George; Lee, Adam

    In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for integration and electrical interconnection of the fully depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and developed the outline of a process option that includes a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz's CA18HJ process, using standard and circular-enclosed transistors.

  6. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell; Nettles, Mindy

    2015-01-01

    The Additive Manufacturing Infrared Inspection Task started the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process, using infrared camera imaging and processing techniques. This project will benefit additive manufacturing by providing real-time inspection of internal geometry that is not currently possible, and by reducing the time and cost of additively manufactured parts through automated real-time dimensional inspections that eliminate post-production inspections.

  7. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    PubMed

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

    The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees, each of whom completed four volumetric image cases. Two types of data were obtained, concerning scroll behavior and think-aloud data. Types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology comprising three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. They suggest that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high-spatial-resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous-mode “Mosaic Datacube” approach allows high-mass-resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing compared with feature-based processing. We describe the use of distributed computing for processing FT-ICR MS imaging datasets, with generation of continuous-mode Mosaic Datacubes for high-mass-resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  9. Advanced Secure Optical Image Processing for Communications

    NASA Astrophysics Data System (ADS)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as 2D and 3D images with high resolution. Thus, more complex networks and longer processing times become necessary, and high image quality and transmission speeds are demanded for an increasing number of applications. To satisfy these two demands, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research that is converging toward optical/numerical hybrid solutions for high-volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book, with the aim of combining the advantages of both techniques. Additionally, purely numerical or optical solutions are also considered, since they emphasize the advantages of each of the two approaches separately.

  10. Counting Craters on MOC Images: Production Functions and Other Complications

    NASA Technical Reports Server (NTRS)

    Plaut, J. J.

    2001-01-01

    New crater counts on MOC images and associated Viking Orbiter images are used to address the issue of the crater production function at Mars, and to infer aspects of resurfacing processes. Additional information is contained in the original extended abstract.

  11. Imaging of thymic disorders

    PubMed Central

    Bogot, Naama R; Quint, Leslie E

    2005-01-01

    Evaluation of the thymus poses a challenge to the radiologist. In addition to age-related changes in thymic size, shape, and tissue composition, there is considerable variability in the normal adult thymic appearance within any age group. Many different types of disorders may affect the thymus, including hyperplasia, cysts, and benign and malignant neoplasms, both primary and secondary; clinical and imaging findings typical for each disease process are described in this article. Whereas computed tomography is the mainstay for imaging the thymus, other imaging modalities may occasionally provide additional structural or functional information. PMID:16361143

  12. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
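
    The Viking-era algorithms themselves are not reproduced here; the sketch below only illustrates the two subjective-enhancement operations named, contrast stretching and high-pass filtering, assuming a simple percentile stretch and an unsharp-style local-mean subtraction:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def contrast_stretch(img, lo_pct=1, hi_pct=99):
            # Map the chosen percentile range onto the full 8-bit range.
            lo, hi = np.percentile(img, [lo_pct, hi_pct])
            return np.clip((img - lo) / (hi - lo) * 255, 0, 255)

        def high_pass(img, size=15, gain=1.5):
            # Subtract a local mean to suppress slow shading and
            # emphasize fine detail.
            background = uniform_filter(img.astype(float), size)
            return np.clip(img + gain * (img - background), 0, 255)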

  13. Identifying regions of interest in medical images using self-organizing maps.

    PubMed

    Teng, Wei-Guang; Chang, Ping-Lin

    2012-10-01

    Advances in data acquisition, processing, and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach to process the various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROI) using self-organizing maps (SOM). We devise a two-stage SOM approach that can precisely identify the dominant colors of a medical image and then segment it into several small regions. In addition, by appropriately conducting recursive merging steps to merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
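
    The paper's two-stage SOM is not reproduced in the abstract; the compact sketch below only shows the underlying idea, training a small one-dimensional SOM on pixel colors and labeling each pixel by its nearest node (the recursive merging stage is omitted, and all parameters are illustrative):

        import numpy as np

        def train_som(pixels, n_nodes=16, iters=2000, lr0=0.5, seed=0):
            # 1-D SOM over color space: the winning node and its grid
            # neighbours move toward each randomly drawn pixel sample.
            rng = np.random.default_rng(seed)
            weights = pixels[rng.integers(0, len(pixels), n_nodes)].astype(float)
            grid = np.arange(n_nodes)
            for t in range(iters):
                x = pixels[rng.integers(len(pixels))]
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
                frac = 1.0 - t / iters                  # decaying schedules
                sigma = max(n_nodes / 2 * frac, 0.5)
                h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
                weights += lr0 * frac * h[:, None] * (x - weights)
            return weights

        # Stage 1: the trained node weights approximate the dominant colors.
        # Stage 2: label every pixel with its nearest node to form regions.
        img = np.zeros((64, 64, 3), dtype=np.uint8)     # placeholder; use a real image
        pixels = img.reshape(-1, 3).astype(float)
        nodes = train_som(pixels)
        dists = ((pixels[:, None, :] - nodes[None]) ** 2).sum(axis=2)
        label_img = np.argmin(dists, axis=1).reshape(img.shape[:2])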

  14. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from image generation to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data, as well as the diversified user access requirements, the implementation of a medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata; in addition, it focuses on image metadata content evaluation and metadata quality management.

  15. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...

  16. Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Bila, Z.; Reznicek, J.; Pavelka, K.

    2013-07-01

    This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using the two datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. We first describe the process of data acquisition, then the processing of both datasets into a proper format for the subsequent fusion, and finally the fusion itself. The fusion can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected on both images with a proper detector, which yields a transformation matrix enabling transformation of the range image onto the panoramic image. The range data are then remapped from range image space into panoramic image space and stored as an additional "range" channel. The process of image fusion is validated by comparing similar features extracted from both datasets.
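
    The abstract does not name its feature detector or transformation model; the sketch below uses ORB features and a RANSAC homography as stand-ins to illustrate the transform-then-remap flow. A homography is a simplification of the true range-to-panorama mapping, and the file names are hypothetical:

        import cv2
        import numpy as np

        # Load the two datasets rendered as 2D grayscale images.
        range_img = cv2.imread("range_intensity.png", cv2.IMREAD_GRAYSCALE)
        pano_img = cv2.imread("panorama.png", cv2.IMREAD_GRAYSCALE)

        # Detect and describe features on both images, then match them.
        orb = cv2.ORB_create(4000)
        kp1, des1 = orb.detectAndCompute(range_img, None)
        kp2, des2 = orb.detectAndCompute(pano_img, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

        # Estimate the transformation from matched points (RANSAC rejects
        # outliers), then remap the range image into panoramic image space
        # so its values can be stored as an additional "range" channel.
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        remapped = cv2.warpPerspective(range_img, H, pano_img.shape[::-1])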

  17. Inverse scattering and refraction corrected reflection for breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John

    2010-03-01

    Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms, which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D attenuation and speed-of-sound (SOS) images. The reflection algorithm is based on canonical ray tracing with refraction correction via the SOS and attenuation reconstructions; the refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, in 8 minutes on average. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging-mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.

  18. Lipase production in solid-state fermentation monitoring biomass growth of aspergillus niger using digital image processing.

    PubMed

    Dutra, Júlio C V; da C Terzi, Selma; Bevilaqua, Juliana Vaz; Damaso, Mônica C T; Couri, Sônia; Langone, Marta A P; Senna, Lilian F

    2008-03-01

    The aim of this study was to monitor the biomass growth of Aspergillus niger in solid-state fermentation (SSF) for lipase production using a digital image processing technique. The strain A. niger 11T53A14 was cultivated in SSF using wheat bran as support, enriched with 0.91% (m/v) ammonium sulfate. The addition of several vegetable oils (castor, soybean, olive, corn, and palm oils) was investigated to enhance lipase production, and the maximum lipase activity was obtained using 2% (m/m) castor oil. Under these conditions, growth was evaluated every 24 h for 5 days by glycosamine content analysis and digital image processing, and lipase activity was also determined. The results indicated that the digital image processing technique can be used to monitor biomass growth in an SSF process and to correlate biomass growth with enzyme activity. In addition, the immobilized esterification lipase activity was determined for butyl oleate synthesis, with and without 50% v/v hexane, resulting in 650 and 120 U/g, respectively. The enzyme was also used for transesterification of soybean oil and ethanol, with a maximum yield of 2.4% after 30 min of reaction.

  19. Lipase Production in Solid-State Fermentation Monitoring Biomass Growth of Aspergillus niger Using Digital Image Processing

    NASA Astrophysics Data System (ADS)

    Dutra, Julio C. V.; da Terzi, Selma C.; Bevilaqua, Juliana Vaz; Damaso, Mônica C. T.; Couri, Sônia; Langone, Marta A. P.; Senna, Lilian F.

    The aim of this study was to monitor the biomass growth of Aspergillus niger in solid-state fermentation (SSF) for lipase production using a digital image processing technique. The strain A. niger 11T53A14 was cultivated in SSF using wheat bran as support, enriched with 0.91% (m/v) ammonium sulfate. The addition of several vegetable oils (castor, soybean, olive, corn, and palm oils) was investigated to enhance lipase production, and the maximum lipase activity was obtained using 2% (m/m) castor oil. Under these conditions, growth was evaluated every 24 h for 5 days by glycosamine content analysis and digital image processing, and lipase activity was also determined. The results indicated that the digital image processing technique can be used to monitor biomass growth in an SSF process and to correlate biomass growth with enzyme activity. In addition, the immobilized esterification lipase activity was determined for butyl oleate synthesis, with and without 50% v/v hexane, resulting in 650 and 120 U/g, respectively. The enzyme was also used for transesterification of soybean oil and ethanol, with a maximum yield of 2.4% after 30 min of reaction.

  20. Fast and Accurate Cell Tracking by a Novel Optical-Digital Hybrid Method

    NASA Astrophysics Data System (ADS)

    Torres-Cisneros, M.; Aviña-Cervantes, J. G.; Pérez-Careta, E.; Ambriz-Colín, F.; Tinoco, Verónica; Ibarra-Manzano, O. G.; Plascencia-Mora, H.; Aguilera-Gómez, E.; Ibarra-Manzano, M. A.; Guzman-Cabrera, R.; Debeir, Olivier; Sánchez-Mondragón, J. J.

    2013-09-01

    An innovative methodology to detect and track cells in microscope images enhanced by optical cross-correlation techniques is proposed in this paper. In order to increase the tracking sensitivity, image pre-processing has been implemented as a morphological operator on the microscope image. Results show that the pre-processing step allows additional frames of cell tracking, thereby increasing robustness. The proposed methodology can be used to analyze different problems, such as mitosis, cell collisions, and cell overlapping, and is ultimately intended to help identify and treat illnesses and malignancies.
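
    The paper performs the cross-correlation optically; the sketch below is a digital stand-in using normalized cross-correlation template matching, with morphological pre-processing applied before each match as described above. The kernel size and template-update rule are illustrative:

        import cv2
        import numpy as np

        def track_cell(frames, template):
            # frames: list of grayscale uint8 arrays; template: initial cell
            # cutout. The correlation peak gives the most probable position.
            positions = []
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            for frame in frames:
                # Morphological opening suppresses small bright debris
                # before correlation (the pre-processing step above).
                clean = cv2.morphologyEx(frame, cv2.MORPH_OPEN, kernel)
                score = cv2.matchTemplate(clean, template, cv2.TM_CCOEFF_NORMED)
                _, _, _, max_loc = cv2.minMaxLoc(score)
                positions.append(max_loc)
                # Re-cut the template at the new position to follow slow
                # changes in the cell's appearance.
                x, y = max_loc
                h, w = template.shape
                template = clean[y:y + h, x:x + w]
            return positions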

  1. Real-Time Processing of Pressure-Sensitive Paint Images

    DTIC Science & Technology

    2006-12-01

    intermediate or final data to the hard disk in 3D grid format. In addition to the pressure or pressure coefficient at every grid point, the saved file may...occurs. Nevertheless, to achieve an accurate mapping between 2D image coordinates and 3D spatial coordinates, additional parameters must be introduced. A...improved mapping between the 2D and 3D coordinates. In a more sophisticated approach, additional terms corresponding to specific deformation modes

  2. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Yang, D

    2015-06-15

    Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of positions and orientations for daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together for use as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling, and couch table detection) and then matched to the template image to identify the laterality (left or right), position, orientation, and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Of 200 kV portal images, 182 were correctly detected, a rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically, and requires only the image intensity information in kV portal images. It can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and thus improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g., verification of daily patient setup accuracy. This work was partially supported by a research grant from Varian Medical Systems.

  3. Laser-assisted nanomaterial deposition, nanomanufacturing, in situ monitoring and associated apparatus

    DOEpatents

    Mao, Samuel S; Grigoropoulos, Costas P; Hwang, David J; Minor, Andrew M

    2013-11-12

    Laser-assisted apparatus and methods for performing nanoscale material processing, including nanodeposition of materials, can be controlled very precisely to yield both simple and complex structures with sizes less than 100 nm. Optical or thermal energy in the near field of a photon (laser) pulse is used to fabricate submicron and nanometer structures on a substrate. A wide variety of laser material processing techniques can be adapted for use, including subtractive (e.g., ablation, machining, or chemical etching), additive (e.g., chemical vapor deposition, selective self-assembly), and modification (e.g., phase transformation, doping) processes. Additionally, the apparatus can be integrated into imaging instruments, such as SEM and TEM, to allow real-time imaging of the material processing.

  4. Single-image hard-copy display of the spine utilizing digital radiography

    NASA Astrophysics Data System (ADS)

    Artz, Dorothy S.; Janchar, Timothy; Milzman, David; Freedman, Matthew T.; Mun, Seong K.

    1997-04-01

    Regions of the entire spine contain a wide latitude of tissue densities within the imaged field of view, presenting a problem for adequate radiological evaluation. With screen/film technology, the optimal technique for one area of the radiograph is sub-optimal for another area. Computed radiography (CR), with its inherently wide dynamic range, has been shown to be better than screen/film for lateral cervical spine imaging, but limitations remain with standard image processing. By utilizing a dynamic range control (DRC) algorithm based on unsharp masking and signal transformation prior to gradation and frequency processing within the CR system, more vertebral bodies can be seen on a single hard-copy display of the lateral cervical, thoracic, and thoracolumbar examinations. Examinations of the trauma cross-table lateral cervical spine, lateral thoracic spine, and lateral thoracolumbar spine were collected on live patients using photostimulable storage phosphor plates, the Fuji FCR 9000 reader, and the Fuji AC-3 computed radiography reader. Two images were produced from a single exposure: one with standard image processing, and a second with the standard processing plus the additional DRC algorithm. Both sets were printed on a Fuji LP 414 laser printer. Two different DRC algorithms were applied, depending on which portion of the spine was not well visualized: one algorithm increased optical density and the second decreased it. The resulting image pairs were then reviewed by a panel of radiologists. Images produced with the additional DRC algorithm demonstrated improved visualization of previously 'under-exposed' and 'over-exposed' regions within the same image. Where lung field had previously obscured bony detail of the lateral thoracolumbar spine due to 'over-exposure', the image with the DRC applied to decrease optical density allowed easy visualization of the entire area of interest. For areas of the lateral cervical spine and lateral thoracic spine that typically have a low optical density, the DRC algorithm increased the optical density over that region, improving visualization of the C7-T2 and T11-L2 vertebral bodies, which is critical in trauma radiography. Emergency medicine physicians also reviewing the lateral cervical spine images were able to clear 37% of the DRC images, compared with 30% of the non-DRC images, for removal of the cervical collar. The DRC-processed images reviewed by the physicians do not have a typical screen/film appearance; nevertheless, these different images were preferred for the three examinations in this study. This method of image processing, after being tested and accepted, is in clinical use at the Georgetown University Medical Center Department of Radiology for the following examinations: cervical spine, lateral thoracic spine, lateral thoracolumbar spine, facial bones, shoulder, sternum, feet, and portable chest. Computed radiography imaging of the spine is improved by the addition of histogram equalization known as dynamic range control (DRC); more anatomical structures are visualized on a single hard-copy display.
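
    Fuji's DRC algorithm is proprietary; the sketch below only illustrates the stated principle of unsharp masking plus signal transformation: a heavily smoothed copy of the image isolates the low-frequency exposure trend, and a correction derived from it shifts the optical density of under-exposed regions while the detail signal passes through unchanged. All parameters are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dynamic_range_control(img, sigma=30, gain=0.6, pivot=0.25):
            # img: raw projection normalized to [0, 1].
            # Unsharp-style mask: a heavily smoothed copy captures the
            # low-frequency exposure trend rather than anatomical detail.
            mask = gaussian_filter(img.astype(float), sigma)
            # Signal transformation: brighten only where the mask falls
            # below the pivot (the "under-exposed" regions); the detail
            # component (img - mask) is unaffected by the correction.
            correction = gain * np.clip(pivot - mask, 0, None)
            return np.clip(img + correction, 0, 1)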

  5. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
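
    The film implements the logarithm optically; the same principle has a simple digital analogue (homomorphic filtering), sketched below: a log transform converts multiplicative noise to additive noise, which is then linearly filtered in the Fourier plane. The Gaussian low-pass and its cutoff are illustrative choices:

        import numpy as np

        def homomorphic_filter(img, cutoff=0.1):
            # Log transform: multiplicative noise becomes additive.
            log_img = np.log1p(img.astype(float))
            # Linear filtering in the Fourier plane of the transformed image.
            F = np.fft.fftshift(np.fft.fft2(log_img))
            h, w = img.shape
            yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
            radius = np.hypot(yy / h, xx / w)
            lowpass = np.exp(-(radius / cutoff) ** 2)   # suppress speckle's
            filtered = np.real(                          # high frequencies
                np.fft.ifft2(np.fft.ifftshift(F * lowpass)))
            # Invert the logarithm to return to the intensity domain.
            return np.expm1(filtered)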

  6. Protocols for Image Processing based Underwater Inspection of Infrastructure Elements

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram

    2015-07-01

    Image processing can be an important tool for inspecting underwater infrastructure elements like bridge piers and pile wharves. Underwater inspection often relies on the visual descriptions of divers who are not necessarily trained in the specifics of structural degradation, and the information may be vague, prone to error, or open to significant variation in interpretation. Underwater vehicles, on the other hand, can be quite expensive for such inspections. Additionally, there is now significant encouragement globally towards the deployment of more offshore wind turbines and wave energy devices, and the requirement for underwater inspection can be expected to increase significantly in the coming years. While the merit of image-processing-based assessment of the condition of underwater structures is understood to a certain degree, no protocol exists for such image-based methods. This paper discusses and describes an image processing protocol for the underwater inspection of structures. A stereo-imaging method is considered in this regard, and protocols are suggested for image storage, imaging, diving, and inspection. A combined underwater imaging protocol is finally presented, which can be used in a variety of situations within a range of image scenes and environmental conditions affecting imaging. An example of detecting marine growth on a structure in Cork Harbour, Ireland, is presented.

  7. A comparison of image processing techniques for bird recognition.

    PubMed

    Nadimpalli, Uma D; Price, Randy R; Hall, Steven G; Bomma, Pallavi

    2006-01-01

    Bird predation is one of the major concerns for fish culture in open ponds. A novel method for dispersing birds is the use of autonomous vehicles, whose efficiency can be improved by image recognition software. Several image processing techniques for the recognition of birds were tested, and a series of morphological operations were implemented. We divided images into three types, Type 1, Type 2, and Type 3, based on the level of difficulty of recognizing birds: Type 1 images were clear, Type 2 images were moderately clear, and Type 3 images were unclear. Local thresholding was implemented using the HSV (Hue, Saturation, Value), GRAY, and RGB (Red, Green, Blue) color models on all three types of images, and the results were tabulated. Template matching using normalized correlation and artificial neural networks (ANN) are the other methods developed in this study in addition to image morphology. Template matching produced satisfactory results irrespective of the difficulty level of the images, whereas artificial neural networks produced accuracies of 100, 60, and 50% on Type 1, Type 2, and Type 3 images, respectively; the correct classification rate can be increased by further training. Future research will focus on testing the recognition algorithms in natural or aquacultural settings on autonomous boats. Applications of such techniques to industrial, agricultural, or related areas are additional future possibilities.
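
    As an illustration of the thresholding-plus-morphology stage, a sketch using the HSV color model is given below; the threshold bounds and minimum blob area are illustrative and would be tuned per difficulty type:

        import cv2
        import numpy as np

        def segment_birds(bgr, lo=(0, 0, 0), hi=(180, 80, 90)):
            # Thresholding in the HSV color model: dark, low-saturation
            # pixels are kept as candidate bird silhouettes.
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
            # Morphological opening then closing removes speckle and
            # fills small holes in the detected blobs.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
            n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
            # Keep components large enough to plausibly be birds.
            return [s for s in stats[1:] if s[cv2.CC_STAT_AREA] > 200]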

  8. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; Martin, Aiden A.; Depond, Philip J.; Guss, Gabriel M.; Thampy, Vivek; Fong, Anthony Y.; Weker, Johanna Nelson; Stone, Kevin H.; Tassone, Christopher J.; Kramer, Matthew J.; Toney, Michael F.; Van Buuren, Anthony; Matthews, Manyalibo J.

    2018-05-01

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ˜1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ˜50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.

  9. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes.

    PubMed

    Calta, Nicholas P; Wang, Jenny; Kiss, Andrew M; Martin, Aiden A; Depond, Philip J; Guss, Gabriel M; Thampy, Vivek; Fong, Anthony Y; Weker, Johanna Nelson; Stone, Kevin H; Tassone, Christopher J; Kramer, Matthew J; Toney, Michael F; Van Buuren, Anthony; Matthews, Manyalibo J

    2018-05-01

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ∼1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ∼50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.

  10. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.

  11. An instrument for in situ time-resolved X-ray imaging and diffraction of laser powder bed fusion additive manufacturing processes

    DOE PAGES

    Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; ...

    2018-05-01

    In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.

  12. Implementing An Image Understanding System Architecture Using Pipe

    NASA Astrophysics Data System (ADS)

    Luck, Randall L.

    1988-03-01

    This paper describes PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low-level vision and high-level vision. Low-level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High-level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory-mapped into the high-level processor, thus forming the high-speed link between the low- and high-level vision processors. The mechanisms for bottom-up, data-driven processing and top-down, model-driven processing are discussed.

  13. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-07-01

    With increasing advances in digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmological care and treatment. In the current research, Retina Image Analysis (RIA) was developed for optometrists at the Eye Care Center of Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are served with various options like saving, processing, and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation, and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help optometrists gain a better understanding when analyzing a patient's retina. Finally, the Retina Image Analysis procedure was developed using MATLAB (R2011b), and promising results are attained that are comparable to the state of the art. © 2017 Wiley Periodicals, Inc.

  14. Focal-Plane Sensing-Processing: A Power-Efficient Approach for the Implementation of Privacy-Aware Networked Visual Sensors

    PubMed Central

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-01-01

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849

  15. Focal-plane sensing-processing: a power-efficient approach for the implementation of privacy-aware networked visual sensors.

    PubMed

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-08-19

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects.

  16. Real-Time X-ray Imaging Reveals Interfacial Growth, Suppression, and Dissolution of Zinc Dendrites Dependent on Anions of Ionic Liquid Additives for Rechargeable Battery Applications.

    PubMed

    Song, Yuexian; Hu, Jiugang; Tang, Jia; Gu, Wanmiao; He, Lili; Ji, Xiaobo

    2016-11-23

    The dynamic interfacial growth, suppression, and dissolution of zinc dendrites have been studied with imidazolium ionic liquids (ILs) as additives, on the basis of in situ synchrotron radiation X-ray imaging. The phase-contrast difference of the real-time images indicates that zinc dendrites are preferentially developed on the substrate surface in ammoniacal electrolytes. After adding imidazolium ILs, both the nucleation overpotential and the polarization extent increase in the order additive-free < EMI-Cl < EMI-PF6 < EMI-TFSA < EMI-DCA. The real-time X-ray images show that EMI-Cl can suppress zinc dendrites but results in the formation of loose deposits. The EMI-PF6 and EMI-TFSA additives can smooth the deposit morphology by suppressing the initiation and growth of dendritic zinc. The addition of EMI-DCA increases the number of dendrite initiation sites, whereas it decreases the growth rate of dendrites. Furthermore, the dissolution behaviors of zinc deposits are compared. The zinc dendrites show a slow dissolution process in the additive-free electrolyte, whereas zinc deposits are easily detached from the substrate in the presence of EMI-Cl, EMI-PF6, or EMI-TFSA due to the formation of a loose structure. Hence, the dependence of zinc dendrites on the anions of imidazolium IL additives during both electrodeposition and dissolution has been elucidated. These results could provide valuable information for improving the performance of zinc-based rechargeable batteries.

  17. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  18. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation involves several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects; three other images were also used, photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention; the lack of an easily recognizable context in the test image may have contributed to it. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  19. Nonlinear coherent optical image processing using logarithmic transmittance of bacteriorhodopsin films

    NASA Astrophysics Data System (ADS)

    Downie, John D.

    1995-08-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.

  20. Nonlinear Coherent Optical Image Processing Using Logarithmic Transmittance of Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.

  1. Respiratory motion correction in emission tomography image reconstruction.

    PubMed

    Reyes, Mauricio; Malandain, Grégoire; Koulibaly, Pierre Malick; González Ballester, Miguel A; Darcourt, Jacques

    2005-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations and imprecise diagnoses. Solutions such as respiratory gating, correlated dynamic PET techniques, list-mode data based techniques, and others have been tested, with improvements in the spatial activity distribution of lung lesions but with the disadvantage of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion correction directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the Maximum Likelihood Expectation Maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
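
    A toy sketch of how a motion model can be folded into the MLEM system matrix, in the spirit of the extension described above. The geometry and warps are loud stand-ins, not the authors' operators: random permutation matrices play the role of per-phase voxel displacement/deformation maps.

      import numpy as np

      rng = np.random.default_rng(1)
      n_vox, n_bins, n_phases = 64, 96, 4

      # Per-phase system matrices: a fixed geometry composed with a respiratory warp.
      geom = rng.random((n_bins, n_vox))
      warps = [rng.permutation(np.eye(n_vox)) for _ in range(n_phases)]
      A = sum(geom @ W for W in warps) / n_phases   # motion-averaged projector

      x_true = rng.random(n_vox)
      y = rng.poisson(A @ x_true * 50)              # noisy projection counts

      # Standard MLEM update, with the motion model already inside A.
      x = np.ones(n_vox)
      sens = A.T @ np.ones(n_bins)
      for _ in range(50):
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens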

  2. Symmetric Phase Only Filtering for Improved DPIV Data Processing

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2006-01-01

    The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single-exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase-only filtering is a well-known variation of Matched Spatial Filtering which, when used to process DPIV image data, yields correlation peaks that are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase-only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase-only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction is not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase-only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase-only filtering techniques and is easily implemented in standard DPIV FFT-based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal-to-noise results than traditional PIV processing. The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
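
    A sketch of the Fourier-domain correlation variants discussed above, assuming square single-exposure subregions. The symmetric normalization follows the description of splitting the magnitude removal between the two spectra; it is an interpretation, not code taken from the paper.

      import numpy as np

      def correlate(sub_a, sub_b, mode="spof"):
          # Cross-correlate two single-exposure subregions in the Fourier domain.
          Fa, Fb = np.fft.fft2(sub_a), np.fft.fft2(sub_b)
          cross = Fa * np.conj(Fb)
          if mode == "standard":               # plain FFT cross-correlation
              spectrum = cross
          elif mode == "phase":                # classic phase-only filtering
              spectrum = cross / np.maximum(np.abs(cross), 1e-12)
          else:                                # symmetric variant: divide out only
              norm = np.sqrt(np.abs(Fa) * np.abs(Fb))  # the geometric-mean magnitude
              spectrum = cross / np.maximum(norm, 1e-12)
          plane = np.real(np.fft.ifft2(spectrum))
          # The (wrapped) peak location gives the most probable displacement.
          peak = np.unravel_index(np.argmax(plane), plane.shape)
          return plane, peak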

  3. The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.

    PubMed

    Pooley, R A; McKinney, J M; Miller, D A

    2001-01-01

    A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
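
    A compact sketch of two of the processing techniques named above, temporal frame averaging and DSA mask subtraction, on hypothetical raw frame stacks. Logarithmic subtraction is assumed, as is conventional for making the subtracted signal proportional to contrast agent thickness.

      import numpy as np

      def dsa_sequence(frames, mask_frames, n_avg=4):
          # frames, mask_frames: stacks of raw fluoroscopy frames, shape (N, H, W).
          mask = np.log(mask_frames.mean(axis=0) + 1.0)    # averaged pre-contrast mask
          out = []
          for i in range(len(frames)):
              lo = max(0, i - n_avg + 1)
              avg = frames[lo:i + 1].mean(axis=0)          # temporal frame averaging
              out.append(np.log(avg + 1.0) - mask)         # subtracted (DSA) frame
          return np.stack(out)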

  4. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography).

    PubMed

    Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary

    2015-02-07

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution.

  5. The application of a unique flow modeling technique to complex combustion systems

    NASA Astrophysics Data System (ADS)

    Waslo, J.; Hasegawa, T.; Hilt, M. B.

    1986-06-01

    This paper describes the application of a unique three-dimensional water flow modeling technique to the study of complex fluid flow patterns within an advanced gas turbine combustor. The visualization technique uses light scattering, coupled with real-time image processing, to determine flow fields. Additional image processing is used to make concentration measurements within the combustor.

  6. Overlay metrology for double patterning processes

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej

    2009-03-01

    The double patterning (DPT) process is foreseen by the industry to be the main solution for the 32 nm technology node and even beyond. Meanwhile process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for Image Based Overlay (IBO), usually the optics of overlay tools are improved. It was also demonstrated that these requirements are achievable with a Diffraction Based Overlay (DBO) technique named SCOLTM [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This induces at least a three-fold increase in the number of measurements (2 for double patterned layers to the reference grid and 1 between the double patterned layers). The requirements of process compatibility, enhanced performance and large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (Litho-Etch-Litho-Etch (LELE); Litho-Freeze-Litho-Etch (LFLE), Spacer defined) is tested. The process impact on different target types is discussed (CD bias LELE, Contrast for LFLE). We compare the standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, very small imaging targets). In addition to standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques. The final overlay results obtained are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.

  7. The Socio-Moral Image Database (SMID): A novel stimulus set for the study of social, moral and affective processes.

    PubMed

    Crone, Damien L; Bode, Stefan; Murawski, Carsten; Laham, Simon M

    2018-01-01

    A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/.

  8. Development of Realistic Striatal Digital Brain (SDB) Phantom for 123I-FP-CIT SPECT and Effect on Ventricle in the Brain for Semi-quantitative Index of Specific Binding Ratio.

    PubMed

    Furuta, Akihiro; Onishi, Hideo; Nakamoto, Kenta

    This study aimed to develop a realistic striatal digital brain (SDB) phantom and to assess the effect of the ventricle on the specific binding ratio (SBR) in 123I-FP-CIT SPECT imaging. The SDB phantom was constructed from four segments (striatum, ventricle, brain parenchyma, and skull bone) using a percentile method and other image processing on T2-weighted MR images. The reference image was converted into a 128×128 matrix to align the MR images with the SPECT images. The process image was reconstructed from projection data sets generated from the reference images with added blurring, attenuation, scatter, and statistical noise. The SDB phantom was evaluated to determine the accuracy of the calculated SBR and the effect of ventricular counts on the SBR in the reference and process images. We developed and investigated the utility of the SDB phantom for 123I-FP-CIT SPECT clinical studies. The SBR calculated from the reference and process images matched the true SBR value. With ventricular counts included, the SBR was underestimated by 58.0% in the reference image and by 162% in the process images. The SDB phantom provides an extremely convenient tool for investigating the basic properties of clinical 123I-FP-CIT SPECT images, and the results suggest that the SBR is susceptible to the ventricle.
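
    For reference, a sketch of the conventional count-based SBR computation that such a phantom exercises; the function and the ROI masks are hypothetical names, with the masks assumed to come from the SDB phantom segments:

      import numpy as np

      def specific_binding_ratio(img, striatum_mask, reference_mask):
          # SBR = (C_striatum - C_reference) / C_reference on a reconstructed slice.
          # Ventricular spill-in perturbs the reference estimate, which is the
          # effect the phantom study probes.
          c_str = img[striatum_mask].mean()
          c_ref = img[reference_mask].mean()
          return (c_str - c_ref) / c_ref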

  9. Computational analysis of Pelton bucket tip erosion using digital image processing

    NASA Astrophysics Data System (ADS)

    Shrestha, Bim Prasad; Gautam, Bijaya; Bajracharya, Tri Ratna

    2008-03-01

    Erosion of hydro turbine components by sand-laden rivers is one of the biggest problems in the Himalayas. Even with sediment trapping systems, complete removal of fine sediment from water is impossible and uneconomical; hence most turbine components in Himalayan rivers are exposed to sand-laden water and subject to erosion. Pelton buckets, which are widely used in different hydropower generation plants, undergo erosion due to the continuous presence of sand particles in the water. The subsequent erosion causes an increase in splitter thickness, which is supposed to be theoretically zero. This increase in splitter thickness gives rise to back-hitting of water followed by a decrease in turbine efficiency. This paper describes the process of measuring sharp edges such as the bucket tip using digital image processing. An image of each bucket is captured, and the bucket is then run for 72 hours; the sand concentration in the water hitting the bucket is closely controlled and monitored. Afterwards, the image of the test bucket is taken under the same conditions. The process is repeated 10 times. This paper applies digital image processing that performs image enhancement in both the spatial and frequency domains, together with processes that extract attributes from images, up to and including the measurement of the splitter's tip. Image processing was done on the MATLAB 6.5 platform. The results show that edge erosion of sharp edges can be accurately detected quantitatively and that the erosion profile can be generated using image processing techniques.
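
    A minimal sketch of the kind of gradient-based edge localization such a measurement relies on (not the authors' MATLAB 6.5 pipeline); comparing the per-row edge positions before and after the 72-hour run would yield an erosion profile:

      import numpy as np
      from scipy import ndimage

      def edge_profile(img):
          # Edge strength from Sobel gradients in both directions.
          gx = ndimage.sobel(img.astype(float), axis=1)
          gy = ndimage.sobel(img.astype(float), axis=0)
          grad = np.hypot(gx, gy)
          # Per-row position of the strongest edge (the splitter tip contour).
          return np.argmax(grad, axis=1)

      # erosion = edge_profile(after_img) - edge_profile(before_img)  # in pixels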

  10. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
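
    A much-simplified 2D raster-order quilting sketch, with SSD patch matching on the overlap but without the min-cut seam, the probabilistic data aggregation, or the voxel-reuse template design of the full method:

      import numpy as np

      def quilt(train, out_size, patch=24, overlap=6, rng=None):
          # out_size should satisfy (out_size - patch) % (patch - overlap) == 0.
          rng = rng or np.random.default_rng(0)
          step = patch - overlap
          out = np.zeros((out_size, out_size))
          # Pool of candidate patches sampled from the training image.
          ys = rng.integers(0, train.shape[0] - patch, 400)
          xs = rng.integers(0, train.shape[1] - patch, 400)
          pool = np.stack([train[y:y + patch, x:x + patch] for y, x in zip(ys, xs)])
          for i in range(0, out_size - patch + 1, step):
              for j in range(0, out_size - patch + 1, step):
                  if i == 0 and j == 0:
                      out[:patch, :patch] = pool[0]
                      continue
                  # Score candidates by SSD on the already-placed overlap region.
                  region = out[i:i + patch, j:j + patch]
                  mask = np.zeros((patch, patch), bool)
                  if i > 0: mask[:overlap, :] = True
                  if j > 0: mask[:, :overlap] = True
                  ssd = (((pool - region) ** 2) * mask).sum(axis=(1, 2))
                  out[i:i + patch, j:j + patch] = pool[np.argmin(ssd)]
          return out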

  11. Radar image and data fusion for natural hazards characterisation

    USGS Publications Warehouse

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong

    2010-01-01

    Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.

  12. Color Facsimile.

    DTIC Science & Technology

    1995-02-01

    modification of existing JPEG compression and decompression software available from Independent JPEG Users Group to process CIELAB color images and to use...externally specified Huffman tables. In addition, a conversion program was written to convert CIELAB color space images to red, green, blue color space

  13. Use of ERTS data for a multidisciplinary analysis of Michigan resources. [forests, agriculture, soils, and landforms

    NASA Technical Reports Server (NTRS)

    Andersen, A. L.; Myers, W. L.; Safir, G.; Whiteside, E. P. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The results of this investigation of ratioing simulated ERTS spectral bands and several non-ERTS bands (all collected by an airborne multispectral scanner) indicate that significant terrain information is available from band-ratio images. Ratio images, which are based on the relative spectral changes that occur from one band to another, are useful for enhancing differences and aiding the image interpreter in identifying and mapping the distribution of such terrain elements as seedling crops, all bare soil, organic soil, mineral soil, forest and woodlots, and marsh areas. In addition, the ratio technique may be useful for computer processing to obtain recognition images of large areas at lower costs than with statistical decision rules. The results of this study of ratio processing of aircraft MSS data will be useful for future processing and evaluation of ERTS-1 data for soil and landform studies. Additionally, the results of ratioing spectral bands other than those currently collected by ERTS-1 suggest that some other bands (particularly a thermal band) would be useful in future satellites.
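
    The ratio enhancement itself is a one-line operation. A small sketch with hypothetical band arrays; the vegetation-style NIR/red ratio is an illustration, not necessarily one of the study's band pairs:

      import numpy as np

      def band_ratio(band_a, band_b, eps=1e-6):
          # Ratio image: emphasizes relative spectral change between two bands,
          # suppressing common illumination/topographic effects.
          return band_a.astype(float) / (band_b.astype(float) + eps)

      # Example with coregistered scanner bands as 2-D arrays:
      # veg = band_ratio(nir, red)   # high over vegetation, near 1 over bare soil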

  14. Image processing of metal surface with structured light

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Feng, Chang; Wang, Congzheng

    2014-09-01

    In a structured light vision measurement system, the ideal image of the structured light stripe contains, in addition to the black background, only the gray-level information at the position of the stripe. However, the actual image contains image noise, complex background, and other content that does not belong to the stripe, which interferes with the useful information. To extract the stripe center on metal surfaces accurately, a new processing method is presented. Adaptive median filtering preliminarily removes the noise, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference-image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used that combines the best features of the Laplacian and Sobel operators. Morphological opening and closing operations are used to compensate for the loss of information. Experimental results show that this method is effective in image processing, not only suppressing noise but also heightening contrast, which benefits the subsequent processing.
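
    A rough sketch of this pre-processing chain, with a plain (non-adaptive) median filter standing in for the adaptive one and ad hoc sharpening weights:

      import numpy as np
      from scipy import ndimage

      def stripe_preprocess(strip_img, background_img):
          # Median filtering suppresses impulse noise.
          img = ndimage.median_filter(strip_img.astype(float), size=3)
          # Difference image removes the fixed camera/ambient pattern.
          diff = img - background_img.astype(float)
          # Combined Laplacian/Sobel sharpening (weights are ad hoc).
          lap = ndimage.laplace(diff)                       # fine detail
          gx, gy = ndimage.sobel(diff, 1), ndimage.sobel(diff, 0)
          edges = np.hypot(gx, gy)                          # strong edge response
          return diff - 0.5 * lap + 0.5 * edges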

  15. Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.

    PubMed

    Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott

    2007-01-01

    The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.

  16. High-throughput imaging of heterogeneous cell organelles with an X-ray laser (CXIDB ID 25)

    DOE Data Explorer

    Hantke, Max F.

    2014-11-17

    Preprocessed detector images that were used for the paper "High-throughput imaging of heterogeneous cell organelles with an X-ray laser". The CXI file contains the entire recorded data, including both hits and blanks. It also includes down-sampled images and LCLS machine parameters. Additionally, the Cheetah configuration file used to create the pre-processed data is attached.

  17. Impacting key performance indicators in an academic MR imaging department through process improvement.

    PubMed

    Recht, Michael; Macari, Michael; Lawson, Kirk; Mulholland, Tom; Chen, David; Kim, Danny; Babb, James

    2013-03-01

    The aim of this study was to evaluate all aspects of workflow in a large academic MRI department to determine whether process improvement (PI) efforts could improve key performance indicators (KPIs). KPI metrics in the investigators' MR imaging department include daily inpatient backlogs, on-time performance for outpatient examinations, examination volumes, appointment backlogs for pediatric anesthesia cases, and scan duration relative to time allotted for an examination. Over a 3-week period in April 2011, key members of the MR imaging department (including technologists, nurses, schedulers, physicians, and administrators) tracked all aspects of patient flow through the department, from scheduling to examination interpretation. Data were analyzed by the group to determine where PI could improve KPIs. Changes to MRI workflow were subsequently implemented, and KPIs were compared before (January 1, 2011, to April 30, 2011) and after (August 1, 2011, to December 31, 2011) using Mann-Whitney and Fisher's exact tests. The data analysis done during this PI led to multiple changes in the daily workflow of the MR department. In addition, a new sense of teamwork and empowerment was established within the MR staff. All of the measured KPIs showed statistically significant changes after the reengineering project. Intradepartmental PI efforts can significantly affect KPI metrics within an MR imaging department, making the process more patient centered. In addition, the process allowed significant growth without the need for additional equipment or personnel. Copyright © 2013 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  18. Critical object recognition in millimeter-wave images with robustness to rotation and scale.

    PubMed

    Mohammadzade, Hoda; Ghojogh, Benyamin; Faezi, Sina; Shabany, Mahdi

    2017-06-01

    Locating critical objects is crucial in various security applications and industries. For example, in security applications, such as in airports, these objects might be hidden or covered under shields or secret sheaths. Millimeter-wave images can be utilized to discover and recognize critical objects in such hidden cases without any health risk, owing to their non-ionizing nature. However, millimeter-wave images usually have waves in and around the detected objects, making object recognition difficult. Thus, regular image processing and classification methods cannot be used for these images, and additional pre-processing and classification methods should be introduced. This paper proposes a novel pre-processing method for canceling rotation and scale using principal component analysis. In addition, a two-layer classification method is introduced and utilized for recognition. Moreover, a large dataset of millimeter-wave images is collected and created for experiments. Experimental results show that a typical classification method such as support vector machines can recognize 45.5% of a type of critical object at a 34.2% false alarm rate (FAR), which is drastically poor recognition. The same method within the proposed recognition framework achieves a 92.9% recognition rate at 0.43% FAR, which indicates a highly significant improvement. The significant contribution of this work is to introduce a new method for analyzing millimeter-wave images based on machine vision and learning approaches, which is not yet widely noted in the field of millimeter-wave image analysis.
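
    A minimal sketch of rotation and scale cancellation by PCA on a segmented object's pixel coordinates, which is the general idea named above; the threshold and normalization conventions are assumptions, and PCA's sign/axis ambiguities are left unresolved:

      import numpy as np

      def normalize_pose(img, thresh=0.5):
          # Coordinates of the segmented object's pixels.
          ys, xs = np.nonzero(img > thresh)
          pts = np.stack([xs, ys], axis=1).astype(float)
          pts -= pts.mean(axis=0)                 # cancel translation
          evals, evecs = np.linalg.eigh(np.cov(pts.T))
          pts = pts @ evecs                       # rotate principal axes onto x/y
          pts /= np.sqrt(evals.max())             # cancel scale by dominant spread
          return pts                              # pose-normalized point cloud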

  19. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced into the process by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information theory based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge detection algorithm is regarded to have high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods. There has been no common tool that can be used to evaluate the performance of the different algorithms and to guide the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides such a tool, allowing us to compare the different edge detection operators in a common environment.

  20. Urological diagnosis using clinical PACS

    NASA Astrophysics Data System (ADS)

    Mills, Stephen F.; Spetz, Kevin S.; Dwyer, Samuel J., III

    1995-05-01

    Urological diagnosis using fluoroscopy images has traditionally been performed using radiographic films. Images are generally acquired in conjunction with the application of a contrast agent, processed to create analog films, and inspected to ensure satisfactory image quality prior to being provided to a radiologist for reading. In the case of errors the entire process must be repeated. In addition, the radiologist must then often go to a particular reading room, possibly in a remote part of the healthcare facility, to read the images. The integration of digital fluoroscopy modalities with clinical PACS has the potential to significantly improve the urological diagnosis process by providing high-speed access to images at a variety of locations within a healthcare facility without costly film processing. The PACS additionally provides a cost-effective and reliable means of long-term storage and allows several medical users to simultaneously view the same images at different locations. The installation of a digital data interface between the existing clinically operational PACS at the University of Virginia Health Sciences Center and a digital urology fluoroscope is described. Preliminary user interviews that have been conducted to determine the clinical effectiveness of PACS workstations for urological diagnosis are discussed. The specific suitability of the workstation medium is discussed, as are overall advantages and disadvantages of the hardcopy and softcopy media in terms of efficiency, timeliness and cost. Throughput metrics and some specific parameters of gray-scale viewing stations and the expected system impacts resulting from the integration of a urology fluoroscope with PACS are also discussed.

  1. A methodology for evaluation of an interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.

    1987-01-01

    Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.

  2. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.

  3. Parallel Processing of Images in Mobile Devices using BOINC

    NASA Astrophysics Data System (ADS)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computers grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) the execution of programs in mobile devices required to modify the code to insert calls to the BOINC API, and b) the division of the image among the mobile devices as well as its merging required additional code in some BOINC components. This article presents answers to these four challenges.

  4. Extending Single-Molecule Microscopy Using Optical Fourier Processing

    PubMed Central

    2015-01-01

    This article surveys the recent application of optical Fourier processing to the long-established but still expanding field of single-molecule imaging and microscopy. A variety of single-molecule studies can benefit from the additional image information that can be obtained by modulating the Fourier, or pupil, plane of a widefield microscope. After briefly reviewing several current applications, we present a comprehensive and computationally efficient theoretical model for simulating single-molecule fluorescence as it propagates through an imaging system. Furthermore, we describe how phase/amplitude-modulating optics inserted in the imaging pathway may be modeled, especially at the Fourier plane. Finally, we discuss selected recent applications of Fourier processing methods to measure the orientation, depth, and rotational mobility of single fluorescent molecules. PMID:24745862
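
    A small wave-optics sketch of Fourier-plane (pupil) modulation in the sense surveyed above: transform to the pupil, apply a phase/amplitude mask, transform back. The astigmatic mask and its strength are toy values, chosen because astigmatism is a common way to encode an emitter's axial position; none of this is taken from the paper's model.

      import numpy as np

      def apply_pupil_mask(field, mask):
          # 4f-style propagation: image plane -> pupil (FT) -> mask -> image (IFT).
          pupil = np.fft.fftshift(np.fft.fft2(field))
          return np.fft.ifft2(np.fft.ifftshift(pupil * mask))

      n = 256
      field = np.zeros((n, n))
      field[n // 2, n // 2] = 1.0                    # point emitter
      fy, fx = np.indices((n, n)) - n / 2
      mask = np.exp(1j * 2e-4 * (fx**2 - fy**2))     # toy astigmatic phase mask
      psf = np.abs(apply_pupil_mask(field, mask))**2 # shaped point-spread function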

  5. Extending single-molecule microscopy using optical Fourier processing.

    PubMed

    Backer, Adam S; Moerner, W E

    2014-07-17

    This article surveys the recent application of optical Fourier processing to the long-established but still expanding field of single-molecule imaging and microscopy. A variety of single-molecule studies can benefit from the additional image information that can be obtained by modulating the Fourier, or pupil, plane of a widefield microscope. After briefly reviewing several current applications, we present a comprehensive and computationally efficient theoretical model for simulating single-molecule fluorescence as it propagates through an imaging system. Furthermore, we describe how phase/amplitude-modulating optics inserted in the imaging pathway may be modeled, especially at the Fourier plane. Finally, we discuss selected recent applications of Fourier processing methods to measure the orientation, depth, and rotational mobility of single fluorescent molecules.

  6. Processing Digital Imagery to Enhance Perceptions of Realism

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2003-01-01

    Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
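
    A bare-bones sketch of the multi-scale retinex core (log image minus log surround, averaged over scales); the color restoration step and the gain/offset mapping that complete MSRCR are omitted, and the scale choices are conventional guesses:

      import numpy as np
      from scipy import ndimage

      def msr(img, sigmas=(15, 80, 250)):
          # Multi-scale retinex: average of log(image) - log(surround) over scales.
          img = img.astype(float) + 1.0
          out = np.zeros_like(img)
          for s in sigmas:
              surround = ndimage.gaussian_filter(img, sigma=s)
              out += np.log(img) - np.log(surround + 1e-12)
          return out / len(sigmas)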

  7. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercialized image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters and desired information, and process this input to come out with a workflow to quickly obtain the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  8. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding mode) and would govern operation in the range-finding mode.

  9. Scatter measurement and correction method for cone-beam CT based on single grating scan

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scan method using a single grating and the design requirements of the grating are analyzed and established. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle interpolation method for scatter images to reduce scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
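
    A sketch of the angle interpolation idea, assuming plain linear interpolation between scatter images measured at coarse angular intervals; the paper's exact interpolation scheme is not specified here, so this is only an illustration:

      import numpy as np

      def interpolate_scatter(scatter_stack, coarse_deg, fine_deg):
          # scatter_stack: (K, H, W) scatter images measured every coarse_deg
          # (e.g. 30 deg over a full rotation); returns one estimate per fine view.
          K = scatter_stack.shape[0]
          out = []
          for ang in np.arange(0.0, K * coarse_deg, fine_deg):
              pos = (ang / coarse_deg) % K
              i0, w = int(pos) % K, pos - int(pos)
              i1 = (i0 + 1) % K                  # wrap around the rotation
              out.append((1 - w) * scatter_stack[i0] + w * scatter_stack[i1])
          return np.stack(out)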

  10. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple production. In addition, theory predicts that the technology has high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on the image capturing and processing system of this new optical-readout uncooled infrared imaging technology. The system consists of software and hardware. We build the core image processing hardware platform on TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class mini-driver and the network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system has high capture resolution and fast processing speed. The network transmission speed is up to 100 Mbps.

  11. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography)

    PubMed Central

    Siegel, Nisan; Storrie, Brian; Bruce, Marc

    2016-01-01

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution. PMID:26839443

  12. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
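
    As an illustration of a similarity measure between hyperspectral image points, and of how additive noise perturbs such measures, a spectral-angle sketch; this is the standard spectral angle, not necessarily the new measure proposed in the paper:

      import numpy as np

      def spectral_angle(a, b):
          # Angle between two spectral vectors; a small angle means similar
          # materials, and the measure ignores a common illumination scale factor.
          cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
          return np.arccos(np.clip(cos, -1.0, 1.0))

      # Additive uncorrelated noise biases distance-type measures; quick check:
      rng = np.random.default_rng(0)
      s = rng.random(200)                       # toy spectrum
      noisy = s + rng.normal(0, 0.05, s.shape)
      print(spectral_angle(s, noisy))           # nonzero purely due to noise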

  13. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were imaged on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provide an objective image-quality assessment. Optimal image-quality was maintained at a dose reduction of 61% with MLT(S) optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. Software impact on image quality was found significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  14. Image processing techniques and applications to the Earth Resources Technology Satellite program

    NASA Technical Reports Server (NTRS)

    Polge, R. J.; Bhagavan, B. K.; Callas, L.

    1973-01-01

    The Earth Resources Technology Satellite system is studied, with emphasis on sensors, data processing requirements, and image data compression using the Fast Fourier and Hadamard transforms. The ERTS-A system and the fundamentals of remote sensing are discussed. Three user applications (forestry, crops, and rangelands) are selected and their spectral signatures are described. It is shown that additional sensors are needed for rangeland management. An on-board information processing system is recommended to reduce the amount of data transmitted.
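
    A tiny self-contained sketch of transform-domain compression with the fast Walsh-Hadamard transform, one of the two transforms named above; the signal, length, and keep-16 rule are arbitrary illustration values:

      import numpy as np

      def fwht(x):
          # Fast Walsh-Hadamard transform of a length-2^k vector (unnormalized).
          x = x.astype(float).copy()
          h = 1
          while h < len(x):
              for i in range(0, len(x), h * 2):
                  for j in range(i, i + h):
                      x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
              h *= 2
          return x

      # Compression sketch: transform, keep the largest coefficients, invert.
      sig = np.sin(np.linspace(0, 6, 64))
      coeffs = fwht(sig)
      coeffs[np.abs(coeffs) < np.sort(np.abs(coeffs))[-16]] = 0  # keep 16 of 64
      approx = fwht(coeffs) / len(sig)          # inverse = forward / N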

  15. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collette, R.; King, J.; Keiser, Jr., D.

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
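
    The Sauvola threshold mentioned above has a standard closed form; a sketch using local means and variances from box filters (window size, k, and R are typical defaults, not necessarily the study's values, and the void-statistics lines are a hypothetical usage):

      import numpy as np
      from scipy import ndimage

      def sauvola(img, window=25, k=0.2, R=128.0):
          # Sauvola adaptive threshold: t = m * (1 + k * ((s / R) - 1)) per pixel,
          # with m and s the local mean and standard deviation in `window`.
          img = img.astype(float)
          mean = ndimage.uniform_filter(img, window)
          sq_mean = ndimage.uniform_filter(img * img, window)
          std = np.sqrt(np.maximum(sq_mean - mean**2, 0))
          thresh = mean * (1 + k * (std / R - 1))
          return img > thresh          # True = above the local threshold

      # Void statistics from the mask (voids darker than fuel, hence the inversion):
      # labels, n = ndimage.label(~sauvola(micrograph))
      # sizes = ndimage.sum(np.ones_like(labels), labels, range(1, n + 1))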

  16. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  17. Image-Guided Abdominal Surgery and Therapy Delivery

    PubMed Central

    Galloway, Robert L.; Herrell, S. Duke; Miga, Michael I.

    2013-01-01

    Image-Guided Surgery has become the standard of care in intracranial neurosurgery providing more exact resections while minimizing damage to healthy tissue. Moving that process to abdominal organs presents additional challenges in the form of image segmentation, image to physical space registration, organ motion and deformation. In this paper, we present methodologies and results for addressing these challenges in two specific organs: the liver and the kidney. PMID:25077012

  18. Effect of wavefront aberrations on a focused plenoptic imaging system: a wave optics simulation approach

    NASA Astrophysics Data System (ADS)

    Turola, Massimo; Meah, Chris J.; Marshall, Richard J.; Styles, Iain B.; Gruppetta, Stephen

    2015-06-01

    A plenoptic imaging system records simultaneously the intensity and the direction of the rays of light. This additional information allows many post-processing features such as 3D imaging, synthetic refocusing and, potentially, the evaluation of wavefront aberrations. In this paper the effects of low-order aberrations on a simple plenoptic imaging system have been investigated using a wave optics simulation approach.

  19. Method for growing a back surface contact on an imaging detector used in conjunction with back illumination

    NASA Technical Reports Server (NTRS)

    Blacksberg, Jordana (Inventor); Hoenk, Michael Eugene (Inventor); Nikzad, Shouleh (Inventor)

    2010-01-01

    A method is provided for growing a back surface contact on an imaging detector used in conjunction with back illumination. In operation, an imaging detector is provided. Additionally, a back surface contact (e.g. a delta-doped layer, etc.) is grown on the imaging detector utilizing a process that is performed at a temperature less than 450 degrees Celsius.

  20. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    NASA Astrophysics Data System (ADS)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
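
    A one-iteration sketch of the correction step under the stated multiplicative model: estimate a smooth multiplicative field from background pixels and divide it out. The normalized-convolution smoothing here is a simple stand-in for the paper's fast intra-class-variance minimization, and the function name and sigma are assumptions:

      import numpy as np
      from scipy import ndimage

      def correct_shading(img, cell_mask, sigma=50):
          img = img.astype(float)
          # Background pixels only (cells excluded); their true intensity
          # should be constant, so any smooth variation is shading.
          bg = np.where(cell_mask, np.nan, img)
          filled = np.nan_to_num(bg)
          weight = (~np.isnan(bg)).astype(float)
          # Normalized convolution: smooth while ignoring the masked cells.
          field = ndimage.gaussian_filter(filled, sigma) / (
              ndimage.gaussian_filter(weight, sigma) + 1e-12)
          field /= field.mean()                  # unit-mean multiplicative field
          return img / (field + 1e-12)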

  1. Astronomy in the Cloud: Using MapReduce for Image Co-Addition

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-03-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
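
    A schematic of co-addition in MapReduce form, with a tiny in-process driver standing in for Hadoop's shuffle; the keys, weights, and toy data are illustrative only, not the authors' pipeline:

      import numpy as np
      from collections import defaultdict

      def map_phase(image_id, pixels):
          # Map: for one astrometrically registered image, emit
          # (sky-grid key, (value, weight)) pairs.
          for tile_key, value in pixels:
              yield tile_key, (value, 1.0)

      def reduce_phase(tile_key, values):
          # Reduce: per-pixel summation -> weighted co-added value.
          total = sum(v for v, _ in values)
          weight = sum(w for _, w in values)
          return tile_key, total / weight

      # Driver standing in for the shuffle stage:
      groups = defaultdict(list)
      for img_id, pix in [("a", [((0, 0), 1.0)]), ("b", [((0, 0), 3.0)])]:
          for key, val in map_phase(img_id, pix):
              groups[key].append(val)
      coadd = dict(reduce_phase(k, v) for k, v in groups.items())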

  2. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets; at the time of writing, they have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  3. The Socio-Moral Image Database (SMID): A novel stimulus set for the study of social, moral and affective processes

    PubMed Central

    Bode, Stefan; Murawski, Carsten; Laham, Simon M.

    2018-01-01

    A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/. PMID:29364985

  4. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients.

    PubMed

    Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong

    2013-01-07

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired on an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and lengthy. As a result, global position errors and local deformable errors arise from respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer, to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to reduce position errors between the PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, offers fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an external force based on the gradient of mutual information (GMI) between the two images, making it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. Quantitative and qualitative analysis of esophageal cancer cases shows that the proposed registration scheme improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
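
    As a rough illustration of the coarse-to-fine demons scheme, the sketch below uses SimpleITK's stock intensity-based demons filter with a manually built shrink pyramid; displacement vectors in SimpleITK live in physical units, so a field estimated at a coarse level can simply be resampled onto the finer grid as the next level's initialization. Note that the stock filter assumes mono-modal intensities and stands in here for the GMI-driven external force of the paper; the file names and parameter values are placeholders.

        import SimpleITK as sitk

        fixed = sitk.ReadImage("ct.nii", sitk.sitkFloat32)    # hypothetical paths
        moving = sitk.ReadImage("pet.nii", sitk.sitkFloat32)

        demons = sitk.DemonsRegistrationFilter()
        demons.SetNumberOfIterations(50)
        demons.SetStandardDeviations(1.5)   # Gaussian regularization of the field

        field = None
        for shrink in (4, 2, 1):            # coarse-to-fine pyramid
            f = sitk.Shrink(fixed, [shrink] * fixed.GetDimension())
            m = sitk.Shrink(moving, [shrink] * moving.GetDimension())
            if field is None:
                field = demons.Execute(f, m)
            else:
                field = sitk.Resample(field, f)      # upsample previous estimate
                field = demons.Execute(f, m, field)

        tfm = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
        aligned = sitk.Resample(moving, fixed, tfm, sitk.sitkLinear, 0.0)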

  5. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients

    PubMed Central

    Jin, Shuo; Li, Dengwang; Yin, Yong

    2013-01-01

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired on an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and lengthy. As a result, global position errors and local deformable errors arise from respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer, to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to reduce position errors between the PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, offers fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an external force based on the gradient of mutual information (GMI) between the two images, making it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. Quantitative and qualitative analysis of esophageal cancer cases shows that the proposed registration scheme improves registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications. PACS numbers: 87.57.nj, 87.57.Q-, 87.57.uk PMID:23318381

  6. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up the querying of imaging measurements; and (3) display high-level images in three dimensions using real-world coordinates. In addition, M-DIP can be used on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that realizes user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services.

  7. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up the querying of imaging measurements; and (3) display high-level images in three dimensions using real-world coordinates. In addition, M-DIP can be used on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer that realizes user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a visualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  8. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.

    PubMed

    Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina

    2016-12-01

    Digital retinal imaging is a challenging screening method for which effective, robust, and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection method for diabetic retinopathy and maculopathy in eye fundus images, employing fuzzy image processing techniques. The paper first reviews existing systems for diabetic retinopathy screening, with an emphasis on maculopathy detection methods. The proposed medical decision support system consists of four parts: image acquisition; image preprocessing, including localisation of four retinal structures; feature extraction; and classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform, and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for localising the macula region in order to detect maculopathy. In addition to the proposed detection system, the paper introduces a novel online dataset, describing its collection, the expert diagnosis process, and the advantages of the online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
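
    The Circular Hough Transform step can be illustrated with scikit-image, which the sketch below uses to localise a circular retinal structure such as the optic disc in a fundus photograph; the file name and the radius range are assumptions made for illustration, not values from the paper.

        import numpy as np
        from skimage import color, feature, io, transform

        img = color.rgb2gray(io.imread("fundus.png"))     # hypothetical image path
        edges = feature.canny(img, sigma=2.0)             # edge map used for voting

        radii = np.arange(40, 90, 5)                      # plausible disc radii (px)
        hough = transform.hough_circle(edges, radii)
        accums, cx, cy, r = transform.hough_circle_peaks(
            hough, radii, total_num_peaks=1)              # strongest circle wins
        print("optic disc candidate at", cx[0], cy[0], "radius", r[0])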

  9. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, and aircraft in flight. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  10. Green's function and image system for the Laplace operator in the prolate spheroidal geometry

    NASA Astrophysics Data System (ADS)

    Xue, Changfeng; Deng, Shaozhong

    2017-01-01

    In the present paper, electrostatic image theory is studied for Green's function for the Laplace operator in the case where the fundamental domain is either the exterior or the interior of a prolate spheroid. In either case, an image system is developed to consist of a point image inside the complement of the fundamental domain and an additional symmetric continuous surface image over a confocal prolate spheroid outside the fundamental domain, although the process of calculating such an image system is easier for the exterior than for the interior Green's function. The total charge of the surface image is zero and its centroid is at the origin of the prolate spheroid. In addition, if the source is on the focal axis outside the prolate spheroid, then the image system of the exterior Green's function consists of a point image on the focal axis and a line image on the line segment between the two focal points.

  11. Medical image integrity control and forensics based on watermarking--approximating local modifications and identifying global image alterations.

    PubMed

    Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch

    2011-01-01

    In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g., removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g., lossy compression, filtering, …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked into regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated by determining the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.

  12. MicroCT parameters for multimaterial elements assessment

    NASA Astrophysics Data System (ADS)

    de Araújo, Olga M. O.; Silva Bastos, Jaqueline; Machado, Alessandra S.; dos Santos, Thaís M. P.; Ferreira, Cintia G.; Rosifini Alves Claro, Ana Paula; Lopes, Ricardo T.

    2018-03-01

    Microtomography is a non-destructive testing technique for quantitative and qualitative analysis. The investigation of multimaterial elements with large density differences can result in artifacts that degrade image quality, depending on the combination of additional filters used. The aim of this study is to select the parameters most appropriate for the analysis of bone tissue with a metallic implant. The results show MCNPX-code simulations of the energy distribution without an additional filter and with aluminum, copper, and brass filters, together with the respective reconstructed images, demonstrating the importance of these parameter choices in the image acquisition process in computed microtomography.

  13. Reaching back: the relative strength of the retroactive emotional attentional blink

    PubMed Central

    Ní Choisdealbha, Áine; Piech, Richard M.; Fuller, John K.; Zald, David H.

    2017-01-01

    Visual stimuli with emotional content appearing in close temporal proximity either before or after a target stimulus can hinder conscious perceptual processing of the target via an emotional attentional blink (EAB). This occurs for targets that appear after the emotional stimulus (forward EAB) and for those appearing before the emotional stimulus (retroactive EAB). Additionally, the traditional attentional blink (AB) occurs because detection of any target hinders detection of a subsequent target. The present study investigated the relations between these different attentional processes. Rapid sequences of landscape images were presented to thirty-one male participants with occasional landscape targets (rotated images). For the forward EAB, emotional or neutral distractor images of people were presented before the target; for the retroactive EAB, such images were also targets and presented after the landscape target. In the latter case, this design allowed investigation of the AB as well. Erotic and gory images caused more EABs than neutral images, but there were no differential effects on the AB. This pattern is striking because while using different target categories (rotated landscapes, people) appears to have eliminated the AB, the retroactive EAB still occurred, offering additional evidence for the power of emotional stimuli over conscious attention. PMID:28255172

  14. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) with a two-level architecture. These features enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption, and extensive applications.
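
    For context, the linear system in question is the discrete Poisson equation: inside the edited region the result must reproduce the source image's Laplacian, with Dirichlet boundary values taken from the target image. The sketch below solves it with plain Jacobi iterations in NumPy purely to make the system concrete; it is far slower than the paper's GPU matrix-decomposition solver, and it assumes the boolean mask does not touch the image border (np.roll wraps around).

        import numpy as np

        def poisson_blend(source, target, mask, n_iter=2000):
            """Seamless cloning by Jacobi iteration on the discrete Poisson equation."""
            lap = (-4 * source
                   + np.roll(source, 1, 0) + np.roll(source, -1, 0)
                   + np.roll(source, 1, 1) + np.roll(source, -1, 1))  # guidance field
            out = target.astype(float).copy()   # boundary values come from the target
            for _ in range(n_iter):
                nbrs = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                        + np.roll(out, 1, 1) + np.roll(out, -1, 1))
                out[mask] = (nbrs[mask] - lap[mask]) / 4.0            # Jacobi update
            return out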

  15. Single-random-phase holographic encryption of images

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.

    2017-02-01

    In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, while the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bi-directional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to different forms of plaintext attacks, which are commonly used to deduce the encryption key in existing holographic encryption processes. In addition, both the encryption and decryption processes are simple and easy to implement.

  16. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications

    PubMed Central

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms and computer-aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two-part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. The companion paper, the first part of this review, examines the research relevant to the image acquisition process. This second part reviews the research on post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  17. An optical processor for object recognition and tracking

    NASA Technical Reports Server (NTRS)

    Sloan, J.; Udomkesmalee, S.

    1987-01-01

    The design and development of a miniaturized optical processor that performs real-time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.

  18. In vivo assessment of the structure of skin microcirculation by reflectance confocal-laser-scanning microscopy

    NASA Astrophysics Data System (ADS)

    Sugata, Keiichi; Osanai, Osamu; Kawada, Hiromitsu

    2012-02-01

    One of the major roles of the skin microcirculation is to supply oxygen and nutrition to the surrounding tissue. Despite the close relationship between the microcirculation and the surrounding tissue, there are few non-invasive methods that can evaluate both at the same site. We visualized microcapillary plexus structures in human skin using in vivo reflectance confocal-laser-scanning microscopy (CLSM) with a Vivascope 3000® (Lucid Inc., USA) and ImageJ software (National Institutes of Health, USA) for video image processing. CLSM is a non-invasive technique that can visualize the internal structure of the skin at the cellular level. In addition to internal morphological information such as the extracellular matrix, our method reveals capillary structures up to the depth of the subpapillary plexus at the same site, without the need for additional optical systems. Video images at specific depths of the inner forearm skin were recorded. By creating frame-to-frame difference images from the video using off-line image processing, we obtained images that emphasize intensity changes caused by the movement of blood cells. Merging images from different depths of the skin elucidates the three-dimensional fine line-structure of the microcirculation. Overall, our results show the feasibility of a non-invasive, high-resolution imaging technique to characterize the skin microcirculation and the surrounding tissue.
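
    The frame-differencing step is simple enough to sketch directly: subtracting consecutive frames cancels static tissue and leaves intensity changes from moving blood cells, and a temporal projection over the difference stack traces the capillary paths. The projection choice (a maximum over time) is an illustrative assumption, not necessarily the authors' merge rule.

        import numpy as np

        def capillary_map(video, offset=1):
            """video: (T, H, W) stack of grayscale frames from one depth."""
            frames = video.astype(float)
            diff = np.abs(frames[offset:] - frames[:-offset])  # motion-only signal
            return diff.max(axis=0)  # merge over time to trace moving blood cells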

  19. LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E. (Principal Investigator)

    1983-01-01

    No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis, and integration techniques can be used effectively with LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies used for reactor cooling. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.

  20. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

    PubMed Central

    Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421

  1. Color sensitivity of the multi-exposure HDR imaging process

    NASA Astrophysics Data System (ADS)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which influence the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool for measuring physical quantities such as radiance and luminance.
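
    The irradiance-recovery step mentioned above is commonly written as a weighted average over the exposure stack, as in the Debevec-Malik formulation sketched below. This is a reference implementation of that standard formula, not the calibration pipeline of this paper; the response curve g and the hat-shaped weighting are assumptions of the standard method.

        import numpy as np

        def merge_hdr(ldr_stack, times, g):
            """ldr_stack: (N, H, W) 8-bit exposures; times: N exposure durations;
            g: length-256 response curve mapping pixel value to log exposure.
            Returns the log-irradiance map as the usual weighted average."""
            z = ldr_stack.astype(int)
            w = np.minimum(z, 255 - z) + 1e-6    # hat weights favor mid-range pixels
            log_e = g[z] - np.log(np.asarray(times))[:, None, None]
            return (w * log_e).sum(axis=0) / w.sum(axis=0)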

  2. SKL algorithm based fabric image matching and retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Yichen; Zhang, Xueqin; Ma, Guojian; Sun, Rongqing; Dong, Deping

    2017-07-01

    Intelligent computer image processing technology provides convenience and possibilities for designers. Shape analysis can be achieved by extracting SURF features; however, the high dimensionality of SURF features lowers the matching speed. To solve this problem, this paper proposes a fast fabric image matching algorithm based on SURF, K-means, and the LSH algorithm. By constructing a bag of visual words with the K-means algorithm and forming a feature histogram for each image, the dimensionality of the SURF features is reduced in the first step. Then, with the help of the LSH algorithm, the features are encoded and the dimensionality is further reduced. In addition, indexes for each image and each class of images are created, and the number of candidate matches is reduced using LSH hash buckets. Experiments on a fabric image database show that this algorithm speeds up the matching and retrieval process, with results that satisfy the accuracy and speed requirements of dress designers.
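
    The bag-of-visual-words step can be sketched with scikit-learn: cluster all local descriptors into a vocabulary, then represent each image as a normalized histogram of word occurrences. The descriptor arrays are assumed to come from a SURF extractor (not shown), and the vocabulary size of 64 is an arbitrary illustrative choice.

        import numpy as np
        from sklearn.cluster import KMeans

        def build_histograms(descriptor_sets, n_words=64):
            """descriptor_sets: list of (n_i, d) local-feature arrays, one per image."""
            vocab = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptor_sets))
            hists = []
            for desc in descriptor_sets:
                words = vocab.predict(desc)                        # quantize features
                h = np.bincount(words, minlength=n_words).astype(float)
                hists.append(h / h.sum())                          # word histogram
            return np.array(hists), vocab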

  3. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method

    PubMed Central

    Lu, Zhaolin

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation in particle size, from micrometers to millimeters, powders greater than 250 μm were photographed with a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles larger and smaller than the nominal size of 250 μm, respectively. In addition, an image segmentation algorithm based on particle geometric information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method is suitable for analysing the particle size and shape distributions of ground biomass materials and resolves the size inconsistencies of sieving analysis. PMID:28298925

  4. Photographic techniques for enhancing ERTS MSS data for geologic information

    NASA Technical Reports Server (NTRS)

    Yost, E.; Geluso, W.; Anderson, R.

    1974-01-01

    Satellite multispectral black-and-white photographic negatives of Luna County, New Mexico, obtained by ERTS on 15 August and 2 September 1973, were precisely reprocessed into positive images and analyzed in an additive color viewer. In addition, an isoluminous (uniform brightness) color rendition of the image was constructed. The isoluminous technique emphasizes subtle differences between multispectral bands by greatly enhancing the color of the superimposed composite of all bands and eliminating the effects of brightness caused by sloping terrain. Basaltic lava flows were more accurately displayed in the precision processed multispectral additive color ERTS renditions than on existing state geological maps. Malpais lava flows and small basaltic occurrences not appearing on existing geological maps were identified in ERTS multispectral color images.

  5. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x-ray registration and differencing that results in more efficient compression. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  6. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  7. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.

  8. ISLE (Image and Signal Lisp Environment): A functional language interface for signal and image processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azevedo, S.G.; Fitch, J.P.

    1987-05-01

    Conventional software interfaces that utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.

  9. Reducing uncertainty in wind turbine blade health inspection with image processing techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huiyi

    Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. Therefore, it is important to automate the inspection process and to minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital for assessing rotor blade and turbine reliability over the 20-year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and established the capability of numerically detecting cracks as small as hairline thickness. The second part identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and in the image processing methods.

  10. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    ...for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful... from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or...

  11. Preliminary Impact Crater Dimensions on 433 Eros from the NEAR Laser Rangefinder and Imager

    NASA Technical Reports Server (NTRS)

    Barnouin-Jha, O. S.; Garvin, J. B.; Cheng, A. F.; Zuber, M.; Smith, D.; Neumann, G.; Murchie, S.; Veverka, J.; Robinson, M.

    2001-01-01

    We report preliminary observations obtained from the NEAR Laser Rangefinder (NLR) and NEAR Multispectral Imager (MSI) for approx. 300 craters seen on 433 Eros to address Eros crater formation and degradation processes. Additional information is contained in the original extended abstract.

  12. IfA Catalogs of Solar Data Products

    NASA Astrophysics Data System (ADS)

    Habbal, Shadia R.; Scholl, I.; Morgan, H.

    2009-05-01

    This paper presents a new set of online catalogs of solar data products. The IfA Catalogs of Solar Data Products were developed to enhance the scientific output of coronal images acquired from the ground and from space, starting with the SoHO era. Image processing tools have played a significant role in the production of these catalogs [Morgan et al. 2006, 2008, Scholl and Habbal 2008]. Two catalogs are currently available at http://alshamess.ifa.hawaii.edu/ : 1) Catalog of daily coronal images: one coronal image per day from EIT, MLSO, and LASCO/C2 and C3 has been processed using the Normalizing Radial-Graded Filter (NRGF) image processing tool. These images are available individually or as composites. 2) Catalog of LASCO data: the whole LASCO dataset has been re-processed using the same method. The user can search files by date and instrument, and images can be retrieved as JPEG or FITS files. An option to make online GIF movies from selected images is also available. In addition, the LASCO data set can be searched from existing CME catalogs (CDAW and Cactus). By browsing one of the two CME catalogs, the user can refine the query and access LASCO data covering the time frame of a CME. The catalogs will be continually updated as more data become publicly available.

  13. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.

  14. Deep learning model-based algorithm for SAR ATR

    NASA Astrophysics Data System (ADS)

    Friedlander, Robert D.; Levy, Michael; Sudkamp, Elizabeth; Zelnio, Edmund

    2018-05-01

    Many computer-vision problems have been addressed successfully with deep learning, improving image-classification error rates. As opposed to optically based images, we have applied deep learning via a Siamese Neural Network (SNN) to classify synthetic aperture radar (SAR) images. This application of Automatic Target Recognition (ATR) utilizes an SNN made up of twin AlexNet-based Convolutional Neural Networks (CNNs). Using the processing power of GPUs, we trained the SNN with combinations of synthetic images on one twin and Moving and Stationary Target Automatic Recognition (MSTAR) measured images on the other. We trained the SNN with three target types (T-72, BMP2, and BTR-70) and used a representative synthetic model of each target to classify new SAR images. Even with a relatively small quantity of data (with respect to machine learning), we found that the SNN performed comparably to a CNN and converged faster. The results showed the T-72s to be the easiest to identify, whereas the network sometimes confused the BMP2s and the BTR-70s. We also incorporated two additional targets (M1 and M35) into the validation set. With less training (for example, one additional epoch), the SNN did not produce the same results as when all five targets were trained over all the epochs. Nevertheless, an SNN represents a novel and beneficial approach to SAR ATR.

  15. Advances of Molecular Imaging for Monitoring the Anatomical and Functional Architecture of the Olfactory System.

    PubMed

    Zhang, Xintong; Bi, Anyao; Gao, Quansheng; Zhang, Shuai; Huang, Kunzhu; Liu, Zhiguo; Gao, Tang; Zeng, Wenbin

    2016-01-20

    The olfactory system serves as a genetic and anatomical model for studying how sensory input can be translated into behavioral output. Some neurologic diseases are considered to be related to olfactory disturbance, notably Alzheimer's disease, Parkinson's disease, and multiple sclerosis. However, it remains unclear how the olfactory system affects disease generation and olfaction delivery processes. Molecular imaging, a modern multidisciplinary technology, can provide valid tools for the early detection and characterization of diseases, evaluation of treatment, and study of biological processes in living subjects, since it applies specific molecular probes to produce data for studying biological processes at the cellular and subcellular levels. Recently, molecular imaging has played a key role in studying the activation of the olfactory system, and could thus help to prevent or delay some diseases. Herein, we present a comprehensive review of research progress on imaging probes for visualizing the olfactory system, organized by imaging modality, including PET, MRI, and optical imaging. Additionally, the probes' design, sensing mechanisms, and biological applications are discussed. Finally, we provide an outlook for future studies in this field.

  16. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
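
    The dictionary-based representation at the heart of such a framework can be prototyped with scikit-learn's patch-based dictionary learner, as sketched below. This learns an overcomplete patch dictionary with sparse codes but does not include the paper's physics-based forward model; the patch size, atom count, and sparsity level are illustrative choices.

        import numpy as np
        from sklearn.feature_extraction.image import extract_patches_2d
        from sklearn.decomposition import MiniBatchDictionaryLearning

        def learn_patch_dictionary(image, patch_size=(8, 8), n_atoms=128):
            """Learn an overcomplete dictionary from random patches of one image."""
            patches = extract_patches_2d(image, patch_size, max_patches=5000)
            X = patches.reshape(len(patches), -1).astype(float)
            X -= X.mean(axis=1, keepdims=True)       # remove per-patch DC offset
            dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=5)
            return dico.fit(X)                       # dico.components_ holds the atoms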

  17. Efficient fuzzy C-means architecture for image segmentation.

    PubMed

    Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen

    2011-01-01

    This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process to accelerate the computational speed. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and a low misclassification rate.
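
    For reference, the two iterative updates that the circuit merges are the standard fuzzy c-means equations; the sketch below keeps them as separate software steps for clarity, operating on a flattened intensity vector. It omits the paper's spatial constraint and is in no way a model of the VLSI architecture.

        import numpy as np

        def fuzzy_c_means(x, c=3, m=2.0, n_iter=100):
            """Plain FCM on a flattened intensity vector x; no spatial constraint."""
            rng = np.random.default_rng(0)
            u = rng.random((c, x.size))
            u /= u.sum(axis=0)                      # memberships: columns sum to 1
            for _ in range(n_iter):
                um = u ** m
                centers = um @ x / um.sum(axis=1)   # weighted cluster centroids
                d = np.abs(x[None, :] - centers[:, None]) + 1e-12
                u = d ** (-2.0 / (m - 1.0))         # closer centers get larger weight
                u /= u.sum(axis=0)                  # normalized membership update
            return centers, u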

  18. Acoustical holographic recording with coherent optical read-out and image processing

    NASA Astrophysics Data System (ADS)

    Liu, H. K.

    1980-10-01

    New acoustic holographic wave memory devices have been designed for real-time in-situ recording applications. The basic operating principles of these devices and experimental results obtained with some prototype devices are presented. Recording media used in the devices include thermoplastic resin, Crisco vegetable oil, and Wilson corn oil. In addition, nonlinear coherent optical image processing techniques, including equidensitometry, A-D conversion, and pseudo-color, all based on the new contact screen technique, are discussed with regard to enhancing the normally poorly resolved acoustical holographic images.

  19. Plant features measurements for robotics

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1989-01-01

    Initial studies of the technical feasibility of using machine vision and color image processing to measure plant health were performed. Wheat plants were grown in nutrient solutions deficient in nitrogen, potassium, and iron. An additional treatment imposed water stress on wheat plants that received a full complement of nutrients. The results for juvenile (less than 2 weeks old) wheat plants show that imaging technology can be used to detect nutrient deficiencies. The relative amount of green color in a leaf declined with increased water stress. The absolute amount of green was higher for nitrogen-deficient leaves than for the control plants. Relative greenness was lower for iron-deficient leaves, but the absolute green values were higher. The data showed patterns across the leaf consistent with visual symptoms. The development of additional color image processing routines to recognize these patterns would improve the performance of this sensor of plant health.
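
    As an illustration of the greenness measures discussed above, the sketch below computes a relative greenness index (the green channel's share of total brightness, which is insensitive to overall illumination) and the absolute green level for a leaf image. The exact indices used in the study are not specified here, so these formulas are assumptions.

        import numpy as np

        def greenness(rgb):
            """rgb: (H, W, 3) float image of a leaf region.
            Returns (relative, absolute) greenness averaged over the pixels."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            rel = g / (r + g + b + 1e-12)   # illumination-insensitive green share
            return rel.mean(), g.mean()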

  20. Imaging angiogenesis.

    PubMed

    Charnley, Natalie; Donaldson, Stephanie; Price, Pat

    2009-01-01

    There is a need for direct imaging of effects on tumor vasculature in assessment of response to antiangiogenic drugs and vascular disrupting agents. Imaging tumor vasculature depends on differences in permeability of vasculature of tumor and normal tissue, which cause changes in penetration of contrast agents. Angiogenesis imaging may be defined in terms of measurement of tumor perfusion and direct imaging of the molecules involved in angiogenesis. In addition, assessment of tumor hypoxia will give an indication of tumor vasculature. The range of imaging techniques available for these processes includes positron emission tomography (PET), dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), perfusion computed tomography (CT), and ultrasound (US).

  1. Development of a low cost high precision three-layer 3D artificial compound eye.

    PubMed

    Zhang, Hao; Li, Lei; McCray, David L; Scheiding, Sebastian; Naples, Neil J; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas; Yi, Allen Y

    2013-09-23

    Artificial compound eyes are typically designed on planar substrates due to the limits of current imaging devices and available manufacturing processes. In this study, a high-precision, low-cost, three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was constructed to mimic an apposition compound eye on a curved substrate. The freeform microlens array was manufactured on a curved substrate to alter incident light beams and steer their respective images onto a flat image plane. The optical design was performed using ZEMAX. The optical simulation shows that the artificial compound eye can form multiple images with aberrations below 11 μm, adequate for many imaging applications. Both the freeform lens array and the field lens array were manufactured using a microinjection molding process to reduce cost. Aluminum mold inserts were diamond machined by the slow tool servo method. The performance of the compound eye was tested using a home-built optical setup. The captured images demonstrate that the proposed structures can successfully steer images from a curved surface onto a planar photoreceptor. Experimental results show that the compound eye in this research has a field of view of 87°. In addition, images formed by multiple channels were found to be evenly distributed on the flat photoreceptor, and overlapping views of adjacent channels allow higher-resolution images to be reconstructed from multiple 3D images taken simultaneously.

  2. An automated dose tracking system for adaptive radiation therapy.

    PubMed

    Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J

    2018-02-01

    The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Patient image data were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user-interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data import, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was implemented as an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by the integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether, 780 prostate dose fractions and 2,586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, the daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement. An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
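
    The dose mapping and accumulation steps reduce to warping each fraction's dose grid onto the planning geometry with the DIR transform and summing. The sketch below shows that core operation with SimpleITK as an illustrative stand-in for the in-house engine; the DIR step that produces the transforms is elided, and all names are placeholders.

        import SimpleITK as sitk

        def accumulate_dose(planning_ct, daily_doses, transforms):
            """daily_doses[i]: dose grid for fraction i; transforms[i]: the DIR
            transform mapping planning-CT points into that fraction's geometry."""
            total = sitk.Image(planning_ct.GetSize(), sitk.sitkFloat32)
            total.CopyInformation(planning_ct)       # same grid as the planning CT
            for dose, tfm in zip(daily_doses, transforms):
                mapped = sitk.Resample(dose, planning_ct, tfm,
                                       sitk.sitkLinear, 0.0, sitk.sitkFloat32)
                total = sitk.Add(total, mapped)      # point-wise accumulation
            return total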

  3. Space Radar Image of Kilauea, Hawaii - Interferometry 1

    NASA Image and Video Library

    1999-05-01

    This X-band image of the volcano Kilauea was taken on October 4, 1994, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The area shown is about 9 kilometers by 13 kilometers (5.5 miles by 8 miles) and is centered at about 19.58 degrees north latitude and 155.55 degrees west longitude. This image and a similar image taken during the first flight of the radar instrument on April 13, 1994, were combined to produce the topographic information by means of an interferometric process. This is a process by which radar data acquired on different passes of the space shuttle are overlaid to obtain elevation information. Three additional images are provided showing an overlay of radar data with interferometric fringes; a three-dimensional image based on altitude lines; and, finally, a topographic view of the region. http://photojournal.jpl.nasa.gov/catalog/PIA01763
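
    At its core, the interferometric combination uses two co-registered complex radar images: the phase of one image multiplied by the conjugate of the other encodes the path-length difference between passes, from which elevation follows after phase unwrapping. A minimal sketch of that first step (NumPy; inputs are hypothetical complex-valued arrays):

    ```python
    # Wrapped interferometric phase from two co-registered complex SAR images.
    import numpy as np

    def interferogram(slc_pass1, slc_pass2):
        """Complex images in, wrapped phase difference (radians) out."""
        return np.angle(slc_pass1 * np.conj(slc_pass2))  # values in (-pi, pi]
    ```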

  4. Spectral imaging applications: Remote sensing, environmental monitoring, medicine, military operations, factory automation and manufacturing

    NASA Technical Reports Server (NTRS)

    Gat, N.; Subramanian, S.; Barhen, J.; Toomarian, N.

    1996-01-01

    This paper reviews the activities at OKSI related to imaging spectroscopy, presenting current and future applications of the technology. The authors discuss the development of several systems, including hardware, signal processing, data classification algorithms, and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process into the algorithms. Pixel signatures are classified using techniques such as principal component analysis, generalized eigenvalue analysis, and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the Intelligent Missile Seeker (IMS) demonstration project for real-time target/decoy discrimination, and the Thermal InfraRed Imaging Spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and crop monitoring are also under development.
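
    As a rough illustration of the pixel-signature classification idea (not OKSI's algorithms), a hyperspectral cube can be flattened to per-pixel spectra, reduced by principal component analysis, and classified; the component count and the nearest-neighbour classifier below are arbitrary choices:

    ```python
    # PCA-based pixel-signature classification sketch for a hyperspectral cube.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    def classify_pixels(cube, train_mask, train_labels, n_components=8):
        """cube: (h, w, bands); train_mask: boolean (h, w); train_labels:
        one label per True pixel in train_mask."""
        h, w, bands = cube.shape
        X = cube.reshape(-1, bands)                  # one spectrum per pixel
        Z = PCA(n_components=n_components).fit_transform(X)
        clf = KNeighborsClassifier().fit(Z[train_mask.ravel()], train_labels)
        return clf.predict(Z).reshape(h, w)          # class map
    ```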

  5. In situ process monitoring in selective laser sintering using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Gardner, Michael R.; Lewis, Adam; Park, Jongwan; McElroy, Austin B.; Estrada, Arnold D.; Fish, Scott; Beaman, Joseph J.; Milner, Thomas E.

    2018-04-01

    Selective laser sintering (SLS) is an efficient process in additive manufacturing that enables rapid part production from computer-based designs. However, SLS is limited by its notable lack of in situ process monitoring when compared with other manufacturing processes. We report the incorporation of optical coherence tomography (OCT) into an SLS system in detail and demonstrate access to surface and subsurface features. Video frame rate cross-sectional imaging reveals areas of sintering uniformity and areas of excessive heat error with high temporal resolution. We propose a set of image processing techniques for SLS process monitoring with OCT and report the limitations and obstacles for further OCT integration with SLS systems.

  6. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory and take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict the future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. Overall, image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
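
    A bare-bones version of such a segment-then-measure chain might look as follows (a scikit-image sketch; the chapter's actual preprocessing operators and parameters differ):

    ```python
    # Toy floc segmentation and morphological measurement pipeline.
    from skimage import filters, measure, morphology

    def floc_parameters(gray):
        """gray: 2-D float image of a sludge sample; returns per-floc
        (area, eccentricity) pairs for correlation with SVI/TSSol/COD."""
        smoothed = filters.gaussian(gray, sigma=1.0)          # preprocessing
        mask = smoothed > filters.threshold_otsu(smoothed)    # segmentation
        mask = morphology.remove_small_objects(mask, min_size=64)
        labels = measure.label(mask)
        return [(r.area, r.eccentricity) for r in measure.regionprops(labels)]
    ```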

  7. Colour application on mammography image segmentation

    NASA Astrophysics Data System (ADS)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour on the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with every colour map can be done successfully, even for blurred and noisy images. Also, the size of the segmented abnormality region is reduced compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest (11.367%).
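
    The colour-map-then-cluster procedure can be imitated in a few lines. The sketch below is an illustrative stand-in, not the study's implementation: a matplotlib colormap plays the role of the colour mapping, a synthetic gradient replaces the mammogram, and a minimal fuzzy C-means is written out directly.

    ```python
    import numpy as np
    from matplotlib import colormaps

    def fcm(X, c=2, m=2.0, iters=50, seed=0):
        """Minimal fuzzy C-means on the rows of X; returns centres, memberships."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))            # memberships
        for _ in range(iters):
            W = U ** m
            centres = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-9
            U = 1.0 / d ** (2.0 / (m - 1.0))
            U /= U.sum(axis=1, keepdims=True)
        return centres, U

    gray = np.linspace(0, 1, 64 * 64).reshape(64, 64)   # stand-in image
    rgb = colormaps["Greens"](gray)[..., :3]            # apply a colour map
    centres, U = fcm(rgb.reshape(-1, 3))
    segmentation = U.argmax(axis=1).reshape(64, 64)     # hard labels
    ```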

  8. Engineering workstation: Sensor modeling

    NASA Technical Reports Server (NTRS)

    Pavel, M; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  9. Mars Orbiter Camera Views the 'Face on Mars' - Comparison with Viking

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday and retrieved from the mission computer database Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.

    The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long.

    In this comparison, the best Viking image has been enlarged to 3.3 times its original resolution, and the MOC image has been decreased by a similar factor of 3.3, creating images of roughly the same size. In addition, the MOC images have been geometrically transformed to a more overhead projection (different from the mercator map projection of PIA01440 & 1441) for ease of comparison with the Viking image. The left image is a portion of Viking Orbiter 1 frame 070A13, the middle image is a portion of the MOC frame shown normally, and the right image is the same MOC frame but with the brightness inverted to simulate the approximate lighting conditions of the Viking image.

    Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps (a toy version of the first two steps is sketched after the list):

    The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking.

    The contrast and brightness of the image was adjusted, and 'filters' were applied to enhance detail at several scales.

    The image was then geometrically warped to meet the computed position information for a mercator-type map. This corrected for the left-right flip, and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image for the same reason Greenland looks larger than Africa on a mercator map of the Earth.

    A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately.

    See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image.
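
    For illustration only (this is not the MSSS pipeline), the first two steps, per-column sensitivity calibration and contrast adjustment, can be mimicked as:

    ```python
    # Toy destreaking (column-gain calibration) and percentile contrast stretch.
    import numpy as np

    def destreak(img):
        """Divide out each column's median response to remove vertical streaks."""
        col_gain = np.median(img, axis=0)
        return img / np.where(col_gain == 0, 1, col_gain)

    def stretch(img, lo=2, hi=98):
        """Linear contrast stretch between the lo and hi percentiles."""
        a, b = np.percentile(img, [lo, hi])
        return np.clip((img - a) / (b - a), 0, 1)
    ```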

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  10. High Resolution Near Real Time Image Processing and Support for MSSS Modernization

    NASA Astrophysics Data System (ADS)

    Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.

    2012-09-01

    This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also includes R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities, including Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award and our selection and planned use for an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data and an overview of high-resolution image enhancement and near-real-time processing. We then describe recent image enhancement applications development and support for MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image processing enhancement have been realized over the past several years, including a key application that has achieved more than a 10,000-times speedup compared to the original R&D code and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.

  11. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite-regularization-based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.
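
    The flavor of such sparsity-regularized recovery can be conveyed by a single-representation, fixed-weight simplification; the sketch below is plain ISTA, not STAR, and the matrix A is a hypothetical stand-in for the EPR projection operator:

    ```python
    # Iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam=0.05, iters=200):
        x = np.zeros(A.shape[1])
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(iters):
            x = soft(x + A.T @ (y - A @ x) / L, lam / L)
        return x
    ```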

  12. Analysis of the correlation between plasma plume and keyhole behavior in laser metal welding for the modeling of the keyhole geometry

    NASA Astrophysics Data System (ADS)

    Tenner, F.; Brock, C.; Klämpfl, F.; Schmidt, M.

    2015-01-01

    The process of laser metal welding is widely used in industry. Nevertheless, there is still a lack of complete process understanding and control. For analyzing the process we used two high-speed cameras, which allowed us to image the plasma plume (which is directly accessible by a camera) and the keyhole (where most of the process instabilities occur) simultaneously during laser welding. Applying different image processing steps, we were able to find a correlation between those two process characteristics. Additionally, we imaged the plasma plume from two directions and were able to calculate a volume with respect to the vaporized material the plasma plume carries. Due to these correlations we are able to infer the keyhole stability from imaging the plasma plume and vice versa. We used the found correlation between the keyhole behavior and the plasma plume to explain the effect of changing laser power and feed rate on the keyhole geometry. Furthermore, we attempt to outline the phenomena that have the largest effect on the keyhole geometry during changes of feed rate and laser power.

  13. Comparison of turbulence mitigation algorithms

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  14. Ultrasonic Shear Wave Elasticity Imaging (SWEI) Sequencing and Data Processing Using a Verasonics Research Scanner

    PubMed Central

    Deng, Yufeng; Rouze, Ned C.; Palmeri, Mark L.; Nightingale, Kathryn R.

    2017-01-01

    Ultrasound elasticity imaging has been developed over the last decade to estimate tissue stiffness. Shear wave elasticity imaging (SWEI) quantifies tissue stiffness by measuring the speed of propagating shear waves following acoustic radiation force excitation. This work presents the sequencing and data processing protocols of SWEI using a Verasonics system. The selection of the sequence parameters in a Verasonics programming script is discussed in detail. The data processing pipeline to calculate group shear wave speed (SWS), including tissue motion estimation, data filtering, and SWS estimation is demonstrated. In addition, the procedures for calibration of beam position, scanner timing, and transducer face heating are provided to avoid SWS measurement bias and transducer damage. PMID:28092508
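
    In its simplest form, the group-SWS step reduces to regressing lateral position against shear wave arrival time. A toy version (not the Verasonics processing code), assuming already-tracked displacement data:

    ```python
    # Group shear wave speed from time-to-peak arrival times.
    import numpy as np

    def group_sws(displacement, lateral_mm, dt_ms):
        """displacement: (n_lateral, n_time) tracked motion; lateral_mm:
        lateral positions (mm); dt_ms: sample period (ms). Returns m/s."""
        t_peak = displacement.argmax(axis=1) * dt_ms     # arrival times (ms)
        slope, _ = np.polyfit(t_peak, lateral_mm, 1)     # mm/ms == m/s
        return slope
    ```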

  15. Quantitative optical diagnostics in pathology recognition and monitoring of tissue reaction to PDT

    NASA Astrophysics Data System (ADS)

    Kirillin, Mikhail; Shakhova, Maria; Meller, Alina; Sapunov, Dmitry; Agrba, Pavel; Khilov, Alexander; Pasukhin, Mikhail; Kondratieva, Olga; Chikalova, Ksenia; Motovilova, Tatiana; Sergeeva, Ekaterina; Turchin, Ilya; Shakhova, Natalia

    2017-07-01

    Optical coherence tomography (OCT) is currently being actively introduced into clinical practice. Besides diagnostics, it can be efficiently employed for treatment monitoring, allowing for timely correction of the treatment procedure. In monitoring of photodynamic therapy (PDT), the traditionally employed fluorescence imaging (FI) can benefit from the complementary use of OCT. Additional diagnostic efficiency can be derived from numerical processing of optical diagnostics data, which provides more information compared to visual evaluation. In this paper we report on the application of OCT together with numerical processing for clinical diagnostics in gynecology and otolaryngology, for monitoring of PDT in otolaryngology, and on OCT and FI applications in clinical and aesthetic dermatology. Numerical processing and quantification of images provide an increase in diagnostic accuracy. Keywords: optical coherence tomography, fluorescence imaging, photodynamic therapy

  16. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    (Figure labels: KNOWLEDGE DATABASE, INFERENCE ENGINE, IMAGE DATABASE; Automated Photointerpretation Testbed; Fig. 4.1.1-2, An Initial Segmentation of an Image) ... Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis ... interpretation process. 5. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. 6. Object detection

  17. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    Polarization imaging detection technology captures multi-dimensional polarization information in addition to the traditional image information, thus improving the probability of target detection and recognition. Research on image fusion of polarization images of targets in turbid media is helpful for obtaining high-quality images. Based on visible-wavelength laser polarization imaging, linearly polarized intensity images were obtained by rotating the angle of a polaroid, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were acquired. Image fusion techniques are introduced for processing: the main work studies the application of different polarization image fusion methods to the acquired polarization images, discusses several fusion methods with superior performance in turbid media, and gives the processing results with tables of analysis data. Pixel-level, feature-level, and decision-level fusion algorithms were then applied to the degree-of-linear-polarization (DOLP) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused images is clearly improved over single images; the paper concludes with an analysis of the reasons for the increase in image contrast under polarized light.
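
    The degree-of-linear-polarization (DOLP) images referred to above are conventionally formed from intensities measured through a linear polarizer at four angles; a minimal sketch of that generic Stokes arithmetic (not the paper's code):

    ```python
    # DOLP from polarizer intensities at 0, 45, 90, and 135 degrees.
    import numpy as np

    def dolp(i0, i45, i90, i135):
        s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
        s1 = i0 - i90                          # linear Stokes components
        s2 = i45 - i135
        return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    ```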

  18. Digital radiography: spatial and contrast resolution

    NASA Astrophysics Data System (ADS)

    Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.

    1981-07-01

    The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures. There is no reason to expect that the developments in this area are yet complete. But no matter what further developments occur in this field, all the techniques will share a common element: digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System, as an example.

  19. Image processing operations achievable with the Microchannel Spatial Light Modulator

    NASA Astrophysics Data System (ADS)

    Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.

    1980-01-01

    The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.

  20. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  1. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.

  2. Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording

    PubMed Central

    Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi

    2014-01-01

    OBJECTIVE Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially-recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name the object as it is first perceived. RESULTS Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in exerting intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE Our results support the theoretical model that guessing processes in visual domain occur following the accumulation of sensory evidence resulting from the bottom-up processing in the occipital-temporal visual pathways. PMID:24815577

  3. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273

  4. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
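
    A software analogue of the XOR edge-detection step is easy to state: binarize, then XOR each pixel with its neighbours so that only transitions survive. The sketch below is illustrative only; on the chip this happens in hardware, and the threshold here is arbitrary.

    ```python
    # Single-bit XOR edge map: 1 wherever adjacent binary pixels differ.
    import numpy as np

    def xor_edges(gray, thresh=0.5):
        b = (gray > thresh).astype(np.uint8)   # single-bit image
        h = b[:, :-1] ^ b[:, 1:]               # horizontal transitions
        v = b[:-1, :] ^ b[1:, :]               # vertical transitions
        return h[:-1, :] | v[:, :-1]           # combined edge map
    ```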

  5. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    PubMed

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRI datasets, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
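
    The "patch as a sparse combination of dictionary elements" idea can be demonstrated with a point-estimate stand-in (this is scikit-learn's dictionary learning, not the paper's beta-process model, and the image is synthetic):

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    img = np.random.default_rng(0).random((64, 64))   # stand-in for an MR image
    patches = extract_patches_2d(img, (8, 8), max_patches=500)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)                # remove patch DC level
    dico = MiniBatchDictionaryLearning(n_components=32,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=4).fit(X)
    codes = dico.transform(X)                         # sparse coefficients
    recon = codes @ dico.components_                  # sparse approximation
    ```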

  6. Measurement of smaller colon polyp in CT colonography images using morphological image processing.

    PubMed

    Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K

    2017-11-01

    Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller-polyp measurement in CTC using image processing techniques. A domain-knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, polyps of even <5 mm were also detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D views. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min to measure the smaller polyps in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively, the results were acceptable when compared to the ground truth at [Formula: see text].

  7. SkySat-1: very high-resolution imagery from a small satellite

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk

    2014-10-01

    This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially-available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
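
    The SNR benefit of stacking registered frames is simple to demonstrate; the toy below (registration omitted) shows the roughly sqrt(N) noise reduction that motivates combining data from multiple frames:

    ```python
    # Averaging N aligned noisy frames cuts noise std by about sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(16)]
    stacked = np.mean(frames, axis=0)     # residual noise std ~ 0.1 / 4
    print(np.std(stacked - scene))        # close to 0.025
    ```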

  8. Visual grading analysis of digital neonatal chest phantom X-ray images: Impact of detector type, dose and image processing on image quality.

    PubMed

    Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L

    2018-07-01

    To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05), whereas the PIP detector had significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.

  9. A Comprehensive Analysis of the Physical Properties of Advanced GaAs/AlGaAs Junctions

    NASA Technical Reports Server (NTRS)

    Menkara, Hicham M.

    1996-01-01

    Extensive studies have been performed on MQW junctions and structures because of their potential applications as avalanche photodetectors in optical communications and imaging systems. The role of the avalanche photodiode is to provide for the conversion of an optical signal into charge. Knowledge of junction physics, and the various carrier generation/recombination mechanisms, is crucial for effectively optimizing the conversion process and increasing the structure's quantum efficiency. In addition, the recent interest in the use of APDs in imaging systems has necessitated the development of semiconductor junctions with low dark currents and high gains for low light applications. Because of the high frame rate and high pixel density requirements in new imaging applications, it is necessary to provide some front-end gain in the imager to allow operation under reasonable light conditions. Understanding the electron/hole impact ionization process, as well as diffusion and surface leakage effects, is needed to help maintain low dark currents and high gains for such applications. In addition, the APD must be capable of operating with low power, and low noise. Knowledge of the effects of various doping configurations and electric field profiles, as well as the excess noise resulting from the avalanche process, are needed to help maintain low operating bias and minimize the noise output.

  10. A high performance biometric signal and image processing method to reveal blood perfusion towards 3D oxygen saturation mapping

    NASA Astrophysics Data System (ADS)

    Imms, Ryan; Hu, Sijung; Azorin-Peris, Vicente; Trico, Michaël.; Summers, Ron

    2014-03-01

    Non-contact imaging photoplethysmography (PPG) is a recent development in the field of physiological data acquisition, currently undergoing a large amount of research to characterize and define the range of its capabilities. Contact-based PPG techniques have been broadly used in clinical scenarios for a number of years to obtain direct information about the degree of oxygen saturation for patients. With the advent of imaging techniques, there is strong potential to enable access to additional information such as multi-dimensional blood perfusion and saturation mapping. The further development of effective opto-physiological monitoring techniques is dependent upon novel modelling techniques coupled with improved sensor design and effective signal processing methodologies. The biometric signal and imaging processing platform (bSIPP) provides a comprehensive set of features for extraction and analysis of recorded iPPG data, enabling direct comparison with other biomedical diagnostic tools such as ECG and EEG. Additionally, utilizing information about the nature of tissue structure has enabled the generation of an engineering model describing the behaviour of light during its travel through the biological tissue. This enables the estimation of the relative oxygen saturation and blood perfusion in different layers of the tissue to be calculated, which has the potential to be a useful diagnostic tool.

  11. Handheld hyperspectral imager for standoff detection of chemical and biological aerosols

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Jensen, James O.; McAnally, Gerard

    2004-02-01

    Pacific Advanced Technology has developed a small handheld imaging spectrometer, Sherlock, for gas leak and aerosol detection and imaging. The system is based on a patented technique that uses diffractive optics and image processing algorithms to detect spectral information about objects in the scene of the camera (IMSS, Image Multi-Spectral Sensing). This camera has been tested at Dugway Proving Ground and the Dstl Porton Down facility, looking at chemical and biological agent simulants, and has been used to investigate surfaces contaminated with chemical agent simulants. In addition to chemical and biological detection, the camera has been used for environmental monitoring of greenhouse gases and is currently undergoing extensive laboratory and field testing by the Gas Technology Institute, British Petroleum and Shell Oil for gas leak detection and repair applications. The camera contains an embedded PowerPC and a real-time image processor for performing image processing algorithms to assist in the detection and identification of gas-phase species in real time. In this paper we present an overview of the technology and show how it has performed for different applications, such as gas leak detection, surface contamination, remote sensing and surveillance. In addition, a sampling of the results from the field testing at Dugway in July 2002 and at Dstl Porton Down in September 2002 is given.

  12. A comparison of visual statistics for the image enhancement of FORESITE aerial images with those of major image classes

    NASA Astrophysics Data System (ADS)

    Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-05-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed-cloud visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  13. A Comparison of Visual Statistics for the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes

    NASA Technical Reports Server (NTRS)

    Johnson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.

    2006-01-01

    Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed-cloud visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to the degree of atmospheric visibility attenuation and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.

  14. SIP: A Web-Based Astronomical Image Processing Program

    NASA Astrophysics Data System (ADS)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computers, or the instructor can point the students to images on any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest-common-denominator image file, the FITS format.

  15. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, the passive THz camera allows seeing concealed objects without contact with a person and is not dangerous to people. Obviously, the efficiency of using the passive THz camera depends on its temperature resolution. This characteristic determines the detection possibilities for concealed objects: the minimal size of the object, the maximal distance of detection, and the image quality. Computer processing of the THz image may improve the image quality many times over without any additional engineering effort. Therefore, developing modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that will allow seeing a banknote in a person's pocket without any physical contact. Modern algorithms for computer processing of THz images also make it possible to see objects inside the human body using a temperature trace on the human skin. This circumstance substantially enhances the opportunities for passive THz camera applications in counterterrorism problems. We demonstrate the opportunities, achieved at the present time, for the detection of both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  16. Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.

    PubMed

    Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin

    2013-01-01

    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. In addition, the detail or high-frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE) and the low-pass subimages are processed by mathematical morphology. Finally, the enhanced image of feature and contrast is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, signal-to-noise ratio (SNR), and contrast improvement index (CII).

  17. Feature and Contrast Enhancement of Mammographic Image Based on Multiscale Analysis and Morphology

    PubMed Central

    Wu, Shibin; Xie, Yaoqin

    2013-01-01

    A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. In addition, the detail or high-frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE) and the low-pass subimages are processed by mathematical morphology. Finally, the enhanced image of feature and contrast is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, signal-to-noise ratio (SNR), and contrast improvement index (CII). PMID:24416072
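
    A simplified rendition of the enhancement chain in these two records (Laplacian pyramid, CLAHE on the detail levels, grayscale morphology on the low-pass residual, approximate reconstruction) can be sketched with scikit-image; parameters are illustrative, and image sides are assumed divisible by 2**levels:

    ```python
    import numpy as np
    from skimage import exposure, morphology, transform

    def enhance(img, levels=3):
        """img: 2-D float image in [0, 1]."""
        gauss = [img]
        for _ in range(levels):
            gauss.append(transform.pyramid_reduce(gauss[-1], downscale=2))
        details = []
        for g, g_next in zip(gauss, gauss[1:]):
            d = g - transform.pyramid_expand(g_next, upscale=2)  # band-pass
            dn = (d - d.min()) / (np.ptp(d) + 1e-9)              # to [0, 1]
            details.append(exposure.equalize_adapthist(dn) * np.ptp(d) + d.min())
        base = morphology.opening(gauss[-1], morphology.disk(3)) # low-pass
        out = base                                               # reconstruct
        for d in reversed(details):
            out = transform.pyramid_expand(out, upscale=2) + d
        return np.clip(out, 0, 1)
    ```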

  18. Understanding Seismic Anisotropy in Hunt Well of Fort McMurray, Canada

    NASA Astrophysics Data System (ADS)

    Malehmir, R.; Schmitt, D. R.; Chan, J.

    2014-12-01

    Seismic imaging plays a vital role in developing geothermal systems as a sustainable energy resource. In this paper, zero-offset and walk-away VSP, well logging, and surface seismic data were acquired and processed in the Athabasca oil sands area, Alberta. The seismic data were extensively processed to better image the geothermal system. Through data processing, properties of natural fractures such as orientation and width were studied, and highly probable permeable zones were mapped along the well, drilled to a depth of 2363 m into crystalline basement rocks. In addition to the logging data, the seismic data were processed to build a reliable image of the subsurface. High-resolution velocity analysis of multi-component walk-away VSP data informed us about the elastic anisotropy in place. Study of the natural and induced fractures as well as the elastic anisotropy in the seismic data led us to better map the stress regime around the borehole. The seismic image and fracture map help optimize enhanced geothermal stages through hydraulic stimulation. Keywords: geothermal, anisotropy, VSP, logging, Hunt well, seismic

  19. Collaborative Research and Development (CR&D) III Task Order 0090: Image Processing Framework: From Acquisition and Analysis to Archival Storage

    DTIC Science & Technology

    2013-05-01

    contract or a PhD dissertation typically are a "proof-of-concept" code base that can only read a single set of inputs and are not designed ...AFRL-RX-WP-TR-2013-0210 COLLABORATIVE RESEARCH AND DEVELOPMENT (CR&D) III Task Order 0090: Image Processing Framework: From... public release; distribution unlimited. See additional restrictions described on inside pages. STINFO COPY AIR FORCE RESEARCH LABORATORY

  20. Design and high-volume manufacture of low-cost molded IR aspheres for personal thermal imaging devices

    NASA Astrophysics Data System (ADS)

    Zelazny, A. L.; Walsh, K. F.; Deegan, J. P.; Bundschuh, B.; Patton, E. K.

    2015-05-01

    The demand for infrared optical elements, particularly those made of chalcogenide materials, is rapidly increasing as thermal imaging becomes affordable to the consumer. The use of these materials in conjunction with established lens manufacturing techniques presents unique challenges given the cost-sensitive nature of this new market. We explore the process from design to manufacture and discuss the technical challenges involved. Additionally, facets of the development process including manufacturing logistics, packaging, supply chain management, and qualification are discussed.

  1. Ring artifact reduction in synchrotron x-ray tomography through helical acquisition

    NASA Astrophysics Data System (ADS)

    Pelt, Daniël M.; Parkinson, Dilworth Y.

    2018-03-01

    In synchrotron x-ray tomography, systematic defects in certain detector elements can result in arc-shaped artifacts in the final reconstructed image of the scanned sample. These ring artifacts are commonly found in many applications of synchrotron tomography, and can make it difficult or impossible to use the reconstructed image in further analyses. The severity of ring artifacts is often reduced in practice by applying pre-processing on the acquired data, or post-processing on the reconstructed image. However, such additional processing steps can introduce additional artifacts as well, and rely on specific choices of hyperparameter values. In this paper, a different approach to reducing the severity of ring artifacts is introduced: a helical acquisition mode. By moving the sample parallel to the rotation axis during the experiment, the sample is detected at different detector positions in each projection, reducing the effect of systematic errors in detector elements. Alternatively, helical acquisition can be viewed as a way to transform ring artifacts to helix-like artifacts in the reconstructed volume, reducing their severity. We show that data acquired with the proposed mode can be transformed to data acquired with a virtual circular trajectory, enabling further processing of the data with existing software packages for circular data. Results for both simulated data and experimental data show that the proposed method is able to significantly reduce ring artifacts in practice, even compared with popular existing methods, without introducing additional artifacts.
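    The virtual-circular-trajectory idea lends itself to a compact sketch. The toy Python code below assumes the sample translates by a known, constant `dz_px` detector rows per projection and simply shifts each projection back to a common axial reference; the authors' actual resampling and its handling of the field-of-view boundary are not reproduced.

        # Toy remapping of helically acquired projections onto a virtual
        # circular trajectory (illustrative simplification).
        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def helical_to_circular(projs, dz_px):
            """projs: (n_angles, n_rows, n_cols) projection stack."""
            out = np.empty(projs.shape, dtype=float)
            for i, p in enumerate(projs):
                # Undo the per-projection sample translation; rows scrolling
                # in from outside the common field of view become NaN.
                out[i] = nd_shift(p.astype(float), shift=(-i * dz_px, 0),
                                  order=1, cval=np.nan)
            return out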

  2. Generative Adversarial Networks: An Overview

    NASA Astrophysics Data System (ADS)

    Creswell, Antonia; White, Tom; Dumoulin, Vincent; Arulkumaran, Kai; Sengupta, Biswa; Bharath, Anil A.

    2018-01-01

    Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
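    For readers who want the competitive training scheme in code form, here is a minimal PyTorch sketch of one GAN update step; the architectures and hyperparameters are placeholders for illustration and are not taken from the paper.

        # Minimal GAN training step: D learns to separate real from fake,
        # G learns to fool D, each via its own backpropagation pass.
        import torch
        import torch.nn as nn

        G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
        D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                          nn.Linear(256, 1))
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        def train_step(real):                        # real: (batch, 784)
            z = torch.randn(real.size(0), 64)
            fake = G(z)
            ones = torch.ones(real.size(0), 1)
            zeros = torch.zeros(real.size(0), 1)
            # Discriminator: push D(real) toward 1, D(fake) toward 0
            loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # Generator: push D(G(z)) toward "real"
            loss_g = bce(D(fake), ones)
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
            return loss_d.item(), loss_g.item()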

  3. METEOSAT studies of clouds and radiation budget

    NASA Technical Reports Server (NTRS)

    Saunders, R. W.

    1982-01-01

    Radiation budget studies of the atmosphere/surface system from Meteosat, cloud parameter determination from space, and sea surface temperature measurements from TIROS N data are all described. This work was carried out on the Interactive Planetary Image Processing System (IPIPS), which allows interactive manipulation of the image data in addition to the conventional computational tasks. The current hardware configuration of IPIPS is shown. The I(2)S is the principal interactive display, allowing interaction via a trackball, four buttons under program control, or a touch tablet. Simple image processing operations such as contrast enhancement, pseudocoloring, histogram equalization, and multispectral combinations can all be executed at the push of a button.

  4. Application of imaging and ultrasound to the quality grading of beef

    NASA Astrophysics Data System (ADS)

    Anselmo, V. J.; Gammell, P. M.

    1980-04-01

    The results of a study conducted to assist the Department of Agriculture in considering innovative methods for the grading of carcass beef for human consumption are presented. The processing of photographic, television, and ultrasound images of the longissimus dorsi muscle at the 12/13th rib cut was undertaken. The results showed that a correlation could be developed between the quality grade of the carcass as determined by a professional grader and the fat-to-area ratio of the muscle as determined by image processing techniques. In addition, the use of ultrasound shows potential for grading an unsliced carcass or a live animal.
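    The fat-to-area measurement reduces to a short computation once the muscle cross-section is segmented. The Python sketch below is our illustration, not the study's original code; the Otsu threshold and the assumption that fat marbling is brighter than lean tissue are ours.

        # Hypothetical fat-to-area ratio from a segmented muscle ROI.
        import numpy as np
        from skimage.filters import threshold_otsu

        def fat_to_area_ratio(img, muscle_mask):
            """img: grayscale rib-cut image; muscle_mask: boolean ROI."""
            t = threshold_otsu(img[muscle_mask])  # fat assumed brighter
            fat = (img > t) & muscle_mask
            return fat.sum() / muscle_mask.sum()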

  5. Application of imaging and ultrasound to the quality grading of beef

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Gammell, P. M.

    1980-01-01

    The results of a study conducted to assist the Department of Agriculture in considering innovative methods for the grading of carcass beef for human consumption are presented. The processing of photographic, television, and ultrasound images of the longissimus dorsi muscle at the 12/13th rib cut was undertaken. The results showed that a correlation could be developed between the quality grade of the carcass as determined by a professional grader and the fat-to-area ratio of the muscle as determined by image processing techniques. In addition, the use of ultrasound shows potential for grading an unsliced carcass or a live animal.

  6. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is reduced to a three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
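    The deconvolution operation at the center of this analysis can be sketched compactly. The Python code below shows a Tikhonov-regularized SVD deconvolution of a tissue curve against the arterial input function, with the regularization strength exposed as the kind of parameter the paper's framework relates to quantification accuracy; the discretization and the value of `lambda_reg` are illustrative assumptions.

        # Tikhonov-regularized deconvolution: tac ~ A k, with A built from
        # the arterial input function (AIF).
        import numpy as np
        from scipy.linalg import toeplitz

        def deconvolve_flow(aif, tac, dt, lambda_reg=0.1):
            """Return flow-scaled residue k(t); CBF is estimated as max k."""
            n = len(aif)
            A = toeplitz(aif, np.zeros(n)) * dt        # convolution matrix
            U, s, Vt = np.linalg.svd(A)
            # Damp small singular values instead of inverting them directly
            f = s / (s**2 + (lambda_reg * s.max())**2)
            k = Vt.T @ (f * (U.T @ tac))
            return k, k.max()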

  7. Real time 3D structural and Doppler OCT imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier-domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, the processing time of FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Exploiting them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for complete processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging of volume data built of 220 × 100 A-scans in the same mode is performed at a rate of about 8 frames per second. In this paper, the software architecture, the organization of threads, and the optimizations applied are described. For illustration, screenshots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.
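    The core processing chain maps naturally onto batched FFTs, which is what makes it GPU-friendly. Below is a CPU (NumPy) sketch of structural and Doppler processing of one B-scan; on a GPU the identical steps can be expressed with a drop-in array library such as CuPy. The window choice and background-subtraction step are our assumptions.

        # Spectra -> A-scans via FFT; Doppler from the phase difference
        # between consecutive A-scans at each depth.
        import numpy as np

        def process_bscan(spectra):
            """spectra: (n_ascans, n_pixels) raw FdOCT spectra."""
            spectra = spectra - spectra.mean(axis=0)   # remove DC/fixed pattern
            window = np.hanning(spectra.shape[1])
            ascans = np.fft.fft(spectra * window, axis=1)
            ascans = ascans[:, : spectra.shape[1] // 2]  # positive depths only
            structure = 20 * np.log10(np.abs(ascans) + 1e-12)
            doppler = np.angle(ascans[1:] * np.conj(ascans[:-1]))
            return structure, doppler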

  8. A quality-refinement process for medical imaging applications.

    PubMed

    Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I

    2009-01-01

    The aim of this work is to introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored to research environments and is therefore more lightweight than traditional quality management processes. It focuses on quality criteria that are important at the given stage of the software life cycle, and the use of tools that automate aspects of the process is emphasized. To evaluate the additional effort that comes along with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process was applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the use of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups, which can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.

  9. [Joint correction for motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging].

    PubMed

    Wu, Wenchuan; Fang, Sheng; Guo, Hua

    2014-06-01

    To address motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging (MRI), we propose in this paper a joint correction method that corrects both kinds of artifacts simultaneously, without additional acquisition of navigation data or a field map. We implemented the proposed method with a multi-shot variable-density spiral sequence to acquire the MRI data and used an auto-focusing technique for image deblurring. We also used a direct or an iterative method to correct motion-induced phase errors during the deblurring process. In vivo MRI experiments demonstrated that the proposed method can effectively suppress motion artifacts and off-resonance artifacts and achieve images with fine structures. In addition, applying the proposed method does not increase the scan time.

  10. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE PAGES

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; ...

    2016-07-08

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  11. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy.

    PubMed

    Tremsin, Anton S; Gao, Yan; Dial, Laura C; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  12. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with 100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  13. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. Additionally, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components.

  14. Investigation of microstructure in additive manufactured Inconel 625 by spatially resolved neutron transmission spectroscopy

    PubMed Central

    Tremsin, Anton S.; Gao, Yan; Dial, Laura C.; Grazzi, Francesco; Shinohara, Takenao

    2016-01-01

    Non-destructive testing techniques based on neutron imaging and diffraction can provide information on the internal structure of relatively thick metal samples (up to several cm), which are opaque to other conventional non-destructive methods. Spatially resolved neutron transmission spectroscopy is an extension of traditional neutron radiography, where multiple images are acquired simultaneously, each corresponding to a narrow range of energy. The analysis of transmission spectra enables studies of bulk microstructures at the spatial resolution comparable to the detector pixel. In this study we demonstrate the possibility of imaging (with ~100 μm resolution) distribution of some microstructure properties, such as residual strain, texture, voids and impurities in Inconel 625 samples manufactured with an additive manufacturing method called direct metal laser melting (DMLM). Although this imaging technique can be implemented only in a few large-scale facilities, it can be a valuable tool for optimization of additive manufacturing techniques and materials and for correlating bulk microstructure properties to manufacturing process parameters. In addition, the experimental strain distribution can help validate finite element models which many industries use to predict the residual stress distributions in additive manufactured components. PMID:27877885

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soliman, A; Safigholi, H; Sunnybrook Health Sciences Center, Toronto, ON

    Purpose: To propose a new method that provides a positive contrast visualization of the prostate brachytherapy seeds using the phase information from MR images. Additionally, the feasibility of using the processed phase information to distinguish seeds from calcifications is explored. Methods: A gel phantom was constructed using 2% agar dissolved in 1 L of distilled water. Contrast agents were added to adjust the relaxation times. Four iodine-125 (Eckert & Ziegler SML86999) dummy seeds were placed at different orientations with respect to the main magnetic field (B0). Calcifications were obtained from a sheep femur cortical bone due to its close similarity to human bone tissue composition. Five samples of calcifications were shaped into different dimensions with lengths ranging between 1.2 – 6.1 mm. MR imaging was performed on a 3T Philips Achieva using an 8-channel head coil. Eight images were acquired at eight echo-times using a multi-gradient echo sequence. Spatial resolution was 0.7 × 0.7 × 2 mm, TR/TE/dTE = 20.0/2.3/2.3 ms and BW = 541 Hz/pixel. Complex images were acquired and fed into a two-step processing pipeline: the first includes phase unwrapping and background phase removal using Laplacian operator (Wei et al. 2013). The second step applies a specific phase mask on the resulting tissue phase from the first step to provide the desired positive contrast of the seeds and to, potentially, differentiate them from the calcifications. Results: The phase-processing was performed in less than 30 seconds. The proposed method has successfully resulted in a positive contrast of the brachytherapy seeds. Additionally, the final processed phase image showed difference between the appearance of seeds and calcifications. However, the shape of the seeds was slightly distorted compared to the original dimensions. Conclusion: It is feasible to provide a positive contrast of the seeds from MR images using Laplacian operator-based phase processing.
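    As a rough illustration of the second step, a weighting of the magnitude image by a power of the background-corrected phase can turn strong local phase excursions (near the seeds) into positive contrast. The sketch below is only our reading of that idea; the exact mask used in the abstract, and the Laplacian-based unwrapping of the first step, are not reproduced.

        # Hypothetical positive-contrast phase mask (exponent is illustrative).
        import numpy as np

        def positive_contrast(tissue_phase, magnitude, power=4):
            w = np.clip(np.abs(tissue_phase) / np.pi, 0.0, 1.0) ** power
            return w * magnitude            # large |phase| becomes bright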

  16. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    PubMed

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    Quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) has previously been developed for noise removal from SEM images. This noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands now, CHS is not widely utilized, though it has several advantages for SEM; for example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wide application for the CHS method in microscopy, its characteristics, which until now have not been well clarified, are evaluated properly here. Applying the results of this evaluation, the cursor width (CW), which is the sole processing parameter of CHS, is determined more appropriately using the standard deviation of the noise, Nσ. In addition, the disadvantage that CHS cannot remove noise with excessively large amplitude is remedied by a suitable postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.
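    The amplitude-monitoring idea behind hysteresis smoothing fits in a few lines. The sketch below implements only the basic one-pass form with cursor width CW, not the authors' complex variant; setting CW from the noise standard deviation (e.g. CW = 2Nσ) follows the spirit of the abstract, but the exact rule is our assumption.

        # Basic hysteresis smoothing: the output holds its value while the
        # signal stays inside a dead band of width cw around it.
        import numpy as np

        def hysteresis_smooth(signal, cw):
            out = np.empty(len(signal), dtype=float)
            y = float(signal[0])
            for i, x in enumerate(signal):
                if x > y + cw / 2:          # signal escapes upward
                    y = x - cw / 2
                elif x < y - cw / 2:        # signal escapes downward
                    y = x + cw / 2
                out[i] = y                  # inside the cursor: hold
            return out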

  17. Counting pollen grains using readily available, free image processing and analysis software.

    PubMed

    Costa, Clayton M; Yang, Suann

    2009-10-01

    Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
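    The described ImageJ pipeline translates directly into a few lines of scripting. The Python/scikit-image sketch below mirrors the same steps (denoise, threshold, remove specks, count connected grains); the filter size and minimum-area value are illustrative, not the paper's settings.

        # Count pollen grains in a digital image by connected components.
        from scipy.ndimage import median_filter
        from skimage import measure, morphology
        from skimage.filters import threshold_otsu
        from skimage.io import imread

        def count_pollen(path, min_area=30):
            img = imread(path, as_gray=True)
            img = median_filter(img, size=3)             # suppress noise
            mask = img > threshold_otsu(img)             # grains vs background
            mask = morphology.remove_small_objects(mask, min_size=min_area)
            return measure.label(mask).max()             # number of grains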

  18. A novel image enhancement algorithm based on stationary wavelet transform for infrared thermography to the de-bonding defect in solid rocket motors

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Zhang, Wei; Yan, Shaoze

    2015-10-01

    In this paper, a multi-scale image enhancement algorithm based on low-pass filtering and nonlinear transformation is proposed for infrared testing images of de-bonding defects in solid propellant rocket motors. Infrared testing images with high-level noise and low contrast are the foundation for identifying defects and calculating defect size. In order to improve the quality of the infrared image, according to the distribution properties of the detection image and within the framework of the stationary wavelet transform, the approximation coefficients at a suitable decomposition level are processed by low-pass filtering using the Fourier transform; after that, a nonlinear transformation is applied to further improve the image contrast. To verify the validity of the algorithm, the image enhancement algorithm is applied to infrared testing pictures of two specimens with de-bonding defects, one made of a type of high-strength steel and the other of a type of carbon fiber composite. As the results show, in the images processed by the proposed image enhancement algorithm most of the noise is eliminated and the contrast between defect areas and normal areas is improved greatly; in addition, from the binarized version of the processed image, continuous defect edges can be extracted, all of which demonstrates the validity of the algorithm. The paper provides a well-performing image enhancement algorithm for infrared thermography.
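    The SWT-plus-filtering recipe can be sketched with PyWavelets. In the code below, the approximation band at the coarsest level is low-pass filtered in the Fourier domain, and a gamma mapping supplies the nonlinear contrast transformation; the wavelet, cutoff, and gamma values are our assumptions, and the image side lengths must be divisible by 2**level for the SWT.

        # Stationary wavelet transform, FFT low-pass on the approximation,
        # inverse SWT, then a nonlinear (gamma) contrast stretch.
        import numpy as np
        import pywt

        def enhance_ir(img, level=2, cutoff=0.15, gamma=0.7):
            coeffs = pywt.swt2(img, wavelet='db4', level=level)
            cA, details = coeffs[0]            # coarsest level comes first
            F = np.fft.fftshift(np.fft.fft2(cA))
            ky, kx = np.indices(F.shape)
            r = np.hypot(ky - F.shape[0] / 2, kx - F.shape[1] / 2)
            F[r > cutoff * max(F.shape)] = 0   # ideal low-pass filter
            cA = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
            coeffs[0] = (cA, details)
            out = pywt.iswt2(coeffs, wavelet='db4')
            out = (out - out.min()) / (np.ptp(out) + 1e-12)
            return out ** gamma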

  19. Optical sectioning in wide-field microscopy obtained by dynamic structured light illumination and detection based on a smart pixel detector array.

    PubMed

    Mitić, Jelena; Anhut, Tiemo; Meier, Matthias; Ducros, Mathieu; Serov, Alexander; Lasser, Theo

    2003-05-01

    Optical sectioning in wide-field microscopy is achieved by illumination of the object with a continuously moving single-spatial-frequency pattern and detecting the image with a smart pixel detector array. This detector performs an on-chip electronic signal processing that extracts the optically sectioned image. The optically sectioned image is directly observed in real time without any additional postprocessing.
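    Although the demodulation here happens on-chip, the underlying sectioning arithmetic is the textbook structured-illumination formula, sketched below for the related three-phase variant (images taken at pattern phases 0°, 120°, 240°); this illustrates the principle only and is not the smart-pixel implementation.

        # Three-phase structured-illumination sectioning: the root-sum-square
        # of pairwise differences keeps only the modulated (in-focus) light,
        # up to a constant scale factor.
        import numpy as np

        def sectioned(i1, i2, i3):
            return np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)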

  20. 3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy

    PubMed Central

    Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.

    2017-01-01

    The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312

  1. IEEE International Symposium on Biomedical Imaging.

    PubMed

    2017-01-01

    The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative of the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials and a scientific program composed of plenary talks, invited special sessions, challenges, and oral and poster presentations of peer-reviewed papers. High-quality papers containing original contributions to the topics of interest are requested, including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the IEEE symposium proceedings and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.

  2. Design of polarization imaging system based on CIS and FPGA

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding

    2008-02-01

    As polarization is an important characteristic of light, polarization image detection is a new image detection technology combining polarimetry and image processing. In contrast to traditional image detection based on ray radiation, polarization image detection can acquire important information that traditional image detection cannot, and it will be widely used in both civilian and military fields. Because polarization image detection can solve problems that traditional image detection cannot, it has been researched widely around the world. The paper first introduces the physical theory of polarization image detection, and then focuses on image collection and polarization image processing based on CIS (CMOS image sensor) and FPGA. The polarization imaging system comprises hardware and software parts. The hardware includes the CMOS image sensor drive module, the VGA display module, the SRAM access module, and the FPGA-based real-time image data collection system. The circuit diagram and PCB were designed. Stokes vector and polarization angle computation methods are analyzed in the software part; the floating-point multiplication in the Stokes vector computation is optimized into shift and addition operations only. Experimental results show that the real-time image collection system can collect and display image data from the CMOS image sensor in real time.
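    The Stokes arithmetic mentioned above is simple enough to show directly. The Python sketch below assumes four intensity images taken behind a linear polarizer at 0°, 45°, 90°, and 135°; on the FPGA the floating-point scaling is replaced by shift-and-add operations, whereas plain array arithmetic is shown here.

        # Linear Stokes parameters, angle and degree of linear polarization.
        import numpy as np

        def stokes(i0, i45, i90, i135):
            s0 = i0 + i90                                # total intensity
            s1 = i0 - i90                                # 0° vs 90°
            s2 = i45 - i135                              # 45° vs 135°
            aop = 0.5 * np.arctan2(s2, s1)               # polarization angle
            dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-12)
            return s0, s1, s2, aop, dolp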

  3. Robustness analysis of superpixel algorithms to image blur, additive Gaussian noise, and impulse noise

    NASA Astrophysics Data System (ADS)

    Brekhna, Brekhna; Mahmood, Arif; Zhou, Yuanfeng; Zhang, Caiming

    2017-11-01

    Superpixels have gradually become popular in computer vision and image processing applications. However, no comprehensive study has been performed to evaluate the robustness of superpixel algorithms with regard to common forms of noise in natural images. We evaluated the robustness of 11 recently proposed algorithms to different types of noise. The images were corrupted with various degrees of Gaussian blur, additive white Gaussian noise, and impulse noise that either weakened the object boundaries or added extra information to them. We performed a robustness analysis of simple linear iterative clustering (SLIC), Voronoi Cells (VCells), flooding-based superpixel generation (FCCS), bilateral geodesic distance (Bilateral-G), superpixel via geodesic distance (SSS-G), manifold SLIC (M-SLIC), Turbopixels, superpixels extracted via energy-driven sampling (SEEDS), lazy random walk (LRW), real-time superpixel segmentation by DBSCAN clustering, and video supervoxels using partially absorbing random walks (PARW) algorithms. The evaluation process was carried out both qualitatively and quantitatively. For quantitative performance comparison, we used achievable segmentation accuracy (ASA), compactness, under-segmentation error (USE), and boundary recall (BR) on the Berkeley image database. The results demonstrated that all algorithms suffered performance degradation due to noise. For Gaussian blur, Bilateral-G exhibited optimal results for the ASA and USE measures, SLIC yielded optimal compactness, whereas FCCS and DBSCAN remained optimal for BR. For additive Gaussian and impulse noises, FCCS exhibited optimal results for ASA, USE, and BR, whereas Bilateral-G remained a close competitor in ASA and USE for Gaussian noise only. Additionally, Turbopixels demonstrated optimal compactness for both types of noise. Thus, no single algorithm was able to yield optimal results for all three types of noise across all performance measures. In conclusion, to solve real-world problems effectively, more robust superpixel algorithms must be developed.
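    An experiment of the kind reported above is easy to reproduce in outline. The sketch below corrupts an image, runs SLIC, and scores under-segmentation error against a ground-truth labeling; USE definitions vary across papers, so the variant below and all parameters are illustrative choices.

        # Corrupt -> segment -> score: a minimal robustness check for SLIC.
        import numpy as np
        from skimage.segmentation import slic
        from skimage.util import random_noise

        def use_score(superpixels, gt):
            """Fraction of pixels leaking outside each superpixel's best region."""
            err = 0
            for sp in np.unique(superpixels):
                mask = superpixels == sp
                overlaps = np.bincount(gt[mask].ravel())
                err += mask.sum() - overlaps.max()
            return err / gt.size

        def robustness(img, gt, sigma=0.05):
            """img: RGB image; gt: integer ground-truth label map."""
            noisy = random_noise(img, mode='gaussian', var=sigma**2)
            sp_clean = slic(img, n_segments=400, start_label=0)
            sp_noisy = slic(noisy, n_segments=400, start_label=0)
            return use_score(sp_clean, gt), use_score(sp_noisy, gt)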

  4. Crater monitoring through social media observations

    NASA Astrophysics Data System (ADS)

    Gialampoukidis, I.; Vrochidis, S.; Kompatsiaris, I.

    2017-09-01

    We have collected, on average, more than one lunar image every two days from social media observations. The collected images have been clustered into two main groups of lunar images, with an additional cluster (noise) containing pictures that could not be assigned to any cluster. The proposed lunar image clustering process provides two classes of lunar pictures at different zoom levels: the first shows a clear view of craters grouped into one cluster, and the second shows a complete view of the Moon at various phases that are correlated with the crawling date. The clustering stage is unsupervised, so new topics can be detected on the fly. We have provided additional sources of planetary images using crowdsourced information, which is associated with metadata such as time, text, location, links to other users, and other related posts. This content carries crater information that can be fused with other planetary data to enhance crater monitoring.

  5. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), showed that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting processes [11], changes of the polar caps [12] and impact cratering processes [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can affect the global heat balance and the circulation of winds, which can result in further surface changes [14,15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with focus on the surface morphology, the geology and mineralogy, the role of liquid water on the surface and in the atmosphere, and volcanism, as well as on the proposed climate change throughout Martian history, and it has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data. However, while previews of the images are available, there is no possibility to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting the surface changes that occurred between two or more images.

  7. Fine-resolution imaging of solar features using Phase-Diverse Speckle

    NASA Technical Reports Server (NTRS)

    Paxman, Richard G.

    1995-01-01

    Phase-diverse speckle (PDS) is a novel imaging technique intended to overcome the degrading effects of atmospheric turbulence on fine-resolution imaging. As its name suggests, PDS is a blend of phase-diversity and speckle-imaging concepts. PDS reconstructions on solar data were validated by simulation, by demonstrating internal consistency of PDS estimates, and by comparing PDS reconstructions with those produced by well-accepted speckle-imaging processing. Several sources of error in data collected with the Swedish Vacuum Solar Telescope (SVST) were simulated: CCD noise, quantization error, image misalignment, and defocus error, as well as atmospheric turbulence model error. The simulations demonstrate that fine-resolution information can be reliably recovered out to at least 70% of the diffraction limit without significant introduction of image artifacts. Additional confidence in the SVST restoration is obtained by comparing its spatial power spectrum with previously published power spectra derived from both space-based images and Earth-based images corrected with traditional speckle-imaging techniques; the shape of the spectrum is found to match the previous measurements well. In addition, the imagery is found to be consistent with, but slightly sharper than, imagery reconstructed with accepted speckle-imaging techniques.

  8. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel re-examination, and a cross-band filter method are applied in sequence for cloud statistic determination. For post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistics are first determined via pre-processing analysis, and the correctness of the cloud statistics for the different spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
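    The thresholding stage that the comparison singles out is compact in code. The sketch below applies Otsu's method to a single band to produce a cloud mask and percentage; the band choice and the assumption that clouds form the brighter class are ours, and the K-means, Sobel, and fractal stages of the full ACCA method are omitted.

        # Otsu-based cloud masking of one spectral band.
        from skimage.filters import threshold_otsu

        def cloud_statistic(band):
            """band: 2-D reflectance array; returns (mask, percent cover)."""
            mask = band > threshold_otsu(band)
            return mask, 100.0 * mask.mean()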

  9. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high-performance video image processor has been implemented that is capable of grouping contiguous pixels of a raster-scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real-time processing of video images with pixel rates of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial-link communications channel with no additional hardware. The full-custom VLSI processor was implemented in a 1.6 μm CMOS process and measures 7200 μm on a side.
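    What the chip computes in streaming hardware can be sketched in software with a connected-component pass. The Python code below uses scipy's labeling rather than the chip's single-pass grouping algorithm, so it is a functional illustration only.

        # Group contiguous pixels into objects and return one centroid each.
        import numpy as np
        from scipy import ndimage

        def centroids(frame, threshold=128):
            """frame: 2-D intensity image; returns [(row, col), ...]."""
            binary = frame >= threshold
            labels, n = ndimage.label(binary)      # contiguous-pixel grouping
            return ndimage.center_of_mass(binary, labels, range(1, n + 1))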

  10. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images): $3.50; (2) Each additional image: $.10; (3) Each typewritten page: $3.50; (4) Certification and validation with...

  11. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images): $3.50; (2) Each additional image: $.10; (3) Each typewritten page: $3.50; (4) Certification and validation with...

  12. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images): $3.50; (2) Each additional image: $.10; (3) Each typewritten page: $3.50; (4) Certification and validation with...

  13. 32 CFR 286.30 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... hourly rates). (2) Computer search is based on the total cost of the central processing unit, input... made by Components at the following rates: (1) Minimum charge for office copy (up to six images): $3.50; (2) Each additional image: $.10; (3) Each typewritten page: $3.50; (4) Certification and validation with...

  14. Noise Power Spectrum in PROPELLER MR Imaging.

    PubMed

    Ichinoseki, Yuki; Nagasaka, Tatsuo; Miyamoto, Kota; Tamura, Hajime; Mori, Issei; Machida, Yoshio

    2015-01-01

    The noise power spectrum (NPS), an index for noise evaluation, represents the frequency characteristics of image noise. We measured the NPS in PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) magnetic resonance (MR) imaging, a nonuniform data sampling technique, as an initial study for practical MR image evaluation using the NPS. The 2-dimensional (2D) NPS reflected the k-space sampling density and showed agreement with the shape of the k-space trajectory as expected theoretically. Additionally, the 2D NPS allowed visualization of a part of the image reconstruction process, such as filtering and motion correction.
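    The standard estimator behind such a measurement is short enough to show. The sketch below computes a 2D NPS from repeated acquisitions of a uniform object: subtract the ensemble mean to remove structure, Fourier transform each residual, and average the squared magnitudes with pixel-area normalization. Details specific to PROPELLER reconstruction are not included.

        # 2D noise power spectrum from a stack of repeated noise images.
        import numpy as np

        def nps_2d(noise_stack, pixel_mm=1.0):
            """noise_stack: (n_images, ny, nx) repeated acquisitions."""
            resid = noise_stack - noise_stack.mean(axis=0)
            n, ny, nx = resid.shape
            spectra = np.fft.fftshift(np.fft.fft2(resid), axes=(-2, -1))
            return (np.abs(spectra)**2).mean(axis=0) * pixel_mm**2 / (ny * nx)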

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, S. F.; Izumi, N.; Glenn, S.

    At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. Here, for implosions with temperatures above ~4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.

  16. Hybrid imaging: a quantum leap in scientific imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark V.

    2004-01-01

    ImagerLabs has advanced its patented next-generation imaging technology, called the Hybrid Imaging Technology (HIT), which offers scientific-quality performance. The key to HIT is the merging of the CCD and CMOS technologies through hybridization rather than process integration. HIT offers the exceptional QE, fill factor, broad spectral response, and very low noise of the CCD. In addition, it provides the very high-speed readout, low power, high linearity, and high integration capability of CMOS sensors. In this work, we present the benefits of this technology and report the latest advances in its performance.

  17. Real-Time Symbol Extraction From Grey-Level Images

    NASA Astrophysics Data System (ADS)

    Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.

    1988-04-01

    A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real time is presented. This processor, performing 3 giga-operations per second, uses large-kernel convolvers and new nonlinear neighbourhood processing algorithms to compute true one-pixel-wide, noise-free contours without thresholding, even from grey-level images with widely varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.

  18. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, gradient-based motion correction, is proposed that significantly reduces ghosting and blurring artifacts due to subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image, as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
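    The cost function at the heart of the search is the entropy of spatial gradients, which is easy to state in code. The sketch below shows only this sharpness metric; the rigid-motion model applied in k-space and the gradient-based optimizer wrapped around it are not reproduced.

        # Entropy of the spatial gradient magnitude: lower = sharper image.
        import numpy as np

        def gradient_entropy(img):
            gy, gx = np.gradient(img.astype(float))
            g = np.hypot(gx, gy).ravel()
            p = g / (g.sum() + 1e-12)        # normalize to a distribution
            p = p[p > 0]
            return -np.sum(p * np.log(p))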

  19. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, ranging from optical metrology to machine vision on the factory floor and in robotics applications.

  20. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block of a typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
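    For intuition, a kernel of the general family described (the abstract names amplitude, angular frequency, standard deviation, and duration) can be written as a windowed Gabor-like function. The exact functional form is the paper's; the sketch below is only an assumed illustration, with a phase term as the hypothetical fifth parameter.

        # Hypothetical five-parameter interpolation kernel (illustrative form).
        import numpy as np

        def kernel(t, amp, omega, sigma, duration, phase=0.0):
            k = amp * np.exp(-t**2 / (2 * sigma**2)) * np.cos(omega * t + phase)
            return np.where(np.abs(t) <= duration / 2, k, 0.0)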

  1. Fast automatic delineation of cardiac volume of interest in MSCT images

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Lessick, Jonathan; Lavi, Guy; Bulow, Thomas; Renisch, Steffen

    2004-05-01

    Computed Tomography Angiography (CTA) is an emerging modality for assessing cardiac anatomy. The delineation of the cardiac volume of interest (VOI) is a pre-processing step for subsequent visualization or image processing. It serves to suppress anatomic structures that are not in the primary focus of the cardiac application, such as the sternum, ribs, spinal column, descending aorta and pulmonary vasculature. These structures obliterate standard visualizations such as direct volume renderings or maximum intensity projections. In addition, the outcome and performance of post-processing steps such as ventricle suppression, coronary artery segmentation or the detection of the short and long axes of the heart can be improved. The structures that are part of the cardiac VOI (coronary arteries and veins, myocardium, ventricles and atria) differ tremendously in appearance. In addition, there is no clear image feature associated with the contour (or rather cut-surface) distinguishing the cardiac VOI from surrounding tissue, making automatic delineation of the cardiac VOI a difficult task. In a first step, the presented approach locates the chest wall and descending aorta in all image slices, giving a rough estimate of the location of the heart. In a second step, a Fourier-based active contour approach delineates the border of the cardiac VOI slice by slice. The algorithm has been evaluated on 41 multi-slice CT data sets, including cases with coronary stents and venous and arterial bypasses. The typical processing time amounts to 5-10 s on a 1 GHz P3 PC.

  2. Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: a review

    PubMed Central

    Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong

    2017-01-01

    Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
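
    For readers unfamiliar with the filter under review, a minimal Python/NumPy sketch of the basic NLM algorithm (for additive Gaussian noise, i.e. without the CT-specific adaptations surveyed in the paper) follows; the patch size, search radius and filtering parameter h are illustrative choices, and the CT adaptations modify precisely the distance and weighting terms used here.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def nlm_denoise(img, patch=3, search=7, h=0.1):
          """Basic nonlocal means for a 2D float image (values ~[0, 1]).

          Each pixel becomes a weighted average over a search window;
          weights decay with the patch-wise squared distance."""
          pad = search
          padded = np.pad(img, pad, mode='reflect')
          out = np.zeros_like(img)
          wsum = np.zeros_like(img)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  shifted = padded[pad + dy:pad + dy + img.shape[0],
                                   pad + dx:pad + dx + img.shape[1]]
                  # Squared difference, averaged over the local patch.
                  d2 = uniform_filter((img - shifted) ** 2, size=2 * patch + 1)
                  w = np.exp(-d2 / (h * h))
                  out += w * shifted
                  wsum += w
          return out / wsum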

  3. Automated analysis of hot spot X-ray images at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Khan, S. F.; Izumi, N.; Glenn, S.; Tommasini, R.; Benedetti, L. R.; Ma, T.; Pak, A.; Kyrala, G. A.; Springer, P.; Bradley, D. K.; Town, R. P. J.

    2016-11-01

    At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. For implosions with temperatures above ˜4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
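
    The decomposition named above can be illustrated generically: sample an intensity contour of the hot-spot image as a radius r(theta) and compute its low-order Fourier coefficients, plus a Legendre fit against cos(theta) for polar asymmetries. The sketch below (Python/NumPy) illustrates such a mode fit; it is not the NIF analysis code, and the choice of contour level (e.g., a fixed fraction of peak brightness) is an assumption for illustration.

      import numpy as np

      def shape_modes(theta, r, nmax=4):
          """Fourier and Legendre mode amplitudes of a contour r(theta).

          theta: equally spaced angles in [0, 2*pi); r: contour radius."""
          # Fourier modes: r ~ a0 + sum_m (a_m cos(m*theta) + b_m sin(m*theta))
          c = np.fft.rfft(r) / len(r)
          a0 = c[0].real
          a = 2 * c[1:nmax + 1].real
          b = -2 * c[1:nmax + 1].imag
          # Legendre modes: r ~ sum_n p_n * P_n(cos(theta))
          p = np.polynomial.legendre.legfit(np.cos(theta), r, deg=nmax)
          return a0, a, b, p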

  4. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) which exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies: coding and decoding operations are entirely linear with respect to image size and entail a computational complexity 1-2 orders of magnitude lower than that of any previous high-compression technique. The visual pattern image sequence coding considered here extends all the advantages of static VPIC to the reduction of information along an additional, temporal dimension, achieving unprecedented image sequence coding performance.

  5. Automated analysis of hot spot X-ray images at the National Ignition Facility

    DOE PAGES

    Khan, S. F.; Izumi, N.; Glenn, S.; ...

    2016-09-02

    At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. Here, for implosions with temperatures above ~4keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.

  6. Automated analysis of hot spot X-ray images at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, S. F., E-mail: khan9@llnl.gov; Izumi, N.; Glenn, S.

    At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. For implosions with temperatures above ∼4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.

  7. Automated analysis of hot spot X-ray images at the National Ignition Facility.

    PubMed

    Khan, S F; Izumi, N; Glenn, S; Tommasini, R; Benedetti, L R; Ma, T; Pak, A; Kyrala, G A; Springer, P; Bradley, D K; Town, R P J

    2016-11-01

    At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. For implosions with temperatures above ∼4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.

  8. Liquid crystal thermography and true-colour digital image processing

    NASA Astrophysics Data System (ADS)

    Stasiek, J.; Stasiek, A.; Jewartowski, M.; Collins, M. W.

    2006-06-01

    In the last decade thermochromic liquid crystals (TLC) and true-colour digital image processing have been successfully used in non-intrusive technical, industrial and biomedical studies and applications. Thin coatings of TLCs at surfaces are utilized to obtain detailed temperature distributions and heat transfer rates for steady or transient processes. Liquid crystals can also be used to visualize the temperature and velocity fields in liquids by the simple expedient of directly mixing the liquid crystal material into the liquid (water, glycerol, glycol, and silicone oils) in very small quantities, to serve as thermal and hydrodynamic tracers. In biomedical situations, e.g., skin diseases, breast cancer, blood circulation and other medical applications, TLC and image processing are successfully used as an additional non-invasive diagnostic method, especially useful for screening large groups of potential patients. The history of this technique is reviewed, principal methods and tools are described and some examples are also presented.

  9. In-vivo multi-nonlinear optical imaging of a living cell using a supercontinuum light source generated from a photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Kano, Hideaki; Hamaguchi, Hiro-O.

    2006-04-01

    A supercontinuum light source generated with a femtosecond Ti:Sapphire oscillator has been used to obtain both vibrational and two-photon excitation fluorescence (TPEF) images of a living cell simultaneously at different wavelengths. Owing to the ultrabroadband spectral profile of the supercontinuum, multiple vibrational resonances have been detected through the coherent anti-Stokes Raman scattering (CARS) process. In addition to the multiplex CARS process, multiple electronic states can be excited by broadband electronic two-photon excitation with the supercontinuum, giving rise to a TPEF signal. Using a living yeast cell whose nucleus is labeled with green fluorescent protein (GFP), we have succeeded in visualizing organelles such as mitochondria, the septum, and the nucleus through the CARS and TPEF processes. The supercontinuum enables us to perform unique multi-nonlinear optical imaging through two different nonlinear optical processes.

  10. Technical Review: Microscopy and Image Processing Tools to Analyze Plant Chromatin: Practical Considerations.

    PubMed

    Baroux, Célia; Schubert, Veit

    2018-01-01

    In situ nucleus and chromatin analyses rely on microscopy imaging that benefits from versatile, efficient fluorescent probes and proteins for static or live imaging. Yet the broad choice in imaging instruments offered to the user poses orientation problems. Which imaging instrument should be used for which purpose? What are the main caveats and what are the considerations to best exploit each instrument's ability to obtain informative and high-quality images? How to infer quantitative information on chromatin or nuclear organization from microscopy images? In this review, we present an overview of common, fluorescence-based microscopy systems and discuss recently developed super-resolution microscopy systems, which are able to bridge the resolution gap between common fluorescence microscopy and electron microscopy. We briefly present their basic principles and discuss their possible applications in the field, while providing experience-based recommendations to guide the user toward best-possible imaging. In addition to raw data acquisition methods, we discuss commercial and noncommercial processing tools required for optimal image presentation and signal evaluation in two and three dimensions.

  11. An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling

    NASA Astrophysics Data System (ADS)

    Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd

    2017-10-01

    Radial-axial ring rolling is the most widely used forming process for producing seamless rings, which are applied in various industries such as the energy sector, aerospace technology and the automotive industry. Because the ring is formed simultaneously in two opposing rolling gaps, and because ring rolling is a bulk metal forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequently occurring process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. A common strategy is therefore to roll a slightly bigger ring, so that randomly occurring process errors can be compensated afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of their ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image processing system to detect the error of a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.

  12. Accelerating image recognition on mobile devices using GPGPU

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2011-01-01

    The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics processing units (GPUs) are very well suited to parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement complete, energy-efficient algorithms. The first mobile graphics accelerators with programmable pipelines are now available, enabling GPGPU implementations of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phases. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
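
    For reference, the LBP feature the paper accelerates is easy to state in code: each pixel receives an 8-bit code built by thresholding its eight neighbours against the centre value. The CPU sketch below (Python/NumPy) illustrates the feature itself; it is not the paper's OpenGL ES 2.0 shader implementation.

      import numpy as np

      def lbp8(img):
          """Basic 8-neighbour local binary pattern of a 2D grayscale image.

          Bit k of each interior pixel's code is set when the k-th
          neighbour is at least as bright as the centre pixel."""
          c = img[1:-1, 1:-1]
          # Neighbour offsets in a fixed clockwise order.
          offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros(c.shape, dtype=np.uint8)
          for k, (dy, dx) in enumerate(offs):
              n = img[1 + dy:img.shape[0] - 1 + dy,
                      1 + dx:img.shape[1] - 1 + dx]
              code |= (n >= c).astype(np.uint8) << k
          return code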

  13. Radar image processing for rock-type discrimination

    NASA Technical Reports Server (NTRS)

    Blom, R. G.; Daily, M.

    1982-01-01

    Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques include mean and variance correction on a line-by-line basis in range or azimuth to provide uniformly illuminated swaths, median-value filtering of four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of these preprocessing methods to Seasat and Landsat data, and Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to warp the radar picture to fit the Landsat image over a 90 x 90 km grid, combining Landsat color ratios with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types from known areas. Seasat additions to the Landsat data improved rock identification by 7%.
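
    The first two preprocessing steps lend themselves to a direct sketch. Assuming azimuth lines are stored as image rows, line-by-line equalization forces each line to a common mean and variance, and a small median filter suppresses speckle in multi-look imagery; the target statistics and window size below are illustrative, not values from the paper.

      import numpy as np
      from scipy.ndimage import median_filter

      def equalize_lines(img, target_mean=0.5, target_std=0.1):
          """Normalize each line (row) to a common mean and variance to
          compensate uneven illumination across the swath."""
          mu = img.mean(axis=1, keepdims=True)
          sd = img.std(axis=1, keepdims=True) + 1e-12
          return (img - mu) / sd * target_std + target_mean

      def despeckle(img, size=3):
          """Median-value filtering to suppress speckle."""
          return median_filter(img, size=size)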

  14. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out by an artificial intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  15. An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues

    PubMed Central

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
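
    As an illustration of how a genetic algorithm can tune a segmentation stage against manually traced templates, the toy sketch below evolves a single global threshold whose fitness is pixel agreement with the template. It is a deliberately stripped-down evolutionary loop (mutation-only, no crossover) over a one-parameter search space, not the paper's procedure, which tunes a full texture-based segmentation.

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(t, image, template):
          """Fraction of pixels where thresholding agrees with the
          manually traced binary template."""
          return np.mean((image > t) == template)

      def evolve_threshold(image, template, pop=20, gens=30, sigma=0.05):
          population = rng.uniform(0, 1, pop)        # candidate thresholds
          for _ in range(gens):
              scores = np.array([fitness(t, image, template) for t in population])
              parents = population[np.argsort(scores)[-pop // 2:]]   # keep best half
              children = (rng.choice(parents, pop - parents.size)
                          + rng.normal(0, sigma, pop - parents.size))  # mutate
              population = np.clip(np.concatenate([parents, children]), 0, 1)
          scores = np.array([fitness(t, image, template) for t in population])
          return population[np.argmax(scores)]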

  16. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  17. Nonlocal means-based speckle filtering for ultrasound images

    PubMed Central

    Coupé, Pierrick; Hellier, Pierre; Kervrann, Charles; Barillot, Christian

    2009-01-01

    In image processing, restoration is expected to improve the qualitative inspection of the image and the performance of quantitative image analysis techniques. In this paper, an adaptation of the nonlocal (NL-) means filter is proposed for speckle reduction in ultrasound (US) images. Although the filter was originally developed for additive white Gaussian noise, we propose to use a Bayesian framework to derive an NL-means filter adapted to a relevant ultrasound noise model. Quantitative results on synthetic data show the performance of the proposed method compared to well-established and state-of-the-art methods. Results on real images demonstrate that the proposed method is able to accurately preserve edges and structural details of the image. PMID:19482578

  18. Recent development of nanoparticles for molecular imaging

    NASA Astrophysics Data System (ADS)

    Kim, Jonghoon; Lee, Nohyun; Hyeon, Taeghwan

    2017-10-01

    Molecular imaging enables us to non-invasively visualize cellular functions and biological processes in living subjects, allowing accurate diagnosis of diseases at early stages. For successful molecular imaging, a suitable contrast agent with high sensitivity is required. To date, various nanoparticles have been developed as contrast agents for medical imaging modalities. In comparison with conventional probes, nanoparticles offer several advantages, including controllable physical properties, facile surface modification and long circulation time. In addition, they can be integrated with various combinations for multimodal imaging and therapy. In this opinion piece, we highlight recent advances and future perspectives of nanomaterials for molecular imaging. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.

  19. Micro-Slit Collimators for X-Ray/Gamma-Ray Imaging

    NASA Technical Reports Server (NTRS)

    Appleby, Michael; Fraser, Iain; Klinger, Jill

    2011-01-01

    A hybrid photochemical-machining process is coupled with precision stack lamination to allow for the fabrication of multiple ultra-high-resolution grids on a single array substrate. In addition, special fixturing and etching techniques have been developed that allow higher-resolution multi-grid collimators to be fabricated. Building on past work on a manufacturing technique for fabricating multi-grid, high-resolution coating modulation collimators for arcsecond and subarcsecond x-ray and gamma-ray imaging, the current work reduces the grid pitch by almost a factor of two, down to 22 microns. Additionally, a process was developed for thinning high-Z foils (tungsten or molybdenum) from the thinnest commercially available stock (25 microns thick) down to approximately 10 microns using precisely controlled chemical etching.

  20. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

    Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify, at a high level of abstraction, the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables efficiently combining parallel storage access routines with sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.

  1. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell

    2014-01-01

    Additive manufacturing is a rapid prototyping technology that allows parts to be built in a series of thin layers from plastics, ceramics, and metallics. Metallic additive manufacturing is an emerging form of rapid prototyping that allows complex structures to be built using various metallic powders. Significant time and cost savings have also been observed using metallic additive manufacturing compared with traditional techniques. Development of metallic additive manufacturing technology has advanced significantly over the last decade, although many of the techniques used to inspect parts made by these processes have not advanced significantly or have limitations. Several external geometry inspection techniques exist, such as coordinate measurement machines (CMM), laser scanners, structured light scanning systems, or even traditional calipers and gages. All of the aforementioned techniques are limited to external geometry and contours, or must use a contact probe to inspect limited internal dimensions. This presentation will document the development of a process for a real-time dimensional inspection technique and digital quality record of the additive manufacturing process using infrared camera imaging and processing techniques.

  2. Spectral imaging toolbox: segmentation, hyperstack reconstruction, and batch processing of spectral images for the determination of cell and model membrane lipid order.

    PubMed

    Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor

    2017-05-12

    Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling of hyperstacks, 3D reconstruction and batch processing facilitates the analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for, e.g., studies involving model membranes and surfactant-coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with the environmentally-sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, with a reliable method for membrane segmentation, and requires no programming ability. The Spectral Imaging Toolbox can be downloaded from https://uk.mathworks.com/matlabcentral/fileexchange/62617-spectral-imaging-toolbox .
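
    The GP parameter itself is a simple ratiometric quantity computed pixel-wise from two spectral channels, GP = (I_ordered - I_disordered) / (I_ordered + I_disordered). A minimal sketch follows (Python/NumPy rather than the toolbox's MATLAB); the channel names are generic placeholders, and for Laurdan the ordered and disordered emission bands are typically taken around 440 nm and 490 nm.

      import numpy as np

      def gp_map(I_ordered, I_disordered, eps=1e-12):
          """Pixel-wise generalized polarization, in [-1, 1]:
          +1 = fully ordered/packed, -1 = fully disordered/fluid."""
          I_o = I_ordered.astype(float)
          I_d = I_disordered.astype(float)
          return (I_o - I_d) / (I_o + I_d + eps)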

  3. Apodization of spurs in radar receivers using multi-channel processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.

    The various technologies presented herein relate to the identification and mitigation of spurious energies or signals (aka "spurs") in radar imaging. Spurious energy in received radar data can be a consequence of non-ideal component and circuit behavior. Such behavior can result from I/Q imbalance, nonlinear component behavior, additive interference (e.g., cross-talk), etc. The manifestation of the spurious energy in a radar image (e.g., a range-Doppler map) can be influenced by appropriate pulse-to-pulse phase modulation. Comparing multiple images which have been processed using the same data but with different signal paths and modulations enables identification of undesired spurs, with subsequent cropping or apodization of the undesired spurs from a radar image. Spurs can be identified by comparison with a threshold energy. Removal of an undesired spur enables enhanced identification of true targets in a radar image.

  4. An Automated Blur Detection Method for Histological Whole Slide Imaging

    PubMed Central

    Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2013-01-01

    Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
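
    The paper's contribution is a statistical learning method, but a simple baseline for the same task helps fix ideas: score each tile of the slide with a standard focus measure (here, the variance of the Laplacian) and flag low-scoring tiles as candidates for additional focus points. The tile size and threshold below are illustrative assumptions, not the paper's learned criterion.

      import numpy as np
      from scipy.ndimage import laplace

      def blurred_tiles(img, tile=256, threshold=1e-4):
          """Flag tiles of a grayscale slide image whose Laplacian
          variance (a common sharpness measure) is below threshold."""
          flags = []
          for y in range(0, img.shape[0] - tile + 1, tile):
              for x in range(0, img.shape[1] - tile + 1, tile):
                  t = img[y:y + tile, x:x + tile].astype(float)
                  if laplace(t).var() < threshold:
                      flags.append((y, x))   # candidate for a new focus point
          return flags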

  5. The Optimized Fabrication of Nanobubbles as Ultrasound Contrast Agents for Tumor Imaging.

    PubMed

    Cai, Wen Bin; Yang, Heng Li; Zhang, Jian; Yin, Ji Kai; Yang, Yi Lin; Yuan, Li Jun; Zhang, Li; Duan, Yun You

    2015-09-03

    Nanobubbles, which have the potential for ultrasonic targeted imaging and treatment in tumors, have been a research focus in recent years. With the current methods, however, the prepared uniformly sized nanobubbles either undergo post-formulation manipulation, such as centrifugation, after the mixture of microbubbles and nanobubbles, or require the addition of amphiphilic surfactants. These processes influence the nanobubble stability, possibly create material waste, and complicate the preparation process. In the present work, we directly prepared uniformly sized nanobubbles by modulating the thickness of a phospholipid film without the purification processes or the addition of amphiphilic surfactants. The fabricated nanobubbles from the optimal phospholipid film thickness exhibited optimal physical characteristics, such as uniform bubble size, good stability, and low toxicity. We also evaluated the enhanced imaging ability of the nanobubbles both in vitro and in vivo. The in vivo enhancement intensity in the tumor was stronger than that of SonoVue after injection (UCA; 2 min: 162.47 ± 8.94 dB vs. 132.11 ± 5.16 dB, P < 0.01; 5 min: 128.38.47 ± 5.06 dB vs. 68.24 ± 2.07 dB, P < 0.01). Thus, the optimal phospholipid film thickness can lead to nanobubbles that are effective for tumor imaging.

  6. The Optimized Fabrication of Nanobubbles as Ultrasound Contrast Agents for Tumor Imaging

    PubMed Central

    Cai, Wen Bin; Yang, Heng Li; Zhang, Jian; Yin, Ji Kai; Yang, Yi Lin; Yuan, Li Jun; Zhang, Li; Duan, Yun You

    2015-01-01

    Nanobubbles, which have the potential for ultrasonic targeted imaging and treatment in tumors, have been a research focus in recent years. With the current methods, however, the prepared uniformly sized nanobubbles either undergo post-formulation manipulation, such as centrifugation, after the mixture of microbubbles and nanobubbles, or require the addition of amphiphilic surfactants. These processes influence the nanobubble stability, possibly create material waste, and complicate the preparation process. In the present work, we directly prepared uniformly sized nanobubbles by modulating the thickness of a phospholipid film without the purification processes or the addition of amphiphilic surfactants. The fabricated nanobubbles from the optimal phospholipid film thickness exhibited optimal physical characteristics, such as uniform bubble size, good stability, and low toxicity. We also evaluated the enhanced imaging ability of the nanobubbles both in vitro and in vivo. The in vivo enhancement intensity in the tumor was stronger than that of SonoVue after injection (UCA; 2 min: 162.47 ± 8.94 dB vs. 132.11 ± 5.16 dB, P < 0.01; 5 min: 128.38.47 ± 5.06 dB vs. 68.24 ± 2.07 dB, P < 0.01). Thus, the optimal phospholipid film thickness can lead to nanobubbles that are effective for tumor imaging. PMID:26333917

  7. Thermal imaging for assessment of electron-beam freeform fabrication (EBF3) additive manufacturing deposits

    NASA Astrophysics Data System (ADS)

    Zalameda, Joseph N.; Burke, Eric R.; Hafley, Robert A.; Taminger, Karen M.; Domack, Christopher S.; Brewer, Amy; Martin, Richard E.

    2013-05-01

    Additive manufacturing is a rapidly growing field where 3-dimensional parts can be produced layer by layer. NASA's electron beam freeform fabrication (EBF3) technology is being evaluated to manufacture metallic parts in a space environment. The benefits of EBF3 technology are weight savings to support space missions, rapid prototyping in a zero gravity environment, and improved vehicle readiness. The EBF3 system is composed of 3 main components: an electron beam gun, a multi-axis positioning system, and a metallic wire feeder. The electron beam is used to melt the wire, and the multi-axis positioning system is used to build the part layer by layer. To ensure a quality deposit, a near infrared (NIR) camera is used to image the melt pool and solidification areas. This paper describes the calibration and application of a NIR camera for temperature measurement. In addition, image processing techniques are presented for deposit assessment metrics.

  8. In situ spectroradiometric quantification of ERTS data. [Prescott and Phoenix, Arizona

    NASA Technical Reports Server (NTRS)

    Yost, E. F. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Analyses of ERTS-1 photographic data were made to quantitatively relate ground reflectance measurements to the photometric characteristics of the images. Digital image processing of the photographic data resulted in a nomograph to correct for atmospheric effects over arid terrain. Optimum processing techniques to derive maximum geologic information from desert areas were established. Additive color techniques were developed to provide quantitative measurements of surface water between different orbits; these were accepted as the standard ERTS flood-mapping techniques.

  9. Selections from 2017: Image Processing with AstroImageJ

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves. Published January 2017. [Figure: the AIJ image display. A wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. Collins et al. 2017.] Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data. Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is a uniquely accessible tool for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data. Some features of AstroImageJ (as reported by Astrobites): image calibration (generate master flat, dark, and bias frames); image arithmetic (combine images via subtraction, addition, division, multiplication, etc.); stack editing (easily perform operations on a series of images); image stabilization and image alignment features; precise coordinate converters (calculate Heliocentric and Barycentric Julian Dates); WCS coordinates (determine precisely where a telescope was pointed for an image by plate-solving using Astrometry.net); macro and plugin support (write your own macros); multi-aperture photometry with interactive light curve fitting (plot light curves of a star in real time). Citation: Karen A. Collins et al 2017 AJ 153 77. doi:10.3847/1538-3881/153/2/77

  10. MTI science, data products, and ground-data processing overview

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Atkins, William H.; Balick, Lee K.; Borel, Christoph C.; Clodius, William B.; Christensen, R. Wynn; Davis, Anthony B.; Echohawk, J. C.; Galbraith, Amy E.; Hirsch, Karen L.; Krone, James B.; Little, Cynthia K.; McLachlan, Peter M.; Morrison, Aaron; Pollock, Kimberly A.; Pope, Paul A.; Novak, Curtis; Ramsey, Keri A.; Riddle, Emily E.; Rohde, Charles A.; Roussel-Dupre, Diane C.; Smith, Barham W.; Smith, Kathy; Starkovich, Kim; Theiler, James P.; Weber, Paul G.

    2001-08-01

    The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.

  11. V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.

    2011-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL), being developed at the Indian Institute of Technology Bombay, is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by the Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on the interpretation of results when performed with a given image and pre-specified options, a bibliography, links to useful internet resources, and user feedback. Users can upload their own images for performing the experiments and can also reuse the outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of the elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.

  12. Image denoising and deblurring using multispectral data

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.

    2017-05-01

    Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences together with additional data, such as volume, change in size, the behavior of a single object or a group of objects, temperature gradients, and the presence of local areas with strong differences. Security and surveillance systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for solving the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method that combines information about the objects obtained by cameras in different frequency bands. We apply a method based on the simultaneous minimization of the L2 norm and the first-order squared differences of the sequence of estimates to denoise the image and restore blur at the edges. In case of information loss, an approach is applied based on the interpolation of data taken from the analysis of objects located in other areas and on information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.
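
    The minimization just described combines an L2 data-fidelity term with a penalty on the first-order squared differences of the estimate. For a 1D signal the objective ||x - y||^2 + lam * sum_i (x[i+1] - x[i])^2 has a closed-form minimizer, which the sketch below computes by solving the normal equations; lam is a free smoothing weight, the extension to images adds difference operators along both axes, and this is an illustration of the objective rather than the authors' implementation.

      import numpy as np

      def smooth_l2(y, lam=10.0):
          """Minimize ||x - y||^2 + lam * ||D x||^2 in closed form,
          where D is the first-difference operator; the minimizer
          solves (I + lam * D^T D) x = y."""
          n = len(y)
          D = np.diff(np.eye(n), axis=0)      # (n-1) x n difference matrix
          A = np.eye(n) + lam * (D.T @ D)
          return np.linalg.solve(A, np.asarray(y, dtype=float))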

  13. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725

  14. Research on Hartmann test for progressive addition lenses

    NASA Astrophysics Data System (ADS)

    Qin, Lin-ling; Yu, Jing-chi

    2009-05-01

    Several measurement techniques for progressive addition lenses, together with related equipment, have recently been developed worldwide; they include single-point measurement, moiré deflectometry, and Ronchi test techniques. A Hartmann test for progressive addition lenses is proposed in this article. The measurement principle of the Hartmann test for ophthalmic lenses and the power compensation of off-axis rays are introduced, and the experimental setup used to test the lenses is described. For the experimental test, a spatial filter is used to select a clean Gaussian beam, and a collimating lens with focal length f = 300 mm is used to produce a collimated beam. A Hartmann plate with a square array of holes separated by 2 mm is selected. The selection of the laser and CCD camera is critical to the accuracy of the experiment and to the image processing algorithm. Spot patterns are obtained from the CCD in the experimental tests, and the power distribution map of a lens can, in principle, be obtained from them by image processing. The results indicate that the Hartmann test for progressive addition lenses is convenient and feasible, and that its structure is simple.
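
    The central image processing step of any Hartmann test is locating the spot centroids: the displacement of each spot from its reference-grid position gives the local wavefront slope, from which the power distribution follows. A minimal centroiding sketch follows (Python/SciPy, an assumed toolchain rather than the authors'); the intensity threshold is an illustrative parameter.

      import numpy as np
      from scipy import ndimage

      def spot_centroids(img, thresh):
          """Centroids of Hartmann spots brighter than thresh.

          Displacements from the reference grid yield local wavefront
          slopes, from which the power map is computed."""
          labels, n = ndimage.label(img > thresh)
          return np.array(ndimage.center_of_mass(img, labels, range(1, n + 1)))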

  15. Three-dimensional rendering of segmented object using matlab - biomed 2010.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2010-01-01

    The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface (GUI). This process for rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
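
    The reassembly step described above (stacking the binary slices into a volume and covering the object with a surface) can be sketched compactly. The version below uses Python with scikit-image and matplotlib rather than the authors' Matlab code, so the library choice is an assumption; marching cubes plays the role of Matlab's isosurface extraction.

      import numpy as np
      import matplotlib.pyplot as plt
      from skimage import measure

      def render_stack(slices, level=0.5):
          """slices: sequence of 2D binary arrays, one per confocal slice.
          Stacks them into a volume and renders the object surface."""
          vol = np.stack(slices, axis=0).astype(float)
          verts, faces, normals, values = measure.marching_cubes(vol, level=level)
          ax = plt.figure().add_subplot(projection='3d')
          ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2],
                          triangles=faces, alpha=0.8)
          plt.show()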

  16. A novel design for scintillator-based neutron and gamma imaging in inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Geppert-Kleinrath, Verena; Cutler, Theresa; Danly, Chris; Madden, Amanda; Merrill, Frank; Tybo, Josh; Volegov, Petr; Wilde, Carl

    2017-10-01

    The LANL Advanced Imaging team has been providing reliable 2D neutron imaging of the burning fusion fuel at NIF for years, revealing possible multi-dimensional asymmetries in the fuel shape and therefore calling for additional views. Adding a passive imaging system using image plate techniques along a new polar line of sight has recently demonstrated the merit of 3D neutron image reconstruction. Now, the team is in the process of designing a new active neutron imaging system for an additional equatorial view. The design will include a gamma imaging system as well, to allow for the imaging of carbon in the ablator of the NIF fuel capsules, constraining the burning fuel shape even further. The selection of ideal scintillator materials for a position-sensitive detector system is the key component of the new design. A comprehensive study of advanced scintillators has been carried out at the Los Alamos Neutron Science Center and the OMEGA Laser Facility in Rochester, NY. Neutron radiography using a fast-gated CCD camera system delivers measurements of resolution, light output and noise characteristics. The measured performance parameters inform the novel design, from which we conclude that monolithic scintillators are preferable to their pixelated counterparts.

  17. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    PubMed

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
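
    While the exact definition of R is part of the paper's contribution, the idea of reading fiber alignment off the symmetry of the FT magnitude can be illustrated with a moment-based estimator: form the second-moment (inertia) tensor of the spectral mass and take one minus the ratio of its eigenvalues, giving 0 for an isotropic spectrum and values approaching 1 for strongly aligned fibers. The sketch below is an illustrative estimator in the same spirit, not the paper's algorithm.

      import numpy as np

      def alignment_anisotropy(img):
          """Moment-based anisotropy of the Fourier magnitude, in [0, 1]."""
          F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
          h, w = F.shape
          y, x = np.mgrid[:h, :w]
          y = y - h / 2.0
          x = x - w / 2.0
          m = F.sum()
          # Central second moments of the spectral mass.
          cxx = (F * x * x).sum() / m
          cyy = (F * y * y).sum() / m
          cxy = (F * x * y).sum() / m
          lo, hi = np.linalg.eigvalsh([[cxx, cxy], [cxy, cyy]])
          return 1.0 - lo / hi       # 0 = isotropic, -> 1 = strongly aligned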

  18. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    PubMed

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This paper is under a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and gives a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

  19. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    For the counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on this kind of system. Investigation shows that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images with good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting can be made uniform, and the colony dish can be put in the same place every time, which makes image processing easy. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information, and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
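
    A toy version of the delineation-and-exclusion chain can be written with standard tools: threshold the image, label connected components, and exclude regions below a minimum area. The threshold and area values below are illustrative, and the sketch is far simpler than the grey-level-similarity and boundary-tracing procedures the paper describes.

      import numpy as np
      from scipy import ndimage

      def count_colonies(gray, thresh=0.5, min_area=30):
          """Threshold, label connected components, and count those
          with at least min_area pixels (colony exclusion)."""
          mask = gray > thresh
          labels, n = ndimage.label(mask)
          areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          return int(np.sum(areas >= min_area))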

  20. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
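
    The bootstrap realisations at the heart of the uncertainty estimation can be sketched independently of the GPU machinery: a list-mode stream is resampled with replacement, which is equivalent to giving each event a Poisson(1) multiplicity. A minimal NumPy sketch, not the authors' GPU implementation:

      import numpy as np

      def bootstrap_listmode(events, rng):
          """One bootstrap realisation of a list-mode stream.

          events: (N, ...) array of recorded events; each event is
          repeated Poisson(1) times, preserving the count statistics."""
          weights = rng.poisson(1.0, size=len(events))
          return np.repeat(events, weights, axis=0)

      # Usage: realisation = bootstrap_listmode(events, np.random.default_rng(0))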

  1. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  2. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
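
    The user-defined sequential-loop idea can be sketched in a few lines (an illustrative stand-in, not the μAVS2 code; the filters and camera device are assumptions):

      # A user-ordered filter chain applied to each camera frame in turn;
      # each filter's output feeds the next, as in a linear sequential loop.
      import cv2

      def grayscale(f): return cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
      def smooth(f):    return cv2.GaussianBlur(f, (5, 5), 0)
      def edges(f):     return cv2.Canny(f, 50, 150)

      pipeline = [grayscale, smooth, edges]  # user-defined filter order
      cap = cv2.VideoCapture(0)              # USB camera (device 0 assumed)
      ok, frame = cap.read()
      while ok:
          for filt in pipeline:              # sequential loop over filters
              frame = filt(frame)
          # ...send the processed frame to the prosthesis (TCP/IP or RS-232)...
          ok, frame = cap.read()
      cap.release()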

  3. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future.

  4. Temporally flickering nanoparticles for compound cellular imaging and super resolution

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev

    2016-03-01

    This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity and enables the detection of overlapping types of gold nanoparticles (GNPs) at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged with time-modulated laser beams whose number matches the number of GNP types labeling the sample, exciting temporal flickering of the scattered light at known temporal frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing that extracts the spectral components corresponding to the different modulation frequencies. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm in post-processing can further improve the performance of the proposed approach.
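
    The spectral-separation step can be sketched per pixel with a Fourier transform along time (illustrative; the frame rate, modulation frequencies and data are hypothetical):

      # Given a (time, y, x) frame stack, recover an amplitude image at each
      # known modulation frequency; each image isolates one flickering GNP type.
      import numpy as np

      stack = np.random.rand(256, 64, 64)       # synthetic frame stack
      fs = 100.0                                # frame rate in Hz (assumed)
      mod_freqs = [5.0, 12.0]                   # known modulation frequencies
      spec = np.fft.rfft(stack, axis=0)
      f_axis = np.fft.rfftfreq(stack.shape[0], d=1.0 / fs)
      images = [np.abs(spec[np.argmin(np.abs(f_axis - f))]) for f in mod_freqs]
      # images[k] highlights only the GNP type modulated at mod_freqs[k]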

  5. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    PubMed Central

    Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576
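
    As a generic illustration of the kind of step such toolkits modularise (plain OpenCV here, not the PlantCV API; the file name and threshold are assumptions), green tissue can be separated from the background via the Lab colour space:

      # Segment plant tissue using the 'a' (green-magenta) channel of Lab.
      import cv2

      bgr = cv2.imread("plant.png")             # hypothetical input image
      lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
      a_channel = lab[:, :, 1]                  # green tissue has low 'a' values
      _, mask = cv2.threshold(a_channel, 120, 255, cv2.THRESH_BINARY_INV)
      print("plant pixel area:", cv2.countNonZero(mask))  # a simple shape trait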

  6. PlantCV v2: Image analysis software for high-throughput plant phenotyping.

    PubMed

    Gehan, Malia A; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C; Callen, Steven T; Chavez, Leonardo; Doust, Andrew N; Feldman, Max J; Gilbert, Kerrigan B; Hodge, John G; Hoyer, J Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony

    2017-01-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  7. Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease.

    PubMed

    Kogan, Feliks; Fan, Audrey P; Gold, Garry E

    2016-12-01

    Early detection of musculoskeletal disease leads to improved therapies and patient outcomes, and would benefit greatly from imaging at the cellular and molecular level. As it becomes clear that assessment of multiple tissues and functional processes is often necessary to study the complex pathogenesis of musculoskeletal disorders, the role of multi-modality molecular imaging becomes increasingly important. New positron emission tomography-magnetic resonance imaging (PET-MRI) systems offer to combine high-resolution MRI with simultaneous molecular information from PET to study the multifaceted processes involved in numerous musculoskeletal disorders. In this article, we aim to outline the potential clinical utility of hybrid PET-MRI for these non-oncologic musculoskeletal diseases. We summarize current applications of PET molecular imaging in osteoarthritis (OA), rheumatoid arthritis (RA), metabolic bone diseases and neuropathic peripheral pain. Advanced MRI approaches that reveal biochemical and functional information offer complementary assessment in soft tissues. Additionally, we discuss technical considerations for hybrid PET-MR imaging including MR attenuation correction, workflow, radiation dose, and quantification.

  8. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  9. PlantCV v2: Image analysis software for high-throughput plant phenotyping

    DOE PAGES

    Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; ...

    2017-12-01

    Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning.

  10. Advantages and Disadvantages in Image Processing with Free Software in Radiology.

    PubMed

    Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan

    2018-01-15

    Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and their source code is freely available, and therefore they can be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, these free applications have limitations in their use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices, basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost, which depends on the software provider and is subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.

  11. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images.

    PubMed

    Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
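
    A much-simplified version of the idea can be sketched with a global affine fit (the authors' model additionally includes drift, hysteresis and creep terms, and the lattice data below are synthetic): least-squares fit a map from observed feature positions to ideal lattice sites, then use it to correct coordinates.

      # Fit an affine map from observed to ideal lattice positions; residuals
      # after correction measure the remaining distortion.
      import numpy as np

      rng = np.random.default_rng(1)
      observed = rng.random((100, 2)) * 512                # synthetic feature positions
      distortion = np.array([[1.00, 0.02], [0.00, 0.98]])  # stand-in distortion
      ideal = observed @ distortion                        # corresponding lattice sites
      A = np.hstack([observed, np.ones((observed.shape[0], 1))])
      coef, *_ = np.linalg.lstsq(A, ideal, rcond=None)     # affine least squares
      corrected = A @ coef
      print("max residual:", np.abs(corrected - ideal).max())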

  12. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images

    NASA Astrophysics Data System (ADS)

    Yothers, Mitchell P.; Browder, Aaron E.; Bumm, Lloyd A.

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.

  13. ED breast cases and other breast emergencies.

    PubMed

    Khadem, Nasim; Reddy, Sravanthi; Lee, Sandy; Larsen, Linda; Walker, Daphne

    2016-02-01

    Patients with pathologic processes of the breast commonly present in the Emergency Department (ED). Familiarity with the imaging and management of the most common entities is essential for the radiologist. Additionally, it is important to understand the limitations of ED imaging and management in the acute setting and to recognize when referrals to a specialty breast center are necessary. The goal of this article is to review the clinical presentations, pathophysiology, imaging, and management of emergency breast cases and common breast pathology seen in the ED.

  14. WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.

    PubMed

    Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X

    2011-03-30

    We present a new registration method for whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired on separate scanners at different times, and the inherent differences in the imaging protocols produce significant nonrigid changes between the two acquisitions in addition to heterogeneous image characteristics. In this situation, we utilized both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results with nine rat image sets using the M-Hausdorff distance similarity measure. We demonstrate improved performance compared to standard methods such as demons and normalized mutual information-based non-rigid FFD registration.
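
    One weighted demons update step is easy to sketch in 2-D (illustrative only; the arrays are synthetic stand-ins, and the full method smooths and iterates the field):

      # Demons displacement update scaled by an emission-derived weight map
      # that emphasises selected regions of the moving transmission image.
      import numpy as np

      fixed  = np.random.rand(64, 64)   # reference image (synthetic)
      moving = np.random.rand(64, 64)   # moving transmission-PET slice (synthetic)
      weight = np.random.rand(64, 64)   # emission-derived emphasis map (synthetic)
      gy, gx = np.gradient(fixed)
      diff = moving - fixed
      denom = gx**2 + gy**2 + diff**2 + 1e-9
      ux = weight * diff * gx / denom   # weighted update, x component
      uy = weight * diff * gy / denom   # weighted update, y component
      # In practice ux, uy are Gaussian-smoothed and applied iteratively.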

  15. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
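
    The core of predictive lossless coding is simple to sketch (illustrative; a real codec would entropy-code the residuals, e.g. with an arithmetic coder):

      # Left-neighbour prediction: residuals are small and highly compressible,
      # and decoding inverts the prediction exactly, so no information is lost.
      import numpy as np

      img = np.random.randint(0, 256, (64, 64)).astype(np.int32)
      residual = img.copy()
      residual[:, 1:] = img[:, 1:] - img[:, :-1]  # predict each pixel from its left neighbour
      decoded = np.cumsum(residual, axis=1)       # the decoder reverses the prediction
      assert np.array_equal(decoded, img)         # numerically lossless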

  16. Faxed document image restoration method based on local pixel patterns

    NASA Astrophysics Data System (ADS)

    Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji

    1998-04-01

    A method for restoring degraded faxed document images using the patterns of pixels that construct small areas in a document is proposed. The method effectively restores faxed images containing halftone textures and/or high-density salt-and-pepper noise that degrade OCR system performance. In the halftone restoration process, white-centered 3 x 3 pixel areas in which black and white pixels alternate are first identified as halftone textures using the distribution of pixel values, and the white center pixels are then inverted to black. To remove high-density salt-and-pepper noise, the degradation is assumed to be caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise; the restored image can then be estimated using an approximation that inverts the assumed degradation process. To process degraded faxed images, the two algorithms are combined. An experiment was conducted on 24 especially poor-quality examples selected from data sets exemplifying what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.

  17. Single-Scale Fusion: An Effective Approach to Merging Images.

    PubMed

    Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C

    2017-01-01

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to only a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
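
    For orientation, the naive single-level weighted blend that both MSF and SSF refine looks like this (a sketch with synthetic inputs and an assumed contrast-based weight; the published SSF adds a correction that closely approximates the pyramid result):

      # Single-level fusion: per-pixel normalised weight maps blend the inputs
      # directly, with no pyramid decomposition.
      import numpy as np

      imgs = [np.random.rand(64, 64) for _ in range(3)]  # synthetic exposures
      eps = 1e-6
      weights = [sum(np.abs(g) for g in np.gradient(i)) + eps for i in imgs]
      total = sum(weights)
      fused = sum((w / total) * i for w, i in zip(weights, imgs))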

  18. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate usage of each input pixel for the feature-construction process avoid the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at a 120 MHz frequency with more than 325 frames per second. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving high accuracy comparable to previous work.

  19. Imaging energy landscapes with concentrated diffusing colloidal probes

    NASA Astrophysics Data System (ADS)

    Bahukudumbi, Pradipkumar; Bevan, Michael A.

    2007-06-01

    The ability to locally interrogate interactions between particles and energetically patterned surfaces provides essential information to design, control, and optimize template directed self-assembly processes. Although numerous techniques are capable of characterizing local physicochemical surface properties, no current method resolves interactions between colloids and patterned surfaces on the order of the thermal energy kT, which is the inherent energy scale of equilibrium self-assembly processes. Here, the authors describe video microscopy measurements and an inverse Monte Carlo analysis of diffusing colloidal probes as a means to image three dimensional free energy and potential energy landscapes due to physically patterned surfaces. In addition, they also develop a consistent analysis of self-diffusion in inhomogeneous fluids of concentrated diffusing probes on energy landscapes, which is important to the temporal imaging process and to self-assembly kinetics. Extension of the concepts developed in this work suggests a general strategy to image multidimensional and multiscale physical, chemical, and biological surfaces using a variety of diffusing probes (i.e., molecules, macromolecules, nanoparticles, and colloids).
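
    For a dilute probe, the map from observed positions to energies is Boltzmann inversion (a minimal sketch on synthetic data; the paper's inverse Monte Carlo analysis additionally corrects for probe-probe interactions at finite concentration):

      # Estimate an energy landscape from sampled probe positions via
      # U(x)/kT = -ln p(x), defined up to an additive constant.
      import numpy as np

      positions = np.random.normal(0.0, 1.0, 100_000)  # synthetic trajectory data
      p, edges = np.histogram(positions, bins=50, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      U_over_kT = -np.log(p + 1e-12)                   # landscape in units of kT
      U_over_kT -= U_over_kT.min()                     # zero at the global minimum
      # (centers, U_over_kT) is the recovered one-dimensional landscape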

  20. The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.

    2010-01-01

    The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design, and extending through integration and test, on-orbit operations and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards has multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used in addition to the solar diffuser to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.

  1. Linear landmark extraction in SAR images with application to augmented integrity aero-navigation: an overview to a novel processing chain

    NASA Astrophysics Data System (ADS)

    Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.

    2011-10-01

    In the context of augmented integrity Inertial Navigation Systems (INS), recent technological developments have been focusing on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks in high-resolution SAR images and can also be successfully exploited in the context of augmented integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violation of the assumptions that underlie their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm useful to remove the main false alarms, to select the most probable edge position, to reconstruct broken edges and finally to vectorize them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
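
    The RoA detector at the heart of such chains is compact (an illustrative sketch; the window size and threshold are assumptions, and the CFAR thresholding of the full chain is not shown):

      # Ratio-of-averages edge test: compare mean amplitude in two half-windows;
      # values of min(r, 1/r) well below 1 indicate a likely (vertical) edge.
      import numpy as np

      def roa_vertical(img, half=3):
          out = np.ones_like(img, dtype=float)
          for x in range(half, img.shape[1] - half):
              left  = img[:, x - half:x].mean(axis=1)
              right = img[:, x:x + half].mean(axis=1)
              r = (left + 1e-9) / (right + 1e-9)
              out[:, x] = np.minimum(r, 1.0 / r)
          return out

      sar = np.random.rand(128, 128) + 0.1  # synthetic amplitude image
      edges = roa_vertical(sar) < 0.7       # assumed decision threshold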

  2. Fast reversible wavelet image compressor

    NASA Astrophysics Data System (ADS)

    Kim, HyungJun; Li, Ching-Chung

    1996-10-01

    We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be done within a short period of time. The proposed method naturally extends from lossless compression to the lossy, high-compression range and can be easily adapted to progressive reconstruction.
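
    The flavour of shift-and-add reversible wavelets is captured by the integer Haar (S) transform (a minimal sketch; the paper uses longer spline biorthogonal filters):

      # Forward/inverse integer Haar step using only shifts and adds; the
      # inverse reconstructs the input exactly, enabling lossless coding.
      import numpy as np

      def s_forward(a, b):
          low = (a + b) >> 1           # truncated average (shift = divide by 2)
          high = a - b                 # difference
          return low, high

      def s_inverse(low, high):
          a = low + ((high + 1) >> 1)  # exact integer reconstruction
          return a, a - high

      x = np.random.randint(0, 256, 64)
      lo, hi = s_forward(x[0::2], x[1::2])
      a, b = s_inverse(lo, hi)
      assert np.array_equal(a, x[0::2]) and np.array_equal(b, x[1::2])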

  3. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.

  4. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users aiding in search and rescue missions using tools such as TomNod, to trained analysts synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with the identification of land cover and land use change. Analysts participating in this research are currently working as part of a national-level analysis of land use change and are well versed in the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts by improving their awareness of the mental processes they use during image interpretation. The study can also be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  5. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  6. Large-Scale Image Analytics Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    High resolution land cover classification maps are needed to increase the accuracy of current land ecosystem and climate model outputs. Limited studies are in place that demonstrate the state-of-the-art in deriving very high resolution (VHR) land cover products. In addition, most methods heavily rely on commercial software that is difficult to scale given the region of study (e.g. continents to the globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agriculture Imagery Program (NAIP) dataset for the Continental United States at a spatial resolution of 1 m. This data comes as image tiles (a total of a quarter million image scenes with ~60 million pixels each) and has a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted in the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the DistBelief computing model) to solve the grand challenge of scaling this process across the quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, ElastiCache shared memory architecture for image segmentation, Elastic MapReduce (EMR) for image feature extraction, and the memory-optimized Elastic Cloud Compute (EC2) for the learning algorithm.

  7. The Practical Application of Uav-Based Photogrammetry Under Economic Aspects

    NASA Astrophysics Data System (ADS)

    Sauerbier, M.; Siegrist, E.; Eisenbeiss, H.; Demir, N.

    2011-09-01

    Nowadays, small UAVs (Unmanned Aerial Vehicles) have reached a level of practical reliability and functionality that enables this technology to enter the geomatics market as an additional platform for spatial data acquisition. Though one could imagine a wide variety of interesting sensors to be mounted on such a device, here we focus on photogrammetric applications using digital cameras. In practice, UAV-based photogrammetry will only be accepted if it (a) provides the required accuracy and additional value, and (b) is competitive in terms of economic application compared to other measurement technologies. While (a) was already proven by the scientific community and results were published comprehensively during the last decade, (b) still has to be verified under real conditions. For this purpose, a test data set representing a realistic scenario provided by ETH Zurich was used to investigate cost effectiveness and to identify weak points in the processing chain that require further development. Our investigations are limited to UAVs carrying digital consumer cameras; for larger UAVs equipped with medium format cameras the situation has to be considered significantly different. Image data were acquired during flights using a microdrones MD4-1000 quadrocopter equipped with an Olympus PE-1 digital compact camera. From these images, a subset of 5 images was selected for processing in order to record the time required for the whole production chain of photogrammetric products. We see the potential of mini UAV-based photogrammetry mainly in smaller areas, up to a size of ca. 100 hectares. Larger areas can be efficiently covered by small airplanes with few images, reducing processing effort drastically. In the case of smaller areas of a few hectares only, it depends more on the products required. UAVs can be an enhancement or alternative to GNSS measurements, terrestrial laser scanning and ground-based photogrammetry. We selected the above-mentioned test data from a project featuring an area of interest within the practical range for mini UAVs. While flight planning and flight operation are already quite efficient processes, the bottlenecks identified are mainly related to image processing. Although we used specific software for image processing, the identified gaps in the processing chain today are valid for most commercial photogrammetric software systems on the market. An outlook proposing improvements towards a practicable workflow applicable in private-sector projects is given.

  8. Optical smart packaging to reduce transmitted information.

    PubMed

    Cabezas, Luisa; Tebaldi, Myrian; Barrera, John Fredy; Bolognini, Néstor; Torroba, Roberto

    2012-01-02

    We demonstrate a smart image-packaging optical technique that uses what we believe is a new concept to save byte space when transmitting data. The technique supports a large set of images mapped into modulated speckle patterns, which are then multiplexed into a single package. This operation substantially decreases the final number of bytes in the package compared with the sum of the individual images processed without the method. Besides, there are no requirements on the type of images to be processed. We present results that prove the potential of the technique.

  9. Minimisation of Signal Intensity Differences in Distortion Correction Approaches of Brain Magnetic Resonance Diffusion Tensor Imaging.

    PubMed

    Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol

    2018-04-12

    To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) data in the image registration process. To correct signal intensity differences between the b0 image and DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from DTI data, was applied before image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired through MR scanning (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the Dice similarity coefficient (DSC) values, diffusion scalar matrix, and quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and the numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, only using normalised cross correlation (NCC) showed a specific tendency toward lower values in the brain regions. Image-based distortion correction with SIMIC for DTI data would help in image analysis by accounting for signal intensity differences as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences at DTI registration. • The non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.

  10. Implementation of quality assurance in diagnostic radiology in Bosnia and Herzegovina (Republic of Srpska).

    PubMed

    Bosnjak, J; Ciraj-Bjelac, O; Strbac, B

    2008-01-01

    Application of a quality control (QC) programme is very important when optimisation of image quality and reduction of patient exposure are desired. QC surveys of diagnostic imaging equipment in the Republic of Srpska (an entity of Bosnia and Herzegovina) have been systematically performed since 2001. The presented results are mostly related to the QC test results of X-ray tubes and generators for diagnostic radiology units in 92 radiology departments. In addition, results include workplace monitoring and the usage of personal protective devices for staff and patients. The presented results showed improvements in the implementation of the QC programme within the period 2001-2005. Also, more attention is given to appropriate maintenance of imaging equipment, which was one of the main problems in the past. Implementation of a QC programme is a continuous and complex process. To achieve good performance of imaging equipment, additional tests are to be introduced, along with image quality assessment and patient dosimetry. Training is very important in order to achieve these goals.

  11. Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.

    PubMed

    Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire

    2018-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  12. Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.

    PubMed

    Ristic, Dejan; Sanchez, Humberto; Wyman, Claire

    2011-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  13. The detective quantum efficiency of photon-counting x-ray detectors using cascaded-systems analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanguay, Jesse; Yun, Seungman; School of Mechanical Engineering, Pusan National University, Jangjeon-dong, Geumjeong-gu, Busan 609-735

    Purpose: Single-photon counting (SPC) x-ray imaging has the potential to improve image quality and enable new advanced energy-dependent methods. The purpose of this study is to extend cascaded-systems analyses (CSA) to the description of image quality and the detective quantum efficiency (DQE) of SPC systems. Methods: Point-process theory is used to develop a method of propagating the mean signal and Wiener noise-power spectrum through a thresholding stage (required to identify x-ray interaction events). The new transfer relationships are used to describe the zero-frequency DQE of a hypothetical SPC detector including the effects of stochastic conversion of incident photons to secondary quanta, secondary quantum sinks, additive noise, and threshold level. Theoretical results are compared with Monte Carlo calculations assuming the same detector model. Results: Under certain conditions, the CSA approach can be applied to SPC systems with the additional requirement of propagating the probability density function describing the total number of image-forming quanta through each stage of a cascaded model. Theoretical results including DQE show excellent agreement with Monte Carlo calculations under all conditions considered. Conclusions: Application of the CSA method shows that false counts due to additive electronic noise result in both a nonlinear image signal and increased image noise. There is a window of allowable threshold values to achieve a high DQE that depends on conversion gain, secondary quantum sinks, and additive noise.

  14. A three-image algorithm for hard x-ray grating interferometry.

    PubMed

    Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia

    2013-08-12

    A three-image method to extract absorption, refraction and scattering information in hard x-ray grating interferometry is presented. The method comprises a post-processing approach that is an alternative to the conventional phase-stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
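
    With phase steps at 0, 1/3 and 2/3 of a grating period, the per-pixel stepping curve I_k = a0 + a1 cos(2 pi k/3 + phi) can be solved in closed form (a sketch on synthetic data; relating a0, phi and a1/a0 to absorption, refraction and scattering follows the usual grating-interferometry conventions):

      # Solve the three-point stepping curve exactly for each pixel.
      import numpy as np

      I = np.random.rand(3, 64, 64) + 1.0  # three synthetic phase-step images
      k = np.arange(3).reshape(3, 1, 1)
      C = np.sum(I * np.exp(-2j * np.pi * k / 3), axis=0)
      a0 = I.mean(axis=0)                  # absorption (mean) image
      a1 = 2.0 * np.abs(C) / 3.0           # modulation amplitude
      phi = np.angle(C)                    # refraction (differential phase) image
      visibility = a1 / a0                 # scattering via visibility reduction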

  15. Recent Applications of Neutron Imaging Methods

    NASA Astrophysics Data System (ADS)

    Lehmann, E.; Mannes, D.; Kaestner, A.; Grünzweig, C.

    Methodical progress in the field of neutron imaging is visible in general, though at different levels in particular labs. Consequently, access to the most suitable beam ports, the usage of advanced imaging detector systems and professional image processing have made the technique competitive with other non-destructive tools like X-ray imaging. Based on this performance gain and on new methodical approaches, several new application fields have come up, in addition to the already established ones. Accordingly, new image data are now mostly available in the third dimension, in the format of tomography volumes. The radiography mode is still the basis of neutron imaging, but the information extracted from superimposed image data (as for a grating interferometer) enables completely new insights. As a consequence, many new applications have been created.

  16. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool termed the gyrator wavelet transform to secure a fully phase image, based on an amplitude- and phase-truncation approach. The gyrator wavelet transform has four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. The tool has also been applied for simultaneous compression and encryption of an image. The system's performance, its sensitivity to the encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool that can be used in various optical information processing applications, including image encryption and image compression. The tool can also be applied to secure color, multispectral, and three-dimensional images.

  17. Quantitative imaging methods in osteoporosis.

    PubMed

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  18. High resolution imaging of objects located within a wall

    NASA Astrophysics Data System (ADS)

    Greneker, Eugene F.; Showman, Gregory A.; Trostel, John M.; Sylvester, Vincent

    2006-05-01

    Researchers at Georgia Tech Research Institute have developed a high resolution imaging radar technique that allows large sections of a test wall to be scanned in the X and Y dimensions. The resulting images provide information on what, if anything, is inside the wall. The scanning homodyne radar operates at a frequency of 24.1 GHz with an output power level of approximately 10 milliwatts. An imaging technique that has been developed is currently being used to study the detection of toxic mold on the back surface of wallboard using radar as a sensor. The moisture associated with the mold can easily be detected. In addition to mold, the technique will image objects as small as a 4 millimeter sphere on the front or rear of the wallboard and will penetrate both sides of a wall made of studs and wallboard. Signal processing is performed on the resulting data to further sharpen the image. Photos of the scanner and images produced by the scanner are presented, and the signal processing and technical challenges are discussed.

  19. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
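
    A minimal linear-processing sketch (illustrative; the dark level, flat field and white-balance gain are assumed values, and demosaicing is omitted) shows why raw pipelines preserve proportionality to scene radiance:

      # Dark subtraction, flat-field division and white balance are all linear
      # operations, so pixel values remain proportional to scene radiance.
      import numpy as np

      raw = np.random.rand(64, 64) * 4095  # synthetic raw Bayer frame
      dark = 64.0                          # mean dark level (assumed)
      flat = np.ones((64, 64))             # normalised flat field (assumed)
      linear = (raw - dark) / flat
      red_gain = 1.9                       # white-balance gain for red (assumed)
      red = linear[0::2, 0::2] * red_gain  # red sites of an assumed RGGB mosaic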

  20. Massive ovarian edema, due to adjacent appendicitis.

    PubMed

    Callen, Andrew L; Illangasekare, Tushani; Poder, Liina

    2017-04-01

    Massive ovarian edema is a benign clinical entity whose imaging findings can mimic an adnexal mass or ovarian torsion. In the setting of acute abdominal pain, identifying massive ovarian edema is key to avoiding potentially fertility-threatening surgery in young women. In addition, it is important to consider other contributing pathology when ovarian edema is secondary to another process. We present a case of a young woman presenting with subacute abdominal pain whose initial workup revealed a markedly enlarged right ovary. Further imaging, diagnostic tests, and eventually diagnostic laparoscopy revealed that the ovarian enlargement was secondary to subacute appendicitis rather than a primary adnexal process. We review the classic ultrasound and MRI findings and pitfalls related to this diagnosis.

  1. GUIs in the MIDAS environment

    NASA Technical Reports Server (NTRS)

    Ballester, P.

    1992-01-01

    MIDAS (Munich Image Data Analysis System) is the image processing system developed at ESO for astronomical data reduction. MIDAS is used for off-line data reduction at ESO and at many astronomical institutes all over Europe. In addition to a set of general commands for processing and analyzing images, catalogs, graphics and tables, MIDAS includes specialized packages dedicated to astronomical applications or to specific ESO instruments. Several graphical interfaces are available in the MIDAS environment: XHelp provides an interactive help facility, and XLong and XEchelle enable data reduction of long-slit and echelle spectra. GUI builders facilitate the development of interfaces. All ESO interfaces comply with the ESO User Interfaces Common Conventions, which ensures an identical look and feel for telescope operations, data analysis, and archives.

  2. A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.

    PubMed

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D Hall sensor array setup. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit, which displays the real-time images on a netbook. Signal processing application software was developed to acquire and process the 1-D Hall sensor array signals and construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens such as squares, circles, and triangles are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also presented to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging to identify various ferromagnetic materials.

  3. A Mobile Ferromagnetic Shape Detection Sensor Using a Hall Sensor Array and Magnetic Imaging

    PubMed Central

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a Mobile Hall Sensor Array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the Mobile Hall Sensor Array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D Hall sensor array setup. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit, which displays the real-time images on a netbook. Signal processing application software was developed to acquire and process the 1-D Hall sensor array signals and construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens such as squares, circles, and triangles are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also presented to prove the functionality of the Mobile Hall Sensor Array system for actual shape detection. The results prove that the Mobile Hall Sensor Array system is able to perform magnetic imaging to identify various ferromagnetic materials. PMID:22346653

  4. A novel configurable VLSI architecture design of window-based image processing method

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can only implement a specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations that is configurable and accounts for the image boundary. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar-function and reduction-function operations can be performed in a pipeline, which supports a variety of window-based image processing applications. Compared with other reported structures, the new structure performs similarly to some and is superior to others; in particular, compared with the systolic array processor CWP at the same frequency, the structure achieves a speed increase of approximately 12.9%. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively. Furthermore, the processing time is independent of the particular window-based algorithm mapped to the structure.
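
    As a software analogue of the configurable window operations such an architecture targets, the sketch below (plain NumPy, illustrative only) applies an arbitrary scalar/reduction function over 3x3 neighborhoods, handling image borders by edge replication rather than dropping boundary pixels:

```python
import numpy as np

def window_op(image, func, size=3):
    """Apply `func` to each size x size neighborhood of a 2-D image.

    Borders are handled by edge replication (np.pad), so the output
    keeps the input resolution and no boundary accuracy is lost.
    """
    r = size // 2
    padded = np.pad(image, r, mode="edge")
    out = np.empty_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = func(padded[y:y + size, x:x + size])
    return out

img = np.random.rand(64, 64)
blurred = window_op(img, np.mean)   # 3x3 mean filter (reduction: mean)
eroded = window_op(img, np.min)     # grayscale erosion (reduction: min)
```

    Swapping the reduction function is the software counterpart of reconfiguring the architecture's pipeline for a different window-based algorithm.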

  5. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP or Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse-filter, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
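
    As an illustration of the inverse-filter style of deconvolution mentioned above (not the platform's actual code; the regularization constant is an assumption), a frequency-domain division with a small stabilizing term looks like this:

```python
import numpy as np

def inverse_filter_deconvolve(measured, system_response, eps=1e-3):
    """Deconvolve a 1-D waveform by the system response in the frequency
    domain, regularized to avoid division-by-zero noise blowup."""
    n = len(measured)
    M = np.fft.rfft(measured, n)
    H = np.fft.rfft(system_response, n)
    # Wiener-style regularized inverse: H* / (|H|^2 + eps)
    W = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(M * W, n)
```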

  6. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification using the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach focuses on three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data as well as the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build the mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to the image geometric processing results based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287
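
    The orthogonal-polynomial attitude fitting favored here can be sketched with NumPy's Legendre basis; a minimal illustration, where the fitting degree and the use of a single Euler-angle channel are assumptions rather than the paper's exact model:

```python
import numpy as np
from numpy.polynomial import legendre

def fit_attitude(times, angles, degree=5):
    """Fit one attitude channel (e.g. an Euler angle time series) with an
    orthogonal (Legendre) polynomial basis over the acquisition window."""
    # Map times into [-1, 1], where the Legendre basis is orthogonal.
    t = 2.0 * (times - times.min()) / (times.max() - times.min()) - 1.0
    coeffs = legendre.legfit(t, angles, degree)
    return lambda tq: legendre.legval(
        2.0 * (tq - times.min()) / (times.max() - times.min()) - 1.0, coeffs)

# Usage: smooth and resample noisy roll-angle samples at imaging-line times.
times = np.linspace(0.0, 10.0, 200)
roll = 0.01 * np.sin(0.5 * times) + 1e-4 * np.random.randn(200)
roll_model = fit_attitude(times, roll)
dense_roll = roll_model(np.linspace(0.0, 10.0, 2000))
```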

  7. Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan

    2018-01-01

    For the space solar extreme ultraviolet telescope, the star point test cannot be performed in the 19.5 nm band because no sufficiently bright light source is available. In this paper, the point spread function of the optical system is calculated to evaluate the imaging performance of the telescope system. Combined with the actual surface errors produced by processes such as small grinding head machining and magnetorheological finishing, the optical design software Zemax and the data analysis software Matlab are used to directly calculate the point spread function of the space solar extreme ultraviolet telescope. Matlab code is written to generate the required surface error grid data. These surface error data are loaded onto the specified surface of the telescope system using DDE (Dynamic Data Exchange) communication, which connects Zemax and Matlab. As different processing methods lead to surface errors with different sizes, distributions, and spatial frequencies, their impact on imaging also differs. Therefore, the characteristics of the surface errors of the different machining methods are studied, their positions in the optical system are taken into account, and their influence on image quality is simulated; this is of great significance for a reasonable choice of processing technology. Additionally, we have analyzed the relationship between the surface error and the image quality evaluation. To ensure that the final machining of the mirror meets the image quality requirements, one or several evaluation methods for the surface error should be chosen according to its spatial frequency characteristics.
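
    Independently of the Zemax/Matlab toolchain used in the paper, the basic step of turning a surface error map into a PSF can be sketched with scalar Fourier optics. This is a simplified model: the mirror's wavefront error is taken as twice the surface error, and the circular pupil, grid size, and error amplitudes are illustrative:

```python
import numpy as np

lam = 19.5e-9                 # wavelength, m (19.5 nm EUV band)
n = 512
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)   # unit-radius circular pupil

# Illustrative surface error map (m): low-order ripple + small random term.
surface = 0.5e-9 * np.sin(6 * np.pi * X) + 0.1e-9 * np.random.randn(n, n)
wavefront = 2.0 * surface     # reflection doubles the surface error

# PSF is the squared modulus of the Fourier transform of the pupil function.
field = pupil * np.exp(1j * 2 * np.pi * wavefront / lam)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()

# Strehl ratio estimate: peak of aberrated PSF over peak of the ideal PSF.
ideal = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
print("Strehl ~", psf.max() / (ideal.max() / ideal.sum()))
```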

  8. How the blind "see" Braille: lessons from functional magnetic resonance imaging.

    PubMed

    Sadato, Norihiro

    2005-12-01

    What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.

  9. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and for information science in general. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. The optical propagation in strong atmospheric turbulence is used here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
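
    The paper's extractor is its own contribution, but the general idea of harvesting unbiased bits from a noisy image can be illustrated with the classic von Neumann debiasing step applied to pixel least-significant bits (a generic sketch, not the authors' algorithm):

```python
import numpy as np

def extract_bits(image):
    """Harvest unbiased bits from an 8-bit image: take pixel LSBs, then
    von Neumann debias consecutive pairs (01 -> 0, 10 -> 1, else discard)."""
    lsb = (image.flatten() & 1).astype(np.uint8)
    pairs = lsb[: len(lsb) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]          # first bit of each unequal pair

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
bits = extract_bits(frame)
print(len(bits), "bits, mean =", bits.mean())   # mean should be ~0.5
```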

  10. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous repetitive processing steps have to be performed before tomographic reconstruction. In this paper, we report a novel information retrieval method that retrieves phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of multiplication and the commutative and associative laws of addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established both in parallel beam geometry for the Talbot interferometer and in fan beam geometry for the Talbot-Lau interferometer. Numerical experiments are performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its applicability in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.
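
    The key algebraic point, that a fixed linear combination commutes with any linear reconstruction operator, can be checked numerically. A toy demonstration, with an arbitrary linear operator standing in for filtered back-projection and placeholder retrieval coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((64, 64))   # any linear operator (stand-in for FBP)
I1 = rng.standard_normal(64)        # conjugate image 1 (e.g. +pi/2 position)
I2 = rng.standard_normal(64)        # conjugate image 2 (e.g. -pi/2 position)
a, b = 0.5, -0.5                    # placeholder retrieval coefficients

retrieve_then_reconstruct = R @ (a * I1 + b * I2)
reconstruct_then_retrieve = a * (R @ I1) + b * (R @ I2)
print(np.allclose(retrieve_then_reconstruct,
                  reconstruct_then_retrieve))   # True
```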

  11. A Computer-Aided Diagnosis System for Measuring Carotid Artery Intima-Media Thickness (IMT) Using Quaternion Vectors.

    PubMed

    Kutbay, Uğurhan; Hardalaç, Fırat; Akbulut, Mehmet; Akaslan, Ünsal; Serhatlıoğlu, Selami

    2016-06-01

    This study aims to investigate adjustable distant fuzzy c-means segmentation on carotid Doppler images, as well as quaternion-based convolution filters and saliency mapping procedures. We developed imaging software that simplifies the measurement of carotid artery intima-media thickness (IMT) on saliency mapping images. Additionally, specialists evaluated the resulting images and compared them with the saliency mapping images. In the present research, we conducted imaging studies of 25 carotid Doppler images obtained by the Department of Cardiology at Fırat University. After implementing fuzzy c-means segmentation and quaternion-based convolution on all Doppler images, we obtained a model that can be analyzed easily by doctors using a bottom-up saliency model. These methods were applied to 25 carotid Doppler images and then interpreted by specialists. We used color-filtering methods to obtain carotid color images. Saliency mapping was performed on the obtained images, and the carotid artery IMT was detected and interpreted on the images produced by both methods; the raw images are shown in the Results. These results were evaluated using the mean square error (MSE) against the raw IMT images, and the best-performing method was Quaternion-Based Saliency Mapping (QBSM). MSEs of 0.0014 and 0.000191 mm² were obtained for artery lumen diameters and plaque diameters in carotid arteries, respectively. We found that computer-based image processing methods applied to carotid Doppler images could aid doctors in their decision-making process. We developed software that could ease the process of measuring carotid IMT for cardiologists and help them evaluate their findings.

  12. The Use of Multiple Data Sources in the Process of Topographic Maps Updating

    NASA Astrophysics Data System (ADS)

    Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.

    2016-06-01

    The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agency portals. Images stored in the archives of satellite missions such as Sentinel and Landsat can be downloaded free of charge. Their main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research on the use of these images for the 1:50,000-scale map. Globally available DEMs could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and digital terrain models are the main products that can be used for feature extraction and updating. On the other hand, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning the thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of portraying the earth's surface on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the digital terrain model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the 1:5,000 Topographic Reference Map, known as TOPRO5, and at the same time, through generalization and additional data sources, the Romanian 1:50,000-scale map. This paper also investigates the general perspective of the automatic use of DTM-derived products in the process of updating topographic maps.

  13. Enhancement of sun-tracking with optoelectronic devices

    NASA Astrophysics Data System (ADS)

    Wu, Jiunn-Chi

    2015-09-01

    Sun-tracking is one of the most challenging tasks in implementing CPV. In order to justify the additional complexity of sun-tracking, careful assessment of CPV performance by monitoring the performance of sun-tracking is vital. Measurement of sun-tracking accuracy is one of the important tasks in an outdoor test. This study examines techniques based on three optoelectronic devices: a position sensitive device (PSD), a CCD, and a webcam. Outdoor measurements indicated that during sunny days (global horizontal insolation (GHI) > 700 W/m2), the three devices recorded comparable tracking accuracies of 0.16˜0.3°. The method using a PSD has the fastest sampling rate and is able to detect the sun's position without additional image processing, yet it cannot identify the sunlight effectively during low insolation. The techniques using a CCD and a webcam enhance the accuracy of the sunlight centroid via the optical lens and image processing. The image quality acquired using a webcam and a CCD is comparable, but the webcam is more affordable than the CCD because it can be assembled from consumer-grade products.
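
    The sunlight-centroid computation that the CCD and webcam methods rely on can be sketched as an intensity-weighted mean over thresholded pixels (a generic illustration; the threshold value is an assumption):

```python
import numpy as np

def sun_centroid(gray, threshold=200):
    """Intensity-weighted centroid of the solar disk in an 8-bit image.
    Pixels below `threshold` are ignored to suppress sky background."""
    mask = gray >= threshold
    if not mask.any():
        return None                    # e.g. low-insolation conditions
    ys, xs = np.nonzero(mask)
    w = gray[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:240, 300:340] = 255          # synthetic solar disk
print(sun_centroid(frame))             # ~ (319.5, 219.5)
```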

  14. Imaging Systems for Size Measurements of Debrisat Fragments

    NASA Technical Reports Server (NTRS)

    Shiotani, B.; Scruggs, T.; Toledo, R.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.

    2017-01-01

    The overall objective of the DebriSat project is to provide data to update existing standard spacecraft breakup models. One of the key sets of parameters used in these models is the physical dimensions of the fragments (i.e., length, average cross-sectional area, and volume). For the DebriSat project, only fragments with at least one dimension greater than 2 mm are collected and processed. Additionally, a significant portion of the fragments recovered from the impact test are needle-like and/or flat plate-like fragments whose heights are almost negligible in comparison to their other dimensions. As a result, two fragment size categories were defined: 2D objects and 3D objects. While measurement systems are commercially available, factors such as measurement rates, system adaptability, size characterization limitations, and equipment costs presented significant challenges to the project, and a decision was made to develop our own size characterization systems. The size characterization systems consist of two automated imaging systems, one referred to as the 3D imaging system and the other as the 2D imaging system. Which imaging system to use depends on the classification of the fragment being measured. Both imaging systems utilize point-and-shoot cameras for object image acquisition and create representative point clouds of the fragments. The 3D imaging system utilizes a space-carving algorithm to generate a 3D point cloud, while the 2D imaging system utilizes an edge detection algorithm to generate a 2D point cloud. From the point clouds, the three largest orthogonal dimensions are determined using a convex hull algorithm. For 3D objects, in addition to the three largest orthogonal dimensions, the volume is computed via an alpha-shape algorithm applied to the point clouds. The average cross-sectional area is also computed for 3D objects. Both imaging systems have automated size measurements (image acquisition and image processing), driven by the need to quickly and accurately measure tens of thousands of debris fragments. Moreover, the automated size measurement reduces potential fragment damage or mishandling and improves accuracy and repeatability. As the fragment characterization progressed, it became evident that the imaging systems had to be revised. For example, an additional view was added to the 2D imaging system to capture the height of the 2D object. This paper presents the DebriSat project's imaging systems and calculation techniques in detail, from design and development to maturation. The experiences and challenges are also shared.
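
    A minimal reading of the "three largest orthogonal dimensions" computation from a point cloud might look like this, using SciPy's convex hull (an illustrative sketch, not the project's code):

```python
import numpy as np
from scipy.spatial import ConvexHull

def orthogonal_dimensions(points):
    """Longest dimension, largest extent orthogonal to it, and the extent
    orthogonal to both, from an Nx3 point cloud's convex hull vertices."""
    v = points[ConvexHull(points).vertices]
    # Dimension 1: maximum pairwise distance between hull vertices.
    d = v[:, None, :] - v[None, :, :]
    i, j = np.unravel_index(np.argmax((d**2).sum(-1)), d.shape[:2])
    u1 = v[i] - v[j]
    u1 /= np.linalg.norm(u1)
    # Dimension 2: largest extent after removing the u1 component.
    p = v - np.outer(v @ u1, u1)
    d2 = p[:, None, :] - p[None, :, :]
    k, l = np.unravel_index(np.argmax((d2**2).sum(-1)), d2.shape[:2])
    u2 = p[k] - p[l]
    u2 /= np.linalg.norm(u2)
    # Dimension 3: extent along the direction orthogonal to u1 and u2.
    u3 = np.cross(u1, u2)
    return [np.ptp(v @ u) for u in (u1, u2, u3)]

cloud = np.random.rand(500, 3) * [10.0, 4.0, 2.0]   # synthetic fragment
print(orthogonal_dimensions(cloud))                  # roughly [10, 4, 2]
```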

  15. Sequential Superresolution Imaging of Multiple Targets Using a Single Fluorophore

    PubMed Central

    Lidke, Diane S.; Lidke, Keith A.

    2015-01-01

    Fluorescence superresolution (SR) microscopy, or fluorescence nanoscopy, provides nanometer scale detail of cellular structures and allows for imaging of biological processes at the molecular level. Specific SR imaging methods, such as localization-based imaging, rely on stochastic transitions between on (fluorescent) and off (dark) states of fluorophores. Imaging multiple cellular structures using multi-color imaging is complicated and limited by the differing properties of various organic dyes including their fluorescent state duty cycle, photons per switching event, number of fluorescent cycles before irreversible photobleaching, and overall sensitivity to buffer conditions. In addition, multiple color imaging requires consideration of multiple optical paths or chromatic aberration that can lead to differential aberrations that are important at the nanometer scale. Here, we report a method for sequential labeling and imaging that allows for SR imaging of multiple targets using a single fluorophore with negligible cross-talk between images. Using brightfield image correlation to register and overlay multiple image acquisitions with ~10 nm overlay precision in the x-y imaging plane, we have exploited the optimal properties of AlexaFluor647 for dSTORM to image four distinct cellular proteins. We also visualize the changes in co-localization of the epidermal growth factor (EGF) receptor and clathrin upon EGF addition that are consistent with clathrin-mediated endocytosis. These results are the first to demonstrate sequential SR (s-SR) imaging using direct stochastic reconstruction microscopy (dSTORM), and this method for sequential imaging can be applied to any superresolution technique. PMID:25860558

  16. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing a high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded operators in software and provides the necessary support to read and display image sequences as well as video files. The user can use the previously compiled soft-operators in a high-level processing chain and code his or her own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  17. A new visual feedback-based magnetorheological haptic master for robot-assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Choi, Seung-Hyun; Kim, Soomin; Kim, Pyunghwa; Park, Jinhyuk; Choi, Seung-Bok

    2015-06-01

    In this study, we developed a novel four-degrees-of-freedom haptic master using controllable magnetorheological (MR) fluid. We also integrated the haptic master with a vision device with image processing for robot-assisted minimally invasive surgery (RMIS). The proposed master can be used in RMIS as a haptic interface to provide the surgeon with a sense of touch by using both kinetic and kinesthetic information. The slave robot, which is manipulated with a proportional-integral-derivative controller, uses a force sensor to obtain the desired forces from tissue contact, and these desired repulsive forces are then embodied through the MR haptic master. To verify the effectiveness of the haptic master, the desired force and actual force are compared in the time domain. In addition, a visual feedback system is implemented in the RMIS experiment to distinguish between the tumor and organ more clearly and provide better visibility to the operator. The hue-saturation-value (HSV) color space is adopted for the image processing since it is often more intuitive than other color spaces. The image processing and haptic feedback are then evaluated in terms of surgical performance. In this work, tumor-cutting experiments are conducted under four different operating conditions: haptic feedback on, haptic feedback off, image processing on, and image processing off. The experiments show that the performance index, which is a function of pixels, differs across the four operating conditions.
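
    HSV-based discrimination of a colored region (such as a dyed tumor phantom against surrounding tissue) is commonly done with a hue-range threshold. A minimal OpenCV sketch, with the hue bounds as placeholder values rather than the study's actual settings:

```python
import cv2
import numpy as np

def segment_hue(bgr_frame, hue_lo=35, hue_hi=85):
    """Mask pixels whose hue falls in [hue_lo, hue_hi] (OpenCV hue is 0-179).
    The default range is a placeholder for a green-dyed target."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_lo, 60, 60])   # require some saturation/brightness
    upper = np.array([hue_hi, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    return mask, int(cv2.countNonZero(mask))  # pixel count: performance index

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 100:200] = (40, 200, 40)       # synthetic green region (BGR)
mask, n_pixels = segment_hue(frame)
print(n_pixels)
```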

  18. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s). The technique is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936

  19. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
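
    A software rendition of the segment bookkeeping described here (first pixel, last pixel, sum, and x-weighted sum per run of above-threshold pixels in a row) could look like the following sketch; it illustrates the data kept per segment, not the patented hardware logic:

```python
import numpy as np

def row_segments(row, threshold):
    """Return (first_x, last_x, pixel_sum, x_weighted_sum) for each run of
    consecutive pixels in a 1-D row whose value exceeds `threshold`."""
    above = row > threshold
    # Run boundaries: 0->1 transitions start a segment, 1->0 end one.
    edges = np.diff(above.astype(np.int8))
    starts = list(np.nonzero(edges == 1)[0] + 1)
    ends = list(np.nonzero(edges == -1)[0])
    if above[0]:
        starts.insert(0, 0)
    if above[-1]:
        ends.append(len(row) - 1)
    segments = []
    for s, e in zip(starts, ends):
        vals = row[s:e + 1].astype(float)
        xs = np.arange(s, e + 1)
        segments.append((s, e, vals.sum(), (vals * xs).sum()))
    return segments

row = np.array([0, 0, 9, 12, 10, 0, 0, 7, 8, 0])
print(row_segments(row, threshold=5))
# [(2, 4, 31.0, 94.0), (7, 8, 15.0, 113.0)]
```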

  20. Prewarping techniques in imaging: applications in nanotechnology and biotechnology

    NASA Astrophysics Data System (ADS)

    Poonawala, Amyn; Milanfar, Peyman

    2005-03-01

    In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate. Finally, we provide insight into two additional applications of pre-warping techniques. First is 'e-beam lithography', used for fabricating nano-scale structures, and second is 'electronic visual prosthesis' which aims at providing limited vision to the blind by using a prosthetic retinally implanted chip capable of electrically stimulating the retinal neuron cells.
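
    The cascade model underlying the mask optimization (aerial image by convolution, then a hard threshold for the high-contrast resist) is easy to state in code. A toy forward model, with a Gaussian standing in for the real point spread function and an assumed resist threshold:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def print_pattern(mask, blur_sigma=2.0, resist_threshold=0.5):
    """Toy lithography forward model: the aerial image is the mask blurred
    by the optics (Gaussian stand-in for the point spread function), and
    the resist records wherever the aerial intensity exceeds a threshold."""
    aerial = gaussian_filter(mask.astype(float), sigma=blur_sigma)
    return (aerial > resist_threshold).astype(float)

desired = np.zeros((64, 64))
desired[20:44, 28:36] = 1.0                # target feature
printed = print_pattern(desired)           # what an unwarped mask yields
error = np.abs(printed - desired).sum()    # norm the pre-warp minimizes
print("pattern error:", error)
```

    Pre-warping then amounts to iteratively modifying `mask` (not `desired`) so that this error, plus a regularization term keeping the mask close to binary, is minimized.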

  1. Quantitative proton imaging from multiple physics processes: a proof of concept

    NASA Astrophysics Data System (ADS)

    Bopp, C.; Rescigno, R.; Rousseau, M.; Brasse, D.

    2015-07-01

    Proton imaging is developed in order to improve the accuracy of charged particle therapy treatment planning. It makes it possible to directly map the relative stopping powers of the materials using the information on the energy loss of the protons. In order to reach a satisfactory spatial resolution in the reconstructed images, the position and direction of each particle is recorded upstream and downstream from the patient. As a consequence of individual proton detection, information on the transmission rate and scattering of the protons is available. Image reconstruction processes are proposed to make use of this information. A proton tomographic acquisition of an anthropomorphic head phantom was simulated. The transmission rate of the particles was used to reconstruct a map of the macroscopic cross section for nuclear interactions of the materials. A two-step iterative reconstruction process was implemented to reconstruct a map of the inverse scattering length of the materials using the scattering of the protons. Results indicate that, while the reconstruction processes should be optimized, it is possible to extract quantitative information from the transmission rate and scattering of the protons. This suggests that proton imaging could provide additional knowledge on the materials that may be of use to further improve treatment planning.

  2. Quantitative proton imaging from multiple physics processes: a proof of concept.

    PubMed

    Bopp, C; Rescigno, R; Rousseau, M; Brasse, D

    2015-07-07

    Proton imaging is developed in order to improve the accuracy of charged particle therapy treatment planning. It makes it possible to directly map the relative stopping powers of the materials using the information on the energy loss of the protons. In order to reach a satisfactory spatial resolution in the reconstructed images, the position and direction of each particle is recorded upstream and downstream from the patient. As a consequence of individual proton detection, information on the transmission rate and scattering of the protons is available. Image reconstruction processes are proposed to make use of this information. A proton tomographic acquisition of an anthropomorphic head phantom was simulated. The transmission rate of the particles was used to reconstruct a map of the macroscopic cross section for nuclear interactions of the materials. A two-step iterative reconstruction process was implemented to reconstruct a map of the inverse scattering length of the materials using the scattering of the protons. Results indicate that, while the reconstruction processes should be optimized, it is possible to extract quantitative information from the transmission rate and scattering of the protons. This suggests that proton imaging could provide additional knowledge on the materials that may be of use to further improve treatment planning.
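
    For the transmission-based reconstruction the abstracts above describe, the line integral of the macroscopic nuclear cross section follows from a Beer-Lambert relation. A schematic ray-by-ray computation, simplified to a known uniform path length and with illustrative counts:

```python
import numpy as np

# Counts of protons sent and protons surviving (no nuclear interaction)
# along each ray through the object; illustrative numbers only.
n_in = np.array([10000, 10000, 10000], dtype=float)
n_out = np.array([9610, 9124, 9417], dtype=float)

# Beer-Lambert: n_out/n_in = exp(-integral of Sigma along the ray), so the
# projection value fed to tomographic reconstruction is -ln(transmission).
projection = -np.log(n_out / n_in)      # line integral of Sigma, per ray

path_length = 20.0                      # cm, assumed uniform for this sketch
sigma_mean = projection / path_length   # mean macroscopic cross section, 1/cm
print(sigma_mean)
```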

  3. Lateralized interactive social content and valence processing within the human amygdala

    PubMed Central

    Vrtička, Pascal; Sander, David; Vuilleumier, Patrik

    2013-01-01

    In the past, the amygdala has generally been conceptualized as a fear-processing module. Recently, however, it has been proposed to respond to all stimuli that are relevant with respect to the current needs, goals, and values of an individual. This raises the question of whether the human amygdala may differentiate between separate kinds of relevance. A distinction between emotional (vs. neutral) and social (vs. non-social) relevance is supported by previous studies showing that the human amygdala preferentially responds to both emotionally and socially significant information, and these factors might even display interactive encoding properties. However, no investigation has yet probed a full 2 (positive vs. negative valence) × 2 (social vs. non-social content) processing pattern, with neutral images as an additional baseline. Applying such an extended orthogonal factorial design, our fMRI study demonstrates that the human amygdala is (1) more strongly activated for neutral social vs. non-social information, (2) activated at a similar level when viewing social positive or negative images, but (3) displays a valence effect (negative vs. positive) for non-social images. In addition, this encoding pattern is not influenced by cognitive or behavioral emotion regulation mechanisms, and displays a hemispheric lateralization with more pronounced effects on the right side. Finally, the same valence × social content interaction was found in three additional cortical regions, namely the right fusiform gyrus, right anterior superior temporal gyrus, and medial orbitofrontal cortex. Overall, these findings suggest that valence and social content processing represent distinct kinds of relevance that interact within the human amygdala as well as in a more extensive cortical network, likely subserving a key role in relevance detection. PMID:23346054

  4. Rapid Disaster Damage Estimation

    NASA Astrophysics Data System (ADS)

    Vu, T. T.

    2012-07-01

    The experiences from recent disaster events showed that detailed information derived from high-resolution satellite images can accommodate the requirements of damage analysts and disaster management practitioners. The richer information contained in such high-resolution images, however, increases the complexity of image analysis. As a result, few image analysis solutions can be practically used under time pressure in the context of post-disaster and emergency response. To fill the gap in the employment of remote sensing in disaster response, this research develops a rapid high-resolution satellite mapping solution built upon a dual-scale contextual framework to support damage estimation after a catastrophe. The target objects are buildings (or building blocks) and their condition. On the coarse processing level, statistical region merging is deployed to group pixels into a number of coarse clusters. Based on a majority rule over vegetation, water, and shadow indices, it is possible to eliminate the irrelevant clusters; the remaining clusters likely consist of building structures and others. On the fine processing level, within each remaining cluster, smaller objects are formed using morphological analysis. Numerous indicators, including spectral, textural, and shape indices, are computed to be used in rule-based object classification. The computation time of raster-based analysis depends strongly on the image size, in other words on the number of processed pixels. Breaking the analysis into two processing levels reduces the number of processed pixels and the redundancy of processing irrelevant information. In addition, it allows a data- and task-based parallel implementation. The performance is demonstrated with QuickBird images of an area of Phanga, Thailand affected by the 2004 Indian Ocean tsunami. The developed solution will be implemented on different platforms as well as a web processing service for operational use.

  5. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules, and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that in conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is thus a potential means of measuring respiratory displacement of the target. This paper was presented at RSNA 2013; the work was carried out at Kanazawa University, Japan.
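
    Template-matching target tracking of the kind used here is standard; a minimal OpenCV sketch (illustrative parameters and synthetic frames, not the clinical software) is:

```python
import cv2
import numpy as np

def track(frames, roi):
    """Track a nodule template across fluoroscopic frames.
    `roi` = (x, y, w, h) selected on the first frame."""
    x, y, w, h = roi
    template = frames[0][y:y + h, x:x + w]
    positions = []
    for frame in frames:
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)  # best-match top-left corner
        positions.append(max_loc)
    return np.array(positions)

frames = [np.random.randint(0, 256, (128, 128), dtype=np.uint8)
          for _ in range(3)]
positions = track(frames, (40, 40, 16, 16))
# Maximum tracking error vs. a reference trajectory (e.g. expert annotation):
# err_max = np.max(np.linalg.norm(positions - reference, axis=1))
print(positions)
```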

  6. Detecting tympanostomy tubes from otoscopic images via offline and online training.

    PubMed

    Wang, Xin; Valdez, Tulio A; Bi, Jinbo

    2015-06-01

    Tympanostomy tube placement is commonly used nowadays as a surgical treatment for otitis media. Following the placement, regularly scheduled follow-ups to check the status of the tympanostomy tubes are important during the treatment. The complexity of the follow-up care lies mainly in identifying the presence and patency of the tympanostomy tube. An automated tube detection program would greatly reduce care costs and enhance the clinical efficiency of ear, nose, and throat specialists and general practitioners. In this paper, we develop a computer vision system that is able to automatically detect a tympanostomy tube in an otoscopic image of the ear drum. The system comprises an offline classifier training process followed by a real-time refinement stage performed at the point of care. The offline training process constructs a three-layer cascaded classifier, with each layer reflecting specific characteristics of the tube. The real-time refinement process enables the end users to interact with and adjust the system over time based on their otoscopic images and patient care. The support vector machine (SVM) algorithm was applied to train all of the classifiers. Empirical evaluation of the proposed system on both high-quality hospital images and low-quality internet images demonstrates the effectiveness of the system. The offline classifier, trained using 215 images, achieved 90% accuracy in classifying otoscopic images with and without a tympanostomy tube, and the real-time refinement process improved the classification accuracy by 3-5% based on an additional 20 images. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Quantitative phase imaging of arthropods

    PubMed Central

    Sridharan, Shamira; Katz, Aron; Soto-Adames, Felipe; Popescu, Gabriel

    2015-01-01

    Abstract. Classification of arthropods is performed by characterization of fine features such as setae and cuticles. An unstained whole arthropod specimen mounted on a slide can be preserved for many decades, but is difficult to study since current methods require sample manipulation or tedious image processing. Spatial light interference microscopy (SLIM) is a quantitative phase imaging (QPI) technique that is an add-on module to a commercial phase contrast microscope. We use SLIM to image a whole organism springtail Ceratophysella denticulata mounted on a slide. This is the first time, to our knowledge, that an entire organism has been imaged using QPI. We also demonstrate the ability of SLIM to image fine structures in addition to providing quantitative data that cannot be obtained by traditional bright field microscopy. PMID:26334858

  8. Fast Fourier transform-based Retinex and alpha-rooting color image enhancement

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.; Gonzales, Analysa M.

    2015-05-01

    Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing the fast Fourier transform (FFT) within the illumination estimation step of the algorithm, improving the speed at which Gaussian blurring filters are applied to the original input image. In addition, alpha-rooting can be used as a separate technique to achieve a sharper image, and its results can be fused with those of the retinex algorithm to achieve the best image possible, as shown by the values of the considered color image enhancement measure (EMEC).
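
    The FFT speedup comes from performing the Gaussian blur of the illumination estimate as a frequency-domain product. A single-scale retinex sketch along these lines, where the scale and epsilon are illustrative choices rather than the paper's settings:

```python
import numpy as np

def fft_gaussian_blur(channel, sigma):
    """Blur one image channel by multiplying its spectrum with a Gaussian
    transfer function, instead of spatial convolution (O(N log N))."""
    h, w = channel.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    gaussian_tf = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * gaussian_tf))

def single_scale_retinex(channel, sigma=80.0, eps=1e-6):
    """Retinex output = log(image) - log(blurred illumination estimate)."""
    illumination = fft_gaussian_blur(channel, sigma)
    return np.log(channel + eps) - np.log(np.abs(illumination) + eps)

# Multi-scale retinex would average single_scale_retinex over several sigmas.
img = np.random.rand(256, 256)
out = single_scale_retinex(img)
```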

  9. Noninvasive imaging of experimental lung fibrosis.

    PubMed

    Zhou, Yong; Chen, Huaping; Ambalavanan, Namasivayam; Liu, Gang; Antony, Veena B; Ding, Qiang; Nath, Hrudaya; Eary, Janet F; Thannickal, Victor J

    2015-07-01

    Small animal models of lung fibrosis are essential for unraveling the molecular mechanisms underlying human fibrotic lung diseases; additionally, they are useful for preclinical testing of candidate antifibrotic agents. The current end-point measures of experimental lung fibrosis involve labor-intensive histological and biochemical analyses. These measures fail to account for dynamic changes in the disease process in individual animals and are limited by the need for large numbers of animals for longitudinal studies. The emergence of noninvasive imaging technologies provides exciting opportunities to image lung fibrosis in live animals as often as needed and to longitudinally track the efficacy of novel antifibrotic compounds. Data obtained by noninvasive imaging provide complementary information to histological and biochemical measurements. In addition, the use of noninvasive imaging in animal studies reduces animal usage, thus satisfying animal welfare concerns. In this article, we review these new imaging modalities with the potential for evaluation of lung fibrosis in small animal models. Such techniques include micro-computed tomography (micro-CT), magnetic resonance imaging, positron emission tomography (PET), single photon emission computed tomography (SPECT), and multimodal imaging systems including PET/CT and SPECT/CT. It is anticipated that noninvasive imaging will be increasingly used in animal models of fibrosis to gain insights into disease pathogenesis and as preclinical tools to assess drug efficacy.

  10. Active imaging with the aids of polarization retrieve in turbid media system

    NASA Astrophysics Data System (ADS)

    Tao, Qiangqiang; Sun, Yongxuan; Shen, Fei; Xu, Qiang; Gao, Jun; Guo, Zhongyi

    2016-01-01

    We propose a novel active imaging method based on polarization retrieve (PR) in a turbid media system. In our simulations, a Monte Carlo (MC) algorithm is used to model the scattering process between the incident photons and the scattering particles, and an object that appears visually uniform but has different polarization characteristics in different regions is selected as the target placed in the turbid medium. Under linearly and circularly polarized illumination, the simulation results demonstrate that the corresponding polarization properties can provide additional information for the imaging, and the contrast of the polarization image is greatly enhanced compared with the simple intensity image in the turbid medium. Moreover, the polarization image adjusted by the PR method further enhances visibility and contrast. In addition, with the PR imaging method, visibility improves as particle size increases within the Mie regime, because of the increased forward scattering. In general, under the same circumstances, circular polarization images offer better contrast and visibility than linear ones. The results indicate that the PR imaging method is most applicable to scattering media with relatively large particles, such as aerosols, heavy fog, cumulus clouds, and seawater, as well as to biological tissues and blood.

  11. OSM-Classic : An optical imaging technique for accurately determining strain

    NASA Astrophysics Data System (ADS)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

    OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material being tested. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation of a test sample and, hence, strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly, providing active feedback during processing.
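
    The optical strain measurement reduces to tracking two gauge marks along the loading axis and computing engineering strain from their separation. A schematic version, with marker detection simplified to intensity centroids of two dark marks (purely illustrative, not OSM-Classic's algorithm):

```python
import numpy as np

def mark_positions(gray, dark_threshold=50):
    """Return the y-centroids of two dark gauge marks in an 8-bit image,
    assuming the marks are separated along the vertical (loading) axis."""
    ys, _ = np.nonzero(gray < dark_threshold)
    mid = np.median(ys)
    return ys[ys < mid].mean(), ys[ys >= mid].mean()

def engineering_strain(first_frame, frame):
    """Strain = (L - L0) / L0 from the mark separation in two frames."""
    y1, y2 = mark_positions(first_frame)
    L0 = abs(y2 - y1)
    y1, y2 = mark_positions(frame)
    return (abs(y2 - y1) - L0) / L0

ref = np.full((200, 50), 255, dtype=np.uint8)
ref[40:44, :] = 0; ref[140:144, :] = 0      # gauge marks 100 px apart
cur = np.full((200, 50), 255, dtype=np.uint8)
cur[38:42, :] = 0; cur[148:152, :] = 0      # marks now 110 px apart
print(engineering_strain(ref, cur))          # ~0.10 (10% strain)
```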

  12. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular neural networks (CNNs) have become a very good means of solving different kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and CNNs based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVN and UBN are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output described by the MVN, and an arbitrary (not only threshold) Boolean function (UBN). A great advantage of the CNN is the possibility of implementing any linear filter and many nonlinear filters in the spatial domain. Together with noise removal, CNNs make it possible to implement filters that amplify high and medium frequencies. Such filters are a very good means of solving the enhancement problem and the problem of extracting details against a complex background. Thus, CNNs make it possible to organize the entire processing pipeline, from filtering to extraction of the important details. The organization of this process for medical image processing is considered in the paper. Major attention is concentrated on the processing of X-ray and ultrasound images corresponding to different oncological (or close to oncological) pathologies. Additionally, we consider a new neural network structure for solving the problem of differential diagnostics of breast cancer.

  13. 70 nm resolution in subsurface optical imaging of silicon integrated-circuits using pupil-function engineering

    NASA Astrophysics Data System (ADS)

    Serrels, K. A.; Ramsay, E.; Reid, D. T.

    2009-02-01

    We present experimental evidence for the resolution-enhancing effect of an annular pupil-plane aperture when performing nonlinear imaging in the vectorial-focusing regime through manipulation of the focal spot geometry. By acquiring two-photon optical beam-induced current images of a silicon integrated-circuit using solid-immersion-lens microscopy at 1550 nm we achieved 70 nm resolution. This result demonstrates a reduction in the minimum effective focal spot diameter of 36%. In addition, the annular-aperture-induced extension of the depth-of-focus causes an observable decrease in the depth contrast of the resulting image and we explain the origins of this using a simulation of the imaging process.

  14. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding than classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  15. Incoherent Diffractive Imaging via Intensity Correlations of Hard X Rays

    NASA Astrophysics Data System (ADS)

    Classen, Anton; Ayyer, Kartik; Chapman, Henry N.; Röhlsberger, Ralf; von Zanthier, Joachim

    2017-08-01

    Established x-ray diffraction methods allow for high-resolution structure determination of crystals, crystallized protein structures, or even single molecules. While these techniques rely on coherent scattering, incoherent processes like fluorescence emission—often the predominant scattering mechanism—are generally considered detrimental for imaging applications. Here, we show that intensity correlations of incoherently scattered x-ray radiation can be used to image the full 3D arrangement of the scattering atoms with significantly higher resolution compared to conventional coherent diffraction imaging and crystallography, including additional three-dimensional information in Fourier space for a single sample orientation. We present a number of properties of incoherent diffractive imaging that are conceptually superior to those of coherent methods.

  16. Color engineering in the age of digital convergence

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1998-09-01

    Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.

  17. CT image segmentation methods for bone used in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
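
    Global thresholding, the method most of the reviewed studies used, amounts to a single Hounsfield-unit cutoff applied to the whole volume. A minimal sketch; the 400 HU bone threshold is a commonly cited value, used here as an assumption:

```python
import numpy as np

def global_threshold_bone(ct_volume_hu, threshold_hu=400):
    """Binary bone mask from a CT volume in Hounsfield units.
    One global cutoff for the whole volume; the manual post-processing the
    review mentions (removing islands, closing holes) is not shown here."""
    return ct_volume_hu >= threshold_hu

volume = np.random.normal(0, 200, size=(64, 64, 64))  # synthetic HU values
mask = global_threshold_bone(volume)
print(mask.mean())    # fraction of voxels labeled as bone
```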

  18. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamic protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil the mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analysis of the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. Although a considerable number of image-based analytical tools exist to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filament image analysis framework allows individual filaments to be extracted in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
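
    To make the line-detection stage (step ii) concrete, the sketch below applies a multi-scale ridge filter followed by Otsu thresholding. The Sato tubeness filter is an assumed stand-in for the paper's multi-scale line detector; the cartoon/texture decomposition (step i) and filament-merging (step iii) stages are omitted.

        import numpy as np
        from skimage import filters

        def filament_candidates(image):
            """Boolean mask of bright, curvilinear filament candidates."""
            ridges = filters.sato(image.astype(float), sigmas=[1, 2, 4],
                                  black_ridges=False)       # multi-scale ridge response
            return ridges > filters.threshold_otsu(ridges)  # binarize candidates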

  19. Reconstruction of biofilm images: combining local and global structural parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk

    2014-10-20

    Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.

  20. MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?

    PubMed

    Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence

    2017-09-01

    Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising and offering several speckle reduction results whose method-specific artifacts can be identified and dismissed by comparing the results.
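
    The core idea can be sketched for a single-channel intensity image: take the logarithm so multiplicative speckle becomes approximately additive, apply an off-the-shelf Gaussian denoiser, and exponentiate back. This simplified homomorphic sketch, with total-variation denoising as the assumed plug-in denoiser, is a stand-in for the idea only, not the full multi-channel MuLoG algorithm.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def homomorphic_despeckle(intensity, weight=0.1):
            log_img = np.log(np.maximum(intensity, 1e-6))    # multiplicative -> additive
            denoised = denoise_tv_chambolle(log_img, weight=weight)  # Gaussian denoiser
            return np.exp(denoised)                          # back to intensity domain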

  1. Identification tibia and fibula bone fracture location using scanline algorithm

    NASA Astrophysics Data System (ADS)

    Muchtar, M. A.; Simanjuntak, S. E.; Rahmat, R. F.; Mawengkang, H.; Zarlis, M.; Sitompul, O. S.; Winanto, I. D.; Andayani, U.; Syahputra, M. F.; Siregar, I.; Nasution, T. H.

    2018-03-01

    A fracture is a condition in which the continuity of the bone is damaged, usually caused by stress, trauma or weak bones. The tibia and fibula are two separate long bones in the lower leg, closely linked at the knee and ankle. Tibia/fibula fractures often happen when more force is applied to the bone than it can withstand. One way to identify the location of a tibia/fibula fracture is to read the X-ray image manually. Visual examination requires more time and allows for errors in identification due to noise in the image. In addition, reading an X-ray requires highlighting the background to make the objects in the image appear more clearly. Therefore, a method is required to help radiologists identify the location of tibia/fibula fractures. We propose several image-processing techniques for processing cruris images and a scanline algorithm for identifying the fracture location. The results show that our proposed method is able to identify the fracture location with up to 87.5% accuracy.
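
    A hypothetical sketch of the scanline idea: each row of a binarized cruris X-ray is scanned, the bone width on that row is measured, and abrupt row-to-row changes are flagged as candidate fracture locations. The jump threshold is illustrative; the paper's exact scanline rules are not reproduced here.

        import numpy as np

        def candidate_fracture_rows(bone_mask, jump=10):
            widths = bone_mask.sum(axis=1).astype(int)  # bone pixels per scanline
            deltas = np.abs(np.diff(widths))
            return np.nonzero(deltas > jump)[0]         # rows with abrupt width change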

  2. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

    A growing number of tools now allow live recordings of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory to interpret quantitative imaging. To fulfill this need, we have developed an open-source toolset for Fiji, BRET-Analyzer, allowing systematic analysis from image processing to ratio quantification. We share this open-source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset provides (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method applied to the image used as the denominator of the ratio to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the mean ratio intensity and its standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, resolving signals from both small and large ensembles over time. For example, we followed and quantified, live, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
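
    A condensed numerical sketch of these steps, assuming two already-aligned image stacks loaded as numpy arrays; the composite thresholding of BRET-Analyzer is simplified here to an assumed percentile threshold, and the Fiji-specific details are omitted.

        import numpy as np

        def bret_ratio(acceptor, donor, background):
            acc = acceptor.astype(float) - background  # (1) background subtraction
            don = donor.astype(float) - background
            mask = don > np.percentile(don, 75)        # (3) threshold the denominator
            ratio = np.full(acc.shape, np.nan)
            ratio[mask] = acc[mask] / don[mask]        # (4) pixel-by-pixel division
            return ratio                               # (5) mean/std over chosen ROIs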

  3. Cortical Enhanced Tissue Segmentation of Neonatal Brain MR Images Acquired by a Dedicated Phased Array Coil

    PubMed Central

    Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang

    2010-01-01

    The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for generation of a cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population atlas based segmentation methods. Results show that the proposed method segments the neonatal brain with the highest accuracy compared to the other two methods. PMID:20862268

  4. Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.

    2009-12-01

    As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.

  5. Geologic controls of erosion and sedimentation on Mars

    NASA Technical Reports Server (NTRS)

    Tanaka, K. L.; Dohm, J. M.; Carr, M. H.

    1993-01-01

    Because Mars has had a history of diverse erosional and depositional styles, a variety of erosional landforms and sedimentary deposits can be seen on Viking orbiter images. Here we review how geologic processes involving rock, water, and structure have controlled erosion and sedimentation on Mars. Additionally, we review how further studies will help refine our understanding of these processes.

  6. Medical imaging and registration in computer assisted surgery.

    PubMed

    Simon, D A; Lavallée, S

    1998-09-01

    Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics are outlined, with a focus on the basic framework and underlying technologies. In addition, technical challenges and future trends in the field are discussed.

  7. Computer vision for microscopy diagnosis of malaria.

    PubMed

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.

  8. Diagnostic value of radiological imaging pre- and post-drainage of pleural effusions.

    PubMed

    Corcoran, John P; Acton, Louise; Ahmed, Asia; Hallifax, Robert J; Psallidas, Ioannis; Wrightson, John M; Rahman, Najib M; Gleeson, Fergus V

    2016-02-01

    Patients with an unexplained pleural effusion often require urgent investigation. Clinical practice varies due to uncertainty as to whether an effusion should be drained completely before diagnostic imaging. We performed a retrospective study of patients undergoing medical thoracoscopy for an unexplained effusion. In 110 patients with paired (pre- and post-drainage) chest X-rays and 32 patients with paired computed tomography scans, post-drainage imaging did not provide additional information that would have influenced the clinical decision-making process. © 2015 Asian Pacific Society of Respirology.

  9. Thermal imaging of afterburning plumes

    NASA Astrophysics Data System (ADS)

    Ajdari, E.; Gutmark, E.; Parr, T. P.; Wilson, K. J.; Schadow, K. C.

    1989-01-01

    Afterburning and nonafterburning exhaust plumes were studied experimentally for underexpanded sonic and supersonic conical circular nozzles. The plume structure was visualized using a thermal imaging camera and conventional photography. IR emission by the plume depends mainly on the presence of afterburning. The temperature and reducing power of the exhaust gases, in addition to the nozzle configuration, determine the structure of the plume core, the location where afterburning is initiated, and its size and intensity. Comparison between single-shot and averaged thermal images of the plume shows that afterburning is a highly turbulent combustion process.

  10. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G. Patrick; Browne, Jolyon

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  11. Using satellite image-based maps and ground inventory data to estimate the area of the remaining Atlantic forest in the Brazilian state of Santa Catarina

    Treesearch

    Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti

    2013-01-01

    Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...

  12. Comments on `Area and power efficient DCT architecture for image compression' by Dhandapani and Ramachandran

    NASA Astrophysics Data System (ADS)

    Cintra, Renato J.; Bayer, Fábio M.

    2017-12-01

    In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.

  13. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and are visually more pleasing.
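
    The inverse-problem view can be sketched with an assumed observation model y = D(H(x)) + n (Gaussian blur H, subsampling D, additive noise n), whose least-squares cost is minimized iteratively. Plain gradient descent stands in here for the paper's Hopfield network, and the Gaussian point spread function is an assumption.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def super_resolve(y, scale=2, sigma=1.0, steps=200, lr=1.0):
            x = np.kron(y, np.ones((scale, scale)))  # initial high-resolution guess
            for _ in range(steps):
                residual = gaussian_filter(x, sigma)[::scale, ::scale] - y  # D(H(x)) - y
                up = np.zeros_like(x)
                up[::scale, ::scale] = residual      # adjoint of the subsampling D
                x -= lr * gaussian_filter(up, sigma) # Gaussian blur is self-adjoint
            return x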

  14. Plenoptic Ophthalmoscopy: A Novel Imaging Technique.

    PubMed

    Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason

    2016-11-01

    This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.

  15. Frequency division multiplexed multi-color fluorescence microscope system

    NASA Astrophysics Data System (ADS)

    Le, Vu Nam; Yang, Huai Dong; Zhang, Si Chun; Zhang, Xin Rong; Jin, Guo Fan

    2017-10-01

    A grayscale camera can only obtain a grayscale image of an object, while multicolor imaging technology can obtain the color information needed to distinguish sample structures that have the same shape but different colors. In fluorescence microscopy, current methods of multicolor imaging are flawed: among other problems, they reduce the efficiency of fluorescence imaging and the sampling rate of the CCD. In this paper, we propose a novel multicolor fluorescence microscopy imaging method based on frequency division multiplexing (FDM) technology, which modulates the excitation lights and demodulates the fluorescence signal in the frequency domain. This method uses periodic functions with different frequencies to modulate the amplitude of each excitation light, and then combines these beams for illumination in a fluorescence microscopy imaging system. The imaging system detects a multicolor fluorescence image with a grayscale camera. During data processing, the signal obtained by each pixel of the camera is processed with a discrete Fourier transform, decomposed by color in the frequency domain, and transformed back with an inverse discrete Fourier transform. After applying this process to the signals from all of the pixels, monochrome images of each color on the image plane are obtained and the multicolor image is acquired. Based on this method, we constructed a two-color fluorescence microscope system with two excitation wavelengths of 488 nm and 639 nm. Using this system to observe the linear movement of two kinds of fluorescent microspheres, we obtained, after data processing, a two-color dynamic fluorescence video consistent with the original image. This experiment shows that dynamic phenomena in multicolor fluorescent biological samples can be observed by this method. Compared with current methods, this method obtains the image signals of each color at the same time, and the color video's frame rate matches the frame rate of the camera. The optical system is simpler and does not need an extra color separation element. In addition, this method effectively filters out ambient light and other light signals that are not affected by the modulation process.
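
    The demodulation step can be sketched per pixel: each excitation light is amplitude-modulated at its own frequency, so the time signal recorded at every pixel carries each fluorophore's image at a distinct frequency bin. The frame rate and modulation frequencies below are illustrative assumptions, not values from the paper.

        import numpy as np

        frames = np.random.rand(256, 128, 128)  # stand-in video stack: (time, y, x)
        fps = 200.0                             # assumed camera frame rate, Hz
        f_mod = [13.0, 29.0]                    # assumed modulation frequencies, Hz

        spectrum = np.fft.rfft(frames, axis=0)  # per-pixel DFT along the time axis
        freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
        channels = [np.abs(spectrum[np.argmin(np.abs(freqs - f))]) for f in f_mod]
        # channels[i] is the demodulated monochrome image for excitation i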

  16. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images need a huge storage unit, thereby necessitating a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit them efficiently. In addition, recent studies raise concerns about the risks of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than compression techniques applied after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.

  17. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation

    PubMed Central

    Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.

    2014-01-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets, which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 were found using the LOO process to provide comparable accuracy, defined as deviating by no more than 10% from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets, tailoring imaging to less expansive flow distributions to enable even faster imaging. PMID:25071956
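
    A greedy variant of the leave-one-out pruning can be sketched as follows: repeatedly drop the exposure whose removal perturbs the fitted flows the least, stopping before deviation from the full-set flows exceeds the 10% budget. Here fit_flow is a hypothetical stand-in for the MESI model fit, not code from the paper.

        import numpy as np

        def prune_exposures(exposures, fit_flow, tol=0.10):
            reference = fit_flow(exposures)  # flows from the full exposure set
            keep = list(exposures)
            while len(keep) > 1:
                devs = []
                for i in range(len(keep)):
                    subset = keep[:i] + keep[i + 1:]
                    devs.append(np.max(np.abs(fit_flow(subset) - reference) / reference))
                best = int(np.argmin(devs))
                if devs[best] > tol:         # any further removal breaks the 10% budget
                    break
                del keep[best]
            return keep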

  18. The relationship between three-dimensional imaging and group decision making: an exploratory study.

    PubMed

    Litynski, D M; Grabowski, M; Wallace, W A

    1997-07-01

    This paper describes an empirical investigation of the effect of three-dimensional (3-D) imaging on group performance in a tactical planning task. The objective of the study is to examine the role that stereoscopic imaging can play in supporting face-to-face group problem solving and decision making, in particular the alternative generation and evaluation processes in teams. It was hypothesized that with the stereoscopic display, group members would better visualize the information concerning the task environment, producing open communication and information exchanges. The experimental setting was a tactical command and control task, and the quality of the decisions and the nature of the group decision process were investigated with three treatments: 1) noncomputerized, i.e., topographic maps with depth cues; 2) two-dimensional (2-D) imaging; and 3) stereoscopic imaging. The results on group performance were mixed. However, the groups with stereoscopic displays generated more alternatives and spent less time on evaluation. In addition, the stereoscopic decision aid did not interfere with the group problem solving and decision-making processes. The paper concludes with a discussion of potential benefits and the need to resolve demonstrated weaknesses of the technology.

  19. Measuring and imaging diffusion with multiple scan speed image correlation spectroscopy.

    PubMed

    Gröner, Nadine; Capoulade, Jérémie; Cremer, Christoph; Wachsmuth, Malte

    2010-09-27

    The intracellular mobility of biomolecules is determined by transport and diffusion as well as molecular interactions, and is crucial for many processes in living cells. Methods of fluorescence microscopy like confocal laser scanning microscopy (CLSM) can be used to characterize the intracellular distribution of fluorescently labeled biomolecules. Fluorescence correlation spectroscopy (FCS) is used to describe diffusion, transport and photo-physical processes quantitatively. As an alternative to FCS, spatially resolved measurements of mobilities can be implemented on a CLSM by utilizing the spatio-temporal information inscribed into the image by the scan process, referred to as raster image correlation spectroscopy (RICS). Here we present and discuss an extended approach, multiple scan speed image correlation spectroscopy (msICS), which benefits from the advantages of RICS, i.e. the use of widely available instrumentation and the extraction of spatially resolved mobility information, without requiring a priori knowledge of diffusion properties. In addition, msICS covers a broad dynamic range, generates correlation data comparable to FCS measurements, and allows two-dimensional maps of diffusion coefficients to be derived. We show the applicability of msICS to fluorophores in solution and to free EGFP in living cells.

  20. Quantitative image analysis for evaluating the coating thickness and pore distribution in coated small particles.

    PubMed

    Laksmana, F L; Van Vliet, L J; Hartman Kok, P J A; Vromans, H; Frijlink, H W; Van der Voort Maarschalk, K

    2009-04-01

    This study aims to develop a characterization method for coating structure based on image analysis, which is particularly promising for the rational design of coated particles in the pharmaceutical industry. The method applies the MATLAB image processing toolbox to images of coated particles taken with Confocal Laser Scanning Microscopy (CLSM). The coating thicknesses were determined along the particle perimeter, from which a statistical analysis could be performed to obtain relevant thickness properties, e.g. the minimum coating thickness and the span of the thickness distribution. The characterization of the pore structure involved a proper segmentation of pores from the coating and a granulometry operation. The presented method facilitates the quantification of the porosity, thickness and pore size distribution of a coating. These parameters are considered the important coating properties critical to coating functionality. Additionally, the effect of coating process variations on coating quality can be straightforwardly assessed. By enabling good characterization of coating quality, the presented method can be used as a fast and effective tool to predict coating functionality. This approach also enables the influence of different process conditions on coating properties to be effectively monitored, which ultimately enables process tailoring.

  1. ARC-1986-AC86-7008

    NASA Image and Video Library

    1986-01-14

    Range: 12.9 million kilometers (8.0 million miles) P-29468C This false-color Voyager photograph of Uranus shows a discrete cloud seen as a bright streak near the planet's limb. The cloud visible here is the most prominent feature seen in a series of Voyager images designed to track atmospheric motions. The occasional donut-shaped features, including one at the bottom, are shadows cast by dust on the camera optics. The picture is a highly processed composite of three images. The processing necessary to bring out the faint features on the planet also brings out these camera blemishes. The three separate images used were shot through violet, blue, and orange filters. Each color image showed the cloud to a different degree; because they were not exposed at the same time, the images were processed to provide a good spatial match. In a true-color image, the cloud would be barely discernible; the false color helps to bring out additional details. The different colors imply variations in vertical structure, but as of yet it is not possible to be specific about such differences. One possibility is that the uranian atmosphere may contain smog-like constituents, in which case some color differences may represent differences in how these molecules are distributed.

  2. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    NASA Astrophysics Data System (ADS)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.

  3. From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms.

    PubMed

    Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan

    2014-07-01

    Image denoising is a well explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper would serve as a good reference and stimulate new research ideas in image denoising.

  4. Multifocus watermarking approach based on discrete cosine transform.

    PubMed

    Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila

    2016-05-01

    The image fusion process consolidates data and information from multiple images of the same scene into a single image. Each of the source images may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that utilizes the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images. In addition, the fused image retains the best possible quality without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) the RGB colour image (input image) is split into the three channels R, G, and B for each source image. (2) The DCT algorithm is applied to each channel (R, G, and B). (3) The variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Each block of the R channel of one source image is compared with its counterpart based on the variance value, and the block with the maximum variance value is selected as the block in the new image; this process is repeated for all channels of the source images. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and all the channels are then combined to generate the fused image. The proposed technique can potentially solve the problem of unwanted side effects, such as blurring or blocking artifacts that reduce the quality of the resulting image, in the image fusion process. The proposed approach is evaluated using three measurement units: the average of Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of this proposed technique show good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
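
    The five steps translate almost directly into code for one channel of two source images, as in the sketch below; scipy's dctn/idctn stand in for the DCT implementation, and image dimensions are assumed to be multiples of 8.

        import numpy as np
        from scipy.fft import dctn, idctn

        def fuse_channel(a, b, bs=8):
            out = np.empty(a.shape)
            for i in range(0, a.shape[0], bs):
                for j in range(0, a.shape[1], bs):
                    da = dctn(a[i:i+bs, j:j+bs], norm="ortho")  # step 2: blockwise DCT
                    db = dctn(b[i:i+bs, j:j+bs], norm="ortho")
                    best = da if da.var() > db.var() else db    # steps 3-4: max variance
                    out[i:i+bs, j:j+bs] = idctn(best, norm="ortho")  # step 5: inverse DCT
            return out

        # Steps 1 and 5: apply fuse_channel to each of the R, G, and B channels of the
        # source images, then stack the three fused channels into the final image.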

  5. Thermal Imaging for Assessment of Electron-Beam Free Form Fabrication (EBF(sup 3)) Additive Manufacturing Welds

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Burke, Eric R.; Hafley, Robert A.; Taminger, Karen M.; Domack, Christopher S.; Brewer, Amy R.; Martin, Richard E.

    2013-01-01

    Additive manufacturing is a rapidly growing field where 3-dimensional parts can be produced layer by layer. NASA's electron beam free-form fabrication (EBF(sup 3)) technology is being evaluated to manufacture metallic parts in a space environment. The benefits of EBF(sup 3) technology are weight savings to support space missions, rapid prototyping in a zero gravity environment, and improved vehicle readiness. The EBF(sup 3) system is composed of 3 main components: an electron beam gun, a multi-axis positioning system, and a metallic wire feeder. The electron beam is used to melt the wire, and the multi-axis positioning system is used to build the part layer by layer. To ensure a quality weld, a near infrared (NIR) camera is used to image the melt pool and solidification areas. This paper describes the calibration and application of a NIR camera for temperature measurement. In addition, image processing techniques are presented for weld assessment metrics.

  6. In flight image processing on multi-rotor aircraft for autonomous landing

    NASA Astrophysics Data System (ADS)

    Henry, Richard, Jr.

    An estimated $6.4 billion was spent during 2013 on developing drone technology around the world, and this spending is expected to double in the next decade. However, drone applications typically require strong pilot skills, safety awareness, responsibility, and adherence to regulations during flight. If the flight control process could be made safer and more reliable in terms of landing, it would be possible to develop a wider range of applications. The objective of this research effort is to describe the design and evaluation of a fully autonomous Unmanned Aerial System (UAS), specifically a four-rotor aircraft, commonly known as a quadcopter, for precise landing applications. Full landing autonomy is achieved through in-flight image processing for target recognition, employing the open source library OpenCV. In addition, all imaging data is processed by a single embedded computer that estimates a relative position with respect to the target landing pad. Results show a 67.88% reduction in the average offset error in comparison to the current return-to-launch (RTL) method, which relies only on GPS positioning. The present work validates the need to rely on image processing for precise landing applications instead of the inexact method of low-cost commercial GPS dependency.
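
    The onboard vision step might look like the following OpenCV sketch: locate a high-contrast circular landing marker and report its pixel offset from the image center, which the flight controller would convert into a relative position. The circular target and the Hough parameters are assumptions; the paper's actual marker and detector are not specified here.

        import cv2

        def pad_offset(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            gray = cv2.medianBlur(gray, 5)                 # suppress sensor noise
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                                       param1=100, param2=40,
                                       minRadius=10, maxRadius=120)
            if circles is None:
                return None                                # no target in view
            x, y, _r = circles[0][0]
            h, w = gray.shape
            return (x - w / 2.0, y - h / 2.0)              # pixel offset from center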

  7. Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales

    PubMed Central

    Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.

    2014-01-01

    Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949

  8. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.

  9. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory, through an algorithm, to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources, using mathematical symmetry properties of both the kernel of the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has a multiplicative complexity of zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.

  10. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2006-11-14

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer is performed by low-pass filtering the windowed reflections from the target layers, keeping frequencies below the lowermost corner (or full width at half maximum) of the recorded frequency spectra. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, fluid viscosity, and the fluid saturation of the porous or fractured layers.

  11. Frequency-dependent processing and interpretation (FDPI) of seismic data for identifying, imaging and monitoring fluid-saturated underground reservoirs

    DOEpatents

    Goloshubin, Gennady M.; Korneev, Valeri A.

    2005-09-06

    A method for identifying, imaging and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers is set forth. Seismic imaging of the porous or fractured layer is performed by low-pass filtering the windowed reflections from the target layers, keeping frequencies below the lowermost corner (or full width at half maximum) of the recorded frequency spectra. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, fluid viscosity, and the fluid saturation of the porous or fractured layers.

  12. Visual Contrast Enhancement Algorithm Based on Histogram Equalization

    PubMed Central

    Ting, Chih-Chung; Wu, Bing-Fei; Chung, Meng-Liang; Chiu, Chung-Cheng; Wu, Ya-Ching

    2015-01-01

    Image enhancement techniques primarily improve the contrast of an image to lend it a better appearance. One of the popular enhancement methods is histogram equalization (HE) because of its simplicity and effectiveness. However, it is rarely applied to consumer electronics products because it can cause excessive contrast enhancement and feature loss problems. These problems make the images processed by HE look unnatural and introduce unwanted artifacts in them. In this study, a visual contrast enhancement algorithm (VCEA) based on HE is proposed. VCEA considers the requirements of the human visual perception in order to address the drawbacks of HE. It effectively solves the excessive contrast enhancement problem by adjusting the spaces between two adjacent gray values of the HE histogram. In addition, VCEA reduces the effects of the feature loss problem by using the obtained spaces. Furthermore, VCEA enhances the detailed textures of an image to generate an enhanced image with better visual quality. Experimental results show that images obtained by applying VCEA have higher contrast and are more suited to human visual perception than those processed by HE and other HE-based methods. PMID:26184219
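
    For reference, the baseline global HE that VCEA modifies can be written in a few lines; this is standard histogram equalization, not the proposed VCEA.

        import numpy as np

        def equalize(img_u8):
            hist = np.bincount(img_u8.ravel(), minlength=256)
            cdf = hist.cumsum() / img_u8.size  # cumulative distribution of gray values
            lut = np.round(255 * (cdf - cdf.min()) /
                           max(1.0 - cdf.min(), 1e-12)).astype(np.uint8)
            return lut[img_u8]                 # remap every gray value through the LUT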

  13. Refining enamel thickness measurements from B-mode ultrasound images.

    PubMed

    Hua, Jeremy; Chen, Ssu-Kuang; Kim, Yongmin

    2009-01-01

    Dental erosion has been growing increasingly prevalent with the rise in consumption of heavy starches, sugars, coffee, and acidic beverages. In addition, various disorders, such as Gastroesophageal Reflux Disease (GERD), have symptoms of rapid rates of tooth erosion. The measurement of enamel thickness would be important for dentists to assess the progression of enamel loss from all forms of erosion, attrition, and abrasion. Characterizing enamel loss is currently done with various subjective indexes that can be interpreted in different ways by different dentists. Ultrasound has been utilized since the 1960s to determine internal tooth structure, but with mixed results. Via image processing and enhancement, we were able to refine B-mode dental ultrasound images for more accurate enamel thickness measurements. The mean difference between the measured thickness of the occlusal enamel from ultrasound images and corresponding gold-standard CT images improved from 0.55 mm to 0.32 mm with image processing (p = 0.033). The difference also improved from 0.62 to 0.53 mm at the buccal/lingual enamel surfaces, but not significantly (p = 0.38).

  14. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters; consequently, assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  15. New Processing of Spaceborne Imaging Radar-C (SIR-C) Data

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.

    2017-12-01

    The Spaceborne Imaging Radar-C (SIR-C) was a radar system which successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency and polarimetric spaceborne radar system, operating in dual frequency (L- and C-band) and with quad-polarization. SIR-C had a variety of different operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarizations and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and not repairable. All acquired SLC and MLC images were processed at a coarse resolution of 100 m with the goal of generating quick looks. These images are, however, not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full-resolution SAR images, and the unprocessed high-resolution data cannot currently be processed at all. At the Alaska Satellite Facility (ASF), a new processor was developed to process binary SIR-C data into full-resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive into full-resolution SLCs, MLCs, and high-resolution geocoded image products. ASF will make these products available to the science community through its existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.

  16. Applications of LANDSAT data to the integrated economic development of Mindoro, Phillipines

    NASA Technical Reports Server (NTRS)

    Wagner, T. W.; Fernandez, J. C.

    1977-01-01

    LANDSAT data is seen as providing essential, up-to-date resource information for the planning process. LANDSAT data of Mindoro Island in the Philippines were processed to provide thematic maps showing patterns of agriculture, forest cover, terrain, wetlands and water turbidity. A hybrid approach using both supervised and unsupervised classification techniques resulted in 30 different scene classes, which were subsequently color-coded and mapped at a scale of 1:250,000. In addition, intensive image analysis is being carried out to evaluate the images. The images, maps, and areal statistics are being used to provide data to seven technical departments in planning the economic development of Mindoro. Multispectral aircraft imagery was collected to complement the application of LANDSAT data and validate the classification results.

  17. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science.

  18. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  19. In vivo terahertz reflection imaging of human scars during and after the healing process.

    PubMed

    Fan, Shuting; Ung, Benjamin S Y; Parrott, Edward P J; Wallace, Vincent P; Pickwell-MacPherson, Emma

    2017-09-01

    We use terahertz imaging to measure four human skin scars in vivo. Clear contrast between the refractive index of the scar and surrounding tissue was observed for all of the scars, despite some being difficult to see with the naked eye. Additionally, we monitored the healing process of a hypertrophic scar. We found that the contrast in the absorption coefficient became less prominent after a few months post-injury, but that the contrast in the refractive index was still significant even months post-injury. Our results demonstrate the capability of terahertz imaging to quantitatively measure subtle changes in skin properties and this may be useful for improving scar treatment and management. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. LCD motion blur reduction: a signal processing approach.

    PubMed

    Har-Noy, Shay; Nguyen, Truong Q

    2008-02-01

    Liquid crystal displays (LCDs) have shown great promise in the consumer market for their use as both computer and television displays. Despite their many advantages, the inherent sample-and-hold nature of LCD image formation results in a phenomenon known as motion blur. In this work, we develop a method for motion blur reduction using the Richardson-Lucy deconvolution algorithm in concert with motion vector information from the scene. We further refine our approach by introducing a perceptual significance metric that allows us to weight the amount of processing performed on different regions in the image. In addition, we analyze the role of motion vector errors in the quality of our resulting image. Perceptual tests indicate that our algorithm reduces the amount of perceivable motion blur in LCDs.
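
    The deconvolution core can be sketched by building a linear motion PSF from a motion vector and applying Richardson-Lucy deconvolution. The perceptual weighting and per-region motion vectors of the paper are omitted, and the 15-pixel horizontal motion is an illustrative assumption.

        import numpy as np
        from skimage.restoration import richardson_lucy

        def motion_psf(length=15):
            psf = np.zeros((length, length))
            psf[length // 2, :] = 1.0        # horizontal sample-and-hold blur kernel
            return psf / psf.sum()

        frame = np.random.rand(240, 320)     # stand-in for a blurred frame
        deblurred = richardson_lucy(frame, motion_psf(15), 30)  # 30 RL iterations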

  1. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science.

  2. Flight Results from the HST SM4 Relative Navigation Sensor System

    NASA Technical Reports Server (NTRS)

    Naasz, Bo; Eepoel, John Van; Queen, Steve; Southward, C. Michael; Hannah, Joel

    2010-01-01

    On May 11, 2009, Space Shuttle Atlantis roared off of Launch Pad 39A en route to the Hubble Space Telescope (HST) to undertake its final servicing of HST, Servicing Mission 4. Onboard Atlantis was a small payload called the Relative Navigation Sensor experiment, which included three cameras of varying focal ranges and avionics to record images and estimate, in real time, the relative position and attitude (aka "pose") of the telescope during rendezvous and deployment. The avionics package, known as SpaceCube and developed at the Goddard Space Flight Center, performed image processing using field programmable gate arrays to accelerate this process, and in addition executed two different pose algorithms in parallel: the Goddard Natural Feature Image Recognition and the ULTOR Passive Pose and Position Engine (P3E) algorithms.

  3. A Versatile Mounting Method for Long Term Imaging of Zebrafish Development.

    PubMed

    Hirsinger, Estelle; Steventon, Ben

    2017-01-26

    Zebrafish embryos offer an ideal experimental system to study complex morphogenetic processes due to their ease of accessibility and optical transparency. In particular, posterior body elongation is an essential process in embryonic development by which multiple tissue deformations act together to direct the formation of a large part of the body axis. In order to observe this process by long-term time-lapse imaging, it is necessary to utilize a mounting technique that allows sufficient support to maintain samples in the correct orientation during transfer to the microscope and acquisition. In addition, the mounting must also provide sufficient freedom of movement for the outgrowth of the posterior body region without affecting its normal development. Finally, there must be a certain degree of versatility in the mounting method to allow imaging on diverse imaging set-ups. Here, we present a mounting technique for imaging the development of posterior body elongation in the zebrafish D. rerio. This technique involves mounting embryos such that the head and yolk sac regions are almost entirely included in agarose, while leaving the posterior body region free to elongate and develop normally. We will show how this can be adapted for upright, inverted and vertical light-sheet microscopy set-ups. While this protocol focuses on mounting embryos for imaging of the posterior body, it could easily be adapted for the live imaging of multiple aspects of zebrafish development.

  4. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

    Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. These data were processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full frame mean value versus time verify the effectiveness of the system.
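
    A minimal sketch of the per-pixel temporal detection chain described above (digital highpass filter, moving-average filter, absolute value, threshold), assuming a video sequence stacked as a 3-D NumPy array; the filter length and threshold are illustrative:

    import numpy as np

    def detect_leak(frames, avg_len=5, threshold=0.1):
        """frames: (time, rows, cols) float array of video intensities."""
        # Digital highpass filter: first difference along the time axis
        # isolates sudden intensity changes at each pixel.
        highpass = np.diff(frames, axis=0)

        # Moving-average filter smooths noise and extends the response
        # of a sudden change over several frames.
        kernel = np.ones(avg_len) / avg_len
        smoothed = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 0, highpass)

        # Absolute value and threshold give the binary leak/no-leak map.
        leak_map = np.abs(smoothed) > threshold

        # Full-frame mean: a single time series indicating the intensity
        # and extent of a leak.
        mean_response = np.abs(smoothed).mean(axis=(1, 2))
        return leak_map, mean_response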

  5. Exploitation of realistic computational anthropomorphic phantoms for the optimization of nuclear imaging acquisition and processing protocols.

    PubMed

    Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C

    2014-01-01

    Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging, as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis, and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, are presented. Simulations are performed using the well-validated open-source GATE toolkit, standard anthropomorphic phantoms, and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections, and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal, and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is towards its extension by simulating additional clinical pathologies.

  6. Landsat 8 on-orbit characterization and calibration system

    USGS Publications Warehouse

    Micijevic, Esad; Morfitt, Ron; Choate, Michael J.

    2011-01-01

    The Landsat Data Continuity Mission (LDCM) plans to launch the Landsat 8 satellite in December 2012, continuing an uninterrupted record of consistently calibrated, globally acquired multispectral images of the Earth that began in 1972. The satellite will carry two imaging sensors: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI will provide visible, near-infrared and short-wave infrared data in nine spectral bands while the TIRS will acquire thermal infrared data in two bands. Both sensors have a pushbroom design and consequently, each has a large number of detectors to be characterized. Image and calibration data downlinked from the satellite will be processed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center using the Landsat 8 Image Assessment System (IAS), a component of the Ground System. In addition to extracting statistics from all Earth images acquired, the IAS will process and trend results from analysis of special calibration acquisitions, such as solar diffuser, lunar, shutter, night, lamp and blackbody data, and preselected calibration sites. The trended data will be systematically processed and analyzed, and calibration and characterization parameters will be updated using both automatic and customized manual tools. This paper describes the analysis tools and the system developed to monitor and characterize on-orbit performance and calibrate the Landsat 8 sensors and image data products.

  7. Molecular imaging of rheumatoid arthritis: emerging markers, tools, and techniques

    PubMed Central

    2014-01-01

    Early diagnosis and effective monitoring of rheumatoid arthritis (RA) are important for a positive outcome. Instant treatment often results in faster reduction of inflammation and, as a consequence, less structural damage. Anatomical imaging techniques have been in use for a long time, facilitating diagnosis and monitoring of RA. However, mere imaging of anatomical structures provides little information on the processes preceding changes in synovial tissue, cartilage, and bone. Molecular imaging might facilitate more effective diagnosis and monitoring in addition to providing new information on the disease pathogenesis. A limiting factor in the development of new molecular imaging techniques is the availability of suitable probes. Here, we review which cells and molecules can be targeted in the RA joint and discuss the advances that have been made in imaging of arthritis with a focus on such molecular targets as folate receptor, F4/80, macrophage mannose receptor, E-selectin, intercellular adhesion molecule-1, phosphatidylserine, and matrix metalloproteinases. In addition, we discuss a new tool that is being introduced in the field, namely the use of nanobodies as tracers. Finally, we describe additional molecules displaying specific features in joint inflammation and propose these as potential new molecular imaging targets, more specifically receptor activator of nuclear factor κB and its ligand, chemokine receptors, vascular cell adhesion molecule-1, αVβ3 integrin, P2X7 receptor, suppression of tumorigenicity 2, dendritic cell-specific transmembrane protein, and osteoclast-stimulatory transmembrane protein. PMID:25099015

  8. Operative simulation of anterior clinoidectomy using a rapid prototyping model molded by a three-dimensional printer.

    PubMed

    Okonogi, Shinichi; Kondo, Kosuke; Harada, Naoyuki; Masuda, Hiroyuki; Nemoto, Masaaki; Sugo, Nobuo

    2017-09-01

    As the anatomical three-dimensional (3D) positional relationship around the anterior clinoid process (ACP) is complex, experience of many surgeries is necessary to understand anterior clinoidectomy (AC). We prepared a 3D synthetic image from computed tomographic angiography (CTA) and magnetic resonance imaging (MRI) data and a rapid prototyping (RP) model from the imaging data using a 3D printer. The objective of this study was to evaluate anatomical reproduction of the 3D synthetic image and the intraosseous region after AC in the RP model. In addition, the usefulness of the RP model for operative simulation was investigated. The subjects were 51 patients who were examined by CTA and MRI before surgery. The size of the ACP, the thickness and length of the optic nerve and artery, and the intraosseous length after AC were measured in the 3D synthetic image and RP model, and reproducibility in the RP model was evaluated. Furthermore, 10 neurosurgeons performed AC in the completed RP models to investigate their usefulness for operative simulation. The RP model reproduced the region in the vicinity of the ACP in the 3D synthetic image, including the intraosseous region, at a high accuracy. Moreover, drilling of the RP model was a useful operative simulation method for AC. The RP model of the vicinity of the ACP, prepared using a 3D printer, showed favorable anatomical reproducibility, including reproduction of the intraosseous region. Finally, it was concluded that this RP model is useful as a surgical education tool for drilling.

  9. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

    This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered geotiff image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of Mars' static and dynamic features. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require the extensive use of human resources.

  10. Modeling Patient-Specific Deformable Mitral Valves.

    PubMed

    Ginty, Olivia; Moore, John; Peters, Terry; Bainbridge, Daniel

    2018-06-01

    Medical imaging has advanced enormously over the last few decades, revolutionizing patient diagnostics and care. At the same time, additive manufacturing has emerged as a means of reproducing physical shapes and models previously not possible. In combination, they have given rise to 3-dimensional (3D) modeling, an entirely new technology for physicians. In an era in which 3D imaging has become a standard for aiding in the diagnosis and treatment of cardiac disease, this visualization now can be taken further by bringing the patient's anatomy into physical reality as a model. The authors describe the generalized process of creating a model of cardiac anatomy from patient images and their experience creating patient-specific dynamic mitral valve models. This involves a combination of image processing software and 3D printing technology. In this article, the complexity of 3D modeling is described and the decision-making process for cardiac anesthesiologists is summarized. The management of cardiac disease has been altered with the emergence of 3D echocardiography, and 3D modeling represents the next paradigm shift. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Matrix phased array (MPA) imaging technology for resistance spot welds

    NASA Astrophysics Data System (ADS)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-01

    A three-dimensional MPA probe has been incorporated with a high speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery-operated portable MPA ultrasonic imaging system is the automotive industry, in which a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering an average of five-thousand spot welds in a medium size passenger vehicle, the amount of time and effort given to popping the welds and measuring nugget size is immeasurable, in addition to the millions of dollars' worth of scrap metals recycled per plant per year. This wasteful, labor-intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick and heavy mild steels to thin and light high strength steels. Consequently, the necessity of developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and weld nugget imaging process are discussed.

  12. Programmability in AIPS++

    NASA Technical Reports Server (NTRS)

    Hjellming, R. M.

    1992-01-01

    AIPS++ is an Astronomical Information Processing System being designed and implemented by an international consortium of NRAO and six other radio astronomy institutions in Australia, India, the Netherlands, the United Kingdom, Canada, and the USA. AIPS++ is intended to replace the functionality of AIPS, to be more easily programmable, and will be implemented in C++ using object-oriented techniques. Programmability in AIPS++ is planned at three levels. The first level will be that of a command-line interpreter with characteristics similar to IDL and PV-Wave, but with an extensive set of operations appropriate to telescope data handling, image formation, and image processing. The second level will be in C++ with extensive use of class libraries for both basic operations and advanced applications. The third level will allow input and output of data between external FORTRAN programs and AIPS++ telescope and image databases. In addition to summarizing the above programmability characteristics, this talk will give an overview of the classes currently being designed for telescope data calibration and editing, image formation, and the 'toolkit' of mathematical 'objects' that will perform most of the processing in AIPS++.

  13. Matrix phased array (MPA) imaging technology for resistance spot welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-18

    A three-dimensional MPA probe has been incorporated with a high speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery-operated portable MPA ultrasonic imaging system is the automotive industry, in which a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering an average of five-thousand spot welds in a medium size passenger vehicle, the amount of time and effort given to popping the welds and measuring nugget size is immeasurable, in addition to the millions of dollars' worth of scrap metals recycled per plant per year. This wasteful, labor-intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick and heavy mild steels to thin and light high strength steels. Consequently, the necessity of developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and weld nugget imaging process are discussed.

  14. A novel pre-processing technique for improving image quality in digital breast tomosynthesis.

    PubMed

    Kim, Hyeongseok; Lee, Taewon; Hong, Joonpyo; Sabir, Sohail; Lee, Jung-Ryun; Choi, Young Wook; Kim, Hak Hee; Chae, Eun Young; Cho, Seungryong

    2017-02-01

    Nonlinear pre-reconstruction processing of the projection data in computed tomography (CT), where accurate recovery of the CT numbers is important for diagnosis, is usually discouraged, for such processing would violate the physics of image formation in CT. However, one can devise a pre-processing step to enhance detectability of lesions in digital breast tomosynthesis (DBT), where accurate recovery of the CT numbers is fundamentally impossible due to the incompleteness of the scanned data. Since the detection of lesions such as micro-calcifications and masses in breasts is the purpose of using DBT, a technique producing higher detectability of lesions is justified as a virtue. A histogram modification technique was developed in the projection data domain. The histogram of the raw projection data was first divided into two parts: one for the breast projection data and the other for the background. Background pixel values were set to a single value that represents the boundary between breast and background. After that, both histogram parts were shifted by an appropriate amount of offset, and the histogram-modified projection data were log-transformed. A filtered-backprojection (FBP) algorithm was used for image reconstruction of DBT. To evaluate performance of the proposed method, we computed the detectability index for the reconstructed images from clinically acquired data. Typical breast border enhancement artifacts were greatly suppressed and the detectability of calcifications and masses was increased by use of the proposed method. Compared to a global threshold-based post-reconstruction processing technique, the proposed method produced images of higher contrast without invoking additional image artifacts. In this work, we report a novel pre-processing technique that improves detectability of lesions in DBT and has potential advantages over the global threshold-based post-reconstruction processing technique. The proposed method not only increased the lesion detectability but also reduced typical image artifacts pronounced in conventional FBP-based DBT. © 2016 American Association of Physicists in Medicine.
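
    A minimal sketch of the histogram-modification step as described, assuming a raw projection array, a known breast/background boundary value, and an illustrative offset; the filtered-backprojection reconstruction itself is omitted:

    import numpy as np

    def preprocess_projection(raw, boundary, offset):
        """Histogram modification of one DBT projection before log transform.

        raw: 2-D array of detector counts; boundary: pixel value separating
        breast from background; offset: illustrative histogram shift.
        """
        proj = raw.astype(float).copy()

        # Background pixels (above the boundary in raw counts, since the
        # breast attenuates the beam) are set to the boundary value.
        proj[proj > boundary] = boundary

        # Shift both histogram parts, then apply the usual log transform
        # (line integrals are -log of the normalized intensity).
        proj = proj + offset
        return -np.log(proj / proj.max())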

  15. The highs and lows of object impossibility: effects of spatial frequency on holistic processing of impossible objects.

    PubMed

    Freud, Erez; Avidan, Galia; Ganel, Tzvi

    2015-02-01

    Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information embedded in LSF, whereas HSF information may underlie the visual system's susceptibility to distortions in objects' spatial layouts.
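
    A minimal sketch of how such LSF and HSF stimulus versions are commonly produced, using a Gaussian low-pass filter and its complement; the cutoff is illustrative, and the paper's exact filtering procedure is not reproduced here:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(image, sigma=8.0):
        """Return (low-pass, high-pass) versions of a grayscale image.

        sigma sets the Gaussian cutoff in pixels: larger sigma keeps only
        coarser structure in the LSF image.
        """
        img = image.astype(float)
        lsf = gaussian_filter(img, sigma)   # low spatial frequencies
        hsf = img - lsf                     # residual high frequencies
        return lsf, hsf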

  16. Volta phase plate data collection facilitates image processing and cryo-EM structure determination.

    PubMed

    von Loeffelholz, Ottilie; Papai, Gabor; Danev, Radostin; Myasnikov, Alexander G; Natchiar, S Kundhavai; Hazemann, Isabelle; Ménétret, Jean-François; Klaholz, Bruno P

    2018-06-01

    A current bottleneck in structure determination of macromolecular complexes by cryo electron microscopy (cryo-EM) is the large amount of data needed to obtain high-resolution 3D reconstructions, including through sorting into different conformations and compositions with advanced image processing. Additionally, it may be difficult to visualize small ligands that bind at sub-stoichiometric levels. Volta phase plates (VPP) introduce a phase shift in the contrast transfer and drastically increase the contrast of the recorded low-dose cryo-EM images while preserving high frequency information. Here we present a comparative study to address the behavior of different data sets during image processing and quantify important parameters during structure refinement. The automated data collection was done from the same human ribosome sample either as a conventional defocus range dataset or with a Volta phase plate close to focus (cfVPP) or with a small defocus (dfVPP). The analysis of image processing parameters shows that dfVPP data behave more robustly during cryo-EM structure refinement because particle alignments, Euler angle assignments and 2D & 3D classifications behave more stably and converge faster. In particular, fewer particle images are required to reach the same resolution in the 3D reconstructions. Finally, we find that defocus range data collection is also applicable to VPP. This study shows that data processing and cryo-EM map interpretation, including atomic model refinement, are facilitated significantly by performing VPP cryo-EM, which will have an important impact on structural biology. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Noise-gating to Clean Astrophysical Image Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, C. E.

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.

  18. Noise-gating to Clean Astrophysical Image Data

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.

    2017-04-01

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
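
    A minimal sketch of a locally adaptive Fourier-domain noise gate, operating on overlapping tiles of a single image; the noise-floor estimate and gating rule are simplified stand-ins for the neighborhood statistics described above:

    import numpy as np

    def noise_gate(image, tile=32, step=16, k=3.0):
        """Suppress incoherent noise by gating Fourier components per tile.

        Components whose magnitude falls below k times the local noise
        floor (estimated here as the median spectral magnitude) are zeroed.
        """
        img = image.astype(float)
        out = np.zeros_like(img)
        weight = np.zeros_like(img)
        win = np.hanning(tile)[:, None] * np.hanning(tile)[None, :]

        for i in range(0, img.shape[0] - tile + 1, step):
            for j in range(0, img.shape[1] - tile + 1, step):
                patch = img[i:i+tile, j:j+tile] * win
                spec = np.fft.fft2(patch)
                mag = np.abs(spec)
                floor = np.median(mag)          # local noise-floor estimate
                spec[mag < k * floor] = 0.0     # the "noise gate"
                out[i:i+tile, j:j+tile] += np.real(np.fft.ifft2(spec)) * win
                weight[i:i+tile, j:j+tile] += win ** 2

        # Overlap-add recombination with window-squared normalization.
        return out / np.maximum(weight, 1e-8)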

  19. What do you think of my picture? Investigating factors of influence in profile images context perception

    NASA Astrophysics Data System (ADS)

    Mazza, F.; Da Silva, M. P.; Le Callet, P.; Heynderickx, I. E. J.

    2015-03-01

    Multimedia quality assessment has been an important research topic during the last decades. The original focus on artifact visibility has been extended over the years to aspects such as image aesthetics, interestingness, and memorability. More recently, Fedorovskaya proposed the concept of 'image psychology': this concept focuses on additional quality dimensions related to human content processing. While these additional dimensions are very valuable in understanding preferences, it is very hard to define, isolate, and measure their effect on quality. In this paper we continue our research on face pictures, investigating which image factors influence context perception. We collected ratings of the perceived fit of a set of images to various content categories. These categories were selected based on current typologies in social networks. Logistic regression was adopted to model category fit based on image features. In this model we used both low-level and high-level features, the latter focusing on complex features related to image content. In order to extract these high-level features, we relied on crowdsourcing, since computer vision algorithms are not yet sufficiently accurate for the features we needed. Our results underline the importance of some high-level content features, e.g. the dress of the portrayed person and the scene setting, in categorizing images.
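
    A minimal sketch of the modeling step, assuming a feature matrix that mixes low-level image measurements with crowdsourced high-level annotations; the features, target, and data here are illustrative stand-ins:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Illustrative features per image: e.g. mean luminance, colorfulness,
    # plus crowdsourced high-level flags (formal dress, indoor scene, ...).
    X = rng.random((200, 5))
    # Illustrative binary target: does the image fit a hypothetical
    # "professional profile picture" category?
    y = (X[:, 3] + 0.2 * rng.standard_normal(200) > 0.5).astype(int)

    # Logistic regression models the probability of category fit.
    model = LogisticRegression()
    print(cross_val_score(model, X, y, cv=5).mean())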

  20. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    PubMed

    Bashar, Md Khayrul; Komatsu, Koji; Fujimori, Toshihiko; Kobayashi, Tetsuya J

    2012-01-01

    Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which sharply contrasts with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first variation (Variant-1) uses the whole 3D pre-processed image, whereas the second one (Variant-2) reduces the pre-processed image to the candidate regions or the candidate hybrid image for further processing. At the second step, multiscale cube filtering is employed in order to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 processing removes spurious centroids from Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest-neighborhood principle is then proposed to combine fragmented nuclei, if present. Both qualitative and quantitative analyses on a set of 100 images of 3D mouse embryos are performed. Investigations reveal a promising achievement of the technique presented in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2), when compared with an existing method (86.06% and 90.11%), originally developed for analyzing C. elegans images.
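
    A minimal sketch of the enhancement and local-maxima stages, using uniform (cube) filters at several scales and a maximum filter to pick candidate centroids; the characteristic-ratio test and the shape analysis from the paper are replaced here by a simple intensity threshold:

    import numpy as np
    from scipy.ndimage import uniform_filter, maximum_filter

    def candidate_centroids(volume, scales=(3, 5, 7), min_intensity=0.2):
        """Return voxel coordinates of candidate nuclei centroids.

        volume: 3-D fluorescence image, intensities scaled to [0, 1].
        """
        vol = volume.astype(float)

        # Multiscale cube filtering: average the responses of uniform
        # (box) filters at several cube sizes to enhance blob-like nuclei.
        enhanced = np.mean([uniform_filter(vol, s) for s in scales], axis=0)

        # Local maxima: voxels equal to the maximum in their neighborhood,
        # kept only if bright enough to be plausible nuclei.
        peaks = (enhanced == maximum_filter(enhanced, size=5))
        peaks &= enhanced > min_intensity
        return np.argwhere(peaks)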

  1. Improved blood velocity measurements with a hybrid image filtering and iterative Radon transform algorithm

    PubMed Central

    Chhatbar, Pratik Y.; Kara, Prakash

    2013-01-01

    Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
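
    A minimal sketch of the angle-finding core: Sobel prefiltering of the space-time image, then a coarse-to-fine search in which each pass evaluates the Radon transform on a narrowing set of angles around the previous best estimate; grid sizes and the shrink factor are illustrative:

    import numpy as np
    from scipy.ndimage import sobel
    from skimage.transform import radon

    def streak_angle(spacetime, passes=3, n_angles=30):
        """Estimate the RBC streak angle (degrees) in a space-time image."""
        img = sobel(spacetime.astype(float), axis=0)  # enhance streak edges
        center, span = 90.0, 90.0
        for _ in range(passes):
            theta = np.linspace(center - span, center + span, n_angles)
            sino = radon(img, theta=theta, circle=False)
            # Streaks aligned with a projection direction concentrate
            # energy there, maximizing the variance of that projection.
            center = theta[np.argmax(np.var(sino, axis=0))]
            span /= n_angles / 2.0  # shrink the search window each pass
        return center

    The recovered streak angle then converts to velocity through the line scan's spatial and temporal sampling rates.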

  2. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  3. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  4. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    PubMed Central

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445

  5. Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach

    NASA Astrophysics Data System (ADS)

    Jazaeri, Amin

    High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased afterwards, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and the Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than those from other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times less than that of its corresponding multispectral image. Regardless of the fusion method utilized, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions, in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process matched the ground truth remarkably well, indicating the possibility of real-time onboard fusion processing.

  6. Pine Island Glacier, Antarctica, MISR Multi-angle Composite

    Atmospheric Science Data Center

    2013-12-17

    ... A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...

  7. PANDA: a pipeline toolbox for analyzing brain diffusion images.

    PubMed

    Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang

    2013-01-01

    Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
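
    A minimal sketch of the kind of subject-level parallelism PANDA provides, using Python's multiprocessing to run a per-subject processing function across cores; the processing function and paths are hypothetical stand-ins, not PANDA's actual code:

    from multiprocessing import Pool

    def process_subject(subject_dir):
        # Hypothetical stand-in for one subject's pipeline: conversion,
        # eddy-current correction, tensor fitting, FA/MD map export.
        print(f"processing {subject_dir}")
        return subject_dir, "ok"

    if __name__ == "__main__":
        subjects = [f"/data/dmri/sub-{i:02d}" for i in range(1, 9)]
        with Pool(processes=4) as pool:      # four subjects at a time
            results = pool.map(process_subject, subjects)
        print(results)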

  8. Bright field segmentation tomography (BFST) for use as surface identification in stereomicroscopy

    NASA Astrophysics Data System (ADS)

    Thiesse, Jacqueline R.; Namati, Eman; de Ryk, Jessica; Hoffman, Eric A.; McLennan, Geoffrey

    2004-07-01

    Stereomicroscopy is an important method for image acquisition because it provides a 3D image of an object when other microscopic techniques can only provide the image in 2D. One challenge faced with this type of imaging is determining the top surface of a sample that has otherwise indistinguishable surface and planar characteristics. We have developed a system that creates oblique illumination and, in conjunction with image processing, allows the top surface to be viewed. The BFST consists of the Leica MZ12 stereomicroscope with a unique attached lighting source. The lighting source consists of eight light emitting diodes (LEDs) separated by 45-degree angles. Each LED in this system illuminates with a 20-degree viewing angle once per cycle, casting a shadow over the rest of the sample. Subsequently, eight segmented images are taken per cycle. After the images are captured, they are stacked through image addition to achieve the full field of view, and the surface is then easily identified, as sketched below. Image processing techniques, such as skeletonization, can be used for further enhancement and measurement. With the use of BFST, advances can be made in detecting surface features from metals to tissue samples, such as in the analytical assessment of pulmonary emphysema using the technique of mean linear intercept.
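
    A minimal sketch of the stacking step: eight per-LED exposures are combined by addition and rescaled for display; the arrays stand in for the captured frames:

    import numpy as np

    def stack_exposures(frames):
        """Combine per-LED images by addition into one full-field image.

        frames: sequence of eight 2-D arrays, one per LED direction.
        """
        stacked = np.sum([f.astype(float) for f in frames], axis=0)
        # Rescale to [0, 1] so the sum is displayable as a single image.
        return (stacked - stacked.min()) / (np.ptp(stacked) + 1e-8)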

  9. Coastline detection with time series of SAR images

    NASA Astrophysics Data System (ADS)

    Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai

    2017-10-01

    For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve a lot of complicated data processing, which is a big challenge for remote sensing time series. Additionally, a number of SAR satellites operating with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization. This coefficient differs from the traditional computation, where normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector that is rather immune to speckle noise. Finally, the individual coastlines derived from time series of SAR images can be checked for changes.
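
    A minimal sketch of the three steps using standard components: an unnormalized local cross-channel correlation as a simplified stand-in for the paper's modified coefficient, a histogram threshold (Otsu here, for illustration), and Canny edge extraction:

    import numpy as np
    from scipy.ndimage import uniform_filter
    from skimage.filters import threshold_otsu
    from skimage.feature import canny

    def coastline(vv, vh, win=9):
        """vv, vh: co- and cross-polarized SAR amplitude images."""
        vv, vh = vv.astype(float), vh.astype(float)

        # Step 1: local cross-channel product without normalization, so
        # bright, correlated land pixels separate strongly from dark sea.
        corr = uniform_filter(vv * vh, win)

        # Step 2: histogram-based land/sea threshold.
        land = corr > threshold_otsu(corr)

        # Step 3: Canny edge detector traces the land/sea boundary.
        return canny(land.astype(float), sigma=2.0)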

  10. Implementation and evaluation of a new workflow for registration and segmentation of pulmonary MRI data for regional lung perfusion assessment.

    PubMed

    Böttger, T; Grunewald, K; Schöbinger, M; Fink, C; Risse, F; Kauczor, H U; Meinzer, H P; Wolf, Ivo

    2007-03-07

    Recently it has been shown that regional lung perfusion can be assessed using time-resolved contrast-enhanced magnetic resonance (MR) imaging. Quantification of the perfusion images has been attempted, based on the definition of small regions of interest (ROIs). Use of complete lung segmentations instead of ROIs could possibly increase quantification accuracy. Due to the low signal-to-noise ratio, automatic segmentation algorithms cannot be applied. On the other hand, manual segmentation of the lung tissue is very time consuming and can become inaccurate, as the borders of the lung to adjacent tissues are not always clearly visible. We propose a new workflow for semi-automatic segmentation of the lung from additionally acquired morphological HASTE MR images. First, the lung is delineated semi-automatically in the HASTE image. Next, the HASTE image is automatically registered with the perfusion images. Finally, the transformation resulting from the registration is used to align the lung segmentation from the morphological dataset with the perfusion images. We evaluated rigid, affine and locally elastic transformations, suitable optimizers, and different implementations of mutual information (MI) metrics to determine the best possible registration algorithm. We identified the shortcomings of the registration procedure and the conditions under which automatic registration will succeed or fail. Segmentation results were evaluated using overlap and distance measures. Integration of the new workflow reduces the time needed for post-processing of the data, simplifies the perfusion quantification, and reduces interobserver variability in the segmentation process. In addition, the matched morphological dataset can be used to identify morphologic changes as the source of the perfusion abnormalities.
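
    A minimal sketch of a mutual-information-driven rigid registration between a morphological volume and a perfusion frame, using SimpleITK; the parameter values are illustrative and this is not the evaluated implementation from the paper:

    import SimpleITK as sitk

    def register(fixed_path, moving_path):
        fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)    # perfusion frame
        moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)  # HASTE volume

        reg = sitk.ImageRegistrationMethod()
        # Mattes mutual information copes with the differing MR contrasts.
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))

        # The returned transform can then resample the lung segmentation
        # from the morphological dataset onto the perfusion images.
        return reg.Execute(fixed, moving)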

  11. funcLAB/G-service-oriented architecture for standards-based analysis of functional magnetic resonance imaging in HealthGrids.

    PubMed

    Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D

    2007-01-01

    Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to its complexity of imaging, image workflow, and post-processing, and a lack of algorithmic standards hindering result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, adding to community physicians' uncertainty about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI have a team of basic researchers and physicians to run fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and available at any institution which does not have these resources available. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.

  12. Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography

    PubMed Central

    Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization, and zero padding, followed by a software cut of the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that can be used to provide en-face MS-OCT images much more quickly. Using such an algorithm we demonstrate around 10 times faster production of sets of en-face OCT images than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generation of A-scans. PMID:24761303

  13. An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments

    NASA Technical Reports Server (NTRS)

    Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.

    2015-01-01

    The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
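
    A minimal sketch of extracting a characteristic length from a 3-D point cloud, taking the mean of the object's extents along its three principal axes; this illustrates only the final measurement step under a common definition of characteristic length (average of three orthogonal maximum dimensions), and is not the project's actual algorithm:

    import numpy as np

    def characteristic_length(points):
        """points: (N, 3) array, e.g. from a space-carving reconstruction."""
        centered = points - points.mean(axis=0)

        # Principal axes of the fragment from the covariance eigenvectors.
        _, vecs = np.linalg.eigh(np.cov(centered.T))

        # Extent (max minus min projection) along each principal axis.
        proj = centered @ vecs
        dims = proj.max(axis=0) - proj.min(axis=0)

        # Characteristic length as the mean of the three extents.
        return dims.mean()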

  14. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks.

    PubMed

    Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B

    2013-03-01

    Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may introduce reconstruction errors into the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
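
    A minimal sketch of undecimated-wavelet fusion using PyWavelets' stationary wavelet transform, with the common max-absolute rule for detail bands and averaging for the approximation; the paper's spectral factorization and custom nonorthogonal filter banks are not reproduced here:

    import numpy as np
    import pywt

    def uwt_fuse(img_a, img_b, wavelet="db2", level=3):
        """Fuse two registered grayscale images of equal size.

        Image dimensions must be divisible by 2**level for swt2.
        """
        ca = pywt.swt2(img_a.astype(float), wavelet, level=level)
        cb = pywt.swt2(img_b.astype(float), wavelet, level=level)

        fused = []
        for (aA, aD), (bA, bD) in zip(ca, cb):
            # Approximation band: average the two inputs.
            fA = (aA + bA) / 2.0
            # Detail bands: keep the larger-magnitude coefficient, which
            # tends to carry the salient edge from either modality.
            fD = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                       for a, b in zip(aD, bD))
            fused.append((fA, fD))

        return pywt.iswt2(fused, wavelet)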

  15. Integrated analysis of remote sensing products from basic geological surveys. [Brazil

    NASA Technical Reports Server (NTRS)

    Dasilvafagundesfilho, E. (Principal Investigator)

    1984-01-01

    Recent advances in remote sensing led to the development of several techniques to obtain image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.

  16. Book Review: Reiner Salzer and Heinz W. Siesler (Eds.): Infrared and Raman spectroscopic imaging, 2nd ed.

    DOE PAGES

    Moore, David Steven

    2015-05-10

    This second edition of "Infrared and Raman Spectroscopic Imaging" propels practitioners in that wide-ranging field, as well as other readers, to the current state of the art in a well-produced, full-color, completely revised and updated volume. This new edition chronicles the expanded application of vibrational spectroscopic imaging, from yesterday's time-consuming point-by-point buildup of a hyperspectral image cube, through the improvements afforded by the addition of focal plane arrays and line scan imaging, to methods applicable beyond the diffraction limit. It instructs the reader on the improved instrumentation and the image and data analysis methods, and expounds on their application to fundamental biomedical knowledge, food and agricultural surveys, materials science, process and quality control, and many other areas.

  17. In vivo multiphoton imaging of bile duct ligation

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Li, Feng-Chieh; Chen, Hsiao-Chin; Chang, Po-shou; Yang, Shu-Mei; Lee, Hsuan-Shu; Dong, Chen-Yuan

    2008-02-01

    Bile is the exocrine secretion of the liver and is synthesized by hepatocytes. It is drained into the duodenum for the function of digestion or into the gallbladder for storage. Bile duct obstruction is a blockage in the tubes that carry bile to the gallbladder and small intestine. Bile duct ligation results in changes of bile acids in serum, liver, urine, and feces [1, 2]. In this work, we demonstrate a novel technique to image this pathological condition by using a newly developed in vivo imaging system, which includes multiphoton microscopy and an intravital hepatic imaging chamber. The images we acquired demonstrate the uptake and processing of 6-CFDA in hepatocytes and the excretion of CF into the bile canaliculi. In addition to imaging, we can also measure the kinetics of the green fluorescence intensity.

  18. Visual processing in Alzheimer's disease: surface detail and colour fail to aid object identification.

    PubMed

    Adlington, Rebecca L; Laws, Keith R; Gale, Tim M

    2009-10-01

    It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.

  19. Landsat-7 Enhanced Thematic Mapper plus radiometric calibration

    USGS Publications Warehouse

    Markham, B.L.; Boncyk, Wayne C.; Helder, D.L.; Barker, J.L.

    1997-01-01

    Landsat-7 is currently being built and tested for launch in 1998. The Enhanced Thematic Mapper Plus (ETM+) sensor for Landsat-7, a derivative of the highly successful Thematic Mapper (TM) sensors on Landsats 4 and 5, and the Landsat-7 ground system are being built to provide enhanced radiometric calibration performance. In addition, regular vicarious calibration campaigns are being planned to provide additional information for calibration of the ETM+ instrument. The primary upgrades to the instrument include the addition of two solar calibrators: the full aperture solar calibrator, a deployable diffuser, and the partial aperture solar calibrator, a passive device that allows the ETM+ to image the sun. The ground processing incorporates for the first time an off-line facility, the Image Assessment System (IAS), to perform calibration, evaluation and analysis. Within the IAS, processing capabilities include radiometric artifact characterization and correction, radiometric calibration from the multiple calibrator sources, inclusion of results from vicarious calibration and statistical trending of calibration data to improve calibration estimation. The Landsat Product Generation System, the portion of the ground system responsible for producing calibrated products, will incorporate the radiometric artifact correction algorithms and will use the calibration information generated by the IAS. This calibration information will also be supplied to ground processing systems throughout the world.

  20. Distance preservation in color image transforms

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    1999-12-01

    Most current image processing systems work on color images, and color is a valuable perceptual cue for determining image similarity. Working with color images, however, is not the same thing as working with images taking values in a 3D Euclidean space. Not only are color spaces bounded, but the characteristics of the observer endow the space with a 'perceptual' metric that in general does not correspond to the metric naturally inherited from R3. This paper studies the problem of filtering color images abstractly. It begins by determining the properties that the color sum and color product operations must satisfy so that the desirable properties of orthonormal bases are preserved. The paper then defines a general scheme, based on the action of the additive group on the color space, by which operations that satisfy the required properties can be defined.
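
    The premise that Euclidean RGB distance is a poor stand-in for perceptual distance can be illustrated numerically. Below is a minimal sketch assuming sRGB inputs and using CIELAB delta-E as the perceptual metric; the paper's group-action construction is more general than this.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124, 0.3576, 0.1805],     # linear RGB -> XYZ (D65)
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])    # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    return np.array([L, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

# Two color pairs with the same Euclidean RGB distance can have very
# different perceptual (Lab) distances:
pair1 = (np.array([0.0, 0.0, 0.2]), np.array([0.0, 0.0, 0.4]))  # dark blues
pair2 = (np.array([0.0, 0.8, 0.0]), np.array([0.0, 1.0, 0.0]))  # bright greens
for c1, c2 in (pair1, pair2):
    d_rgb = np.linalg.norm(c1 - c2)
    d_lab = np.linalg.norm(srgb_to_lab(c1) - srgb_to_lab(c2))
    print(f"RGB distance {d_rgb:.3f}  ->  Lab distance {d_lab:.2f}")
```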

  1. Image analysis of corrosion pit initiation on ASTM type A240 stainless steel and ASTM type A 1008 carbon steel

    NASA Astrophysics Data System (ADS)

    Nine, H. M. Zulker

    The adversity of metallic corrosion is of growing concern to industrial engineers and scientists. Corrosion attacks metal surfaces and causes structural damage as well as direct and indirect economic losses. Multiple corrosion monitoring tools are available, although these are time-consuming and costly. Given the widespread availability of image capturing devices, image-based corrosion monitoring is an attractive alternative. In this research, by exposing stainless steel SS 304 and low-carbon steel QD 1008 panels to distilled water, half-saturated sodium chloride, and saturated sodium chloride solutions, and through subsequent RGB image analysis in Matlab, a simple and cost-effective corrosion measurement tool has been identified and investigated. Additionally, the open circuit potential and electrochemical impedance spectroscopy results have been compared with the RGB analysis to corroborate the corrosion measurements. Finally, to understand the importance of ambiguity in crisis communication, the communication process between Union Carbide and the Indian Government regarding the Bhopal incident in 1984 was analyzed.
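
    A minimal sketch of the kind of RGB channel analysis described here, written in Python for self-containedness rather than Matlab; the assumption that corrosion products redden the panel, and the specific index, are illustrative stand-ins for whatever metric the thesis actually used.

```python
import numpy as np
from PIL import Image

def rust_index(path):
    """Mean red-channel dominance of a panel photograph, in [-1, 1]."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(r - 0.5 * (g + b)))   # > 0 when red dominates

# Tracking this index across a time series of photos of the same panel
# gives a crude, low-cost corrosion progression curve.
```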

  2. The 2-d CCD Data Reduction Cookbook

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Privett, G. J.; Taylor, M. B.

    This cookbook presents simple recipes and scripts for reducing direct images acquired with optical CCD detectors. Using these recipes and scripts you can correct un-processed images obtained from CCDs for various instrumental effects to retrieve an accurate picture of the field of sky observed. The recipes and scripts use standard software available at all Starlink sites. The topics covered include: creating and applying bias and flat-field corrections, registering frames and creating a stack or mosaic of registered frames. Related auxiliary tasks, such as converting between different data formats, displaying images and calculating image statistics are also presented. In addition to the recipes and scripts, sufficient background material is presented to explain the procedures and techniques used. The treatment is deliberately practical rather than theoretical, in keeping with the aim of providing advice on the actual reduction of observations. Additional material outlines some of the differences between using conventional optical CCDs and the similar arrays used to observe at infrared wavelengths.
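
    The core bias and flat-field arithmetic that the recipes implement with Starlink tools can be summarized briefly; below is a minimal numpy illustration of those two corrections, not the cookbook's own scripts.

```python
import numpy as np

def reduce_frame(raw, bias_frames, flat_frames):
    """Return a bias-subtracted, flat-fielded science frame."""
    master_bias = np.median(np.stack(bias_frames), axis=0)  # combine biases
    flat = np.median(np.stack(flat_frames), axis=0) - master_bias
    flat /= np.mean(flat)                    # normalize to unit mean response
    return (raw - master_bias) / flat
```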

  3. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques; concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field from which to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with expert diagnosis, shows that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
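
    Once a registration algorithm has produced a displacement field, the strain tensor follows from its symmetric spatial gradient. Below is a minimal 2D sketch assuming the displacement components ux and uy are given on a regular grid; the paper works with cine MRI sequences and a rate version of this quantity.

```python
import numpy as np

def strain_tensor(ux, uy, spacing=1.0):
    """Symmetric gradient of the 2D displacement field (ux, uy)."""
    dux_dy, dux_dx = np.gradient(ux, spacing)   # axis 0 = rows (y), axis 1 = cols (x)
    duy_dy, duy_dx = np.gradient(uy, spacing)
    exx = dux_dx
    eyy = duy_dy
    exy = 0.5 * (dux_dy + duy_dx)
    # Stack into an (H, W, 2, 2) tensor field.
    return np.stack([np.stack([exx, exy], -1),
                     np.stack([exy, eyy], -1)], -2)
```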

  4. Ultramap: the all in One Photogrammetric Solution

    NASA Astrophysics Data System (ADS)

    Wiechert, A.; Gruber, M.; Karner, K.

    2012-07-01

    This paper describes in detail the dense matcher developed over several years by Vexcel Imaging in Graz for Microsoft's Bing Maps project. This dense matcher was developed exclusively for, and used by, Microsoft for the production of the 3D city models of Virtual Earth. It will now be made available to the public with the UltraMap software release in mid-2012, a revolutionary step in digital photogrammetry. The dense matcher automatically generates digital surface models (DSM) and digital terrain models (DTM) from a set of overlapping UltraCam images, with an outstanding point density of several hundred points per square meter and sub-pixel accuracy. The dense matcher consists of two steps. The first step rectifies overlapping image areas to speed up the dense image matching process. This rectification ensures very efficient processing and detects occluded areas by applying a back-matching step. In this dense image matching process, a cost function consisting of a matching score as well as a smoothness term is minimized. In the second step, the resulting range image patches are fused into a DSM by optimizing a global cost function. The whole process is optimized for multi-core CPUs and optionally uses GPUs if available. UltraMap 3.0 also features an additional step, presented in this paper: a completely automated true-ortho and ortho workflow, in which the UltraCam images are combined with the DSM or DTM in an automated rectification step to produce high-quality true-ortho or ortho images. The paper presents the new workflow and first results.

  5. A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Hui, E-mail: corinna@seu.edu.cn; Key Laboratory of Computer Network and Information Integration; Centre de Recherche en Information Biomédicale sino-français, Laboratoire International Associé, Inserm, Université de Rennes 1, Rennes 35000

    2015-04-15

    Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts appear in the final image. In most gridline-removal image processing methods, useful information with spatial frequencies close to that of the gridline is lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition stops when the gridline signal is found to be greater than a threshold in one or several of these subimages by a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is obtained using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method preserves more information after the removal of the gridline artifacts. Additionally, the processing speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
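
    A minimal sketch of the wavelet-plus-band-stop idea, assuming the PyWavelets package and a gridline whose energy concentrates in the horizontal-detail bands; the fixed Gaussian notch below is an illustrative stand-in for the paper's automatic detection module and band-stop filter.

```python
import numpy as np
import pywt

def suppress_gridlines(img, wavelet="db4", level=3, notch_sigma=4.0):
    """Notch out a dominant periodic signal in the horizontal-detail bands."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        spec = np.fft.fft(cH, axis=1)
        mag = np.abs(spec).mean(axis=0)
        k = np.argmax(mag[1:len(mag) // 2]) + 1     # dominant nonzero frequency
        cols = np.arange(spec.shape[1])
        # Gaussian band-stop at k and its conjugate frequency.
        notch = (1.0
                 - np.exp(-0.5 * ((cols - k) / notch_sigma) ** 2)
                 - np.exp(-0.5 * ((cols - (spec.shape[1] - k)) / notch_sigma) ** 2))
        out.append((np.real(np.fft.ifft(spec * notch, axis=1)), cV, cD))
    return pywt.waverec2(out, wavelet)
```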

  6. Image fusion pitfalls for cranial radiosurgery.

    PubMed

    Jonker, Benjamin P

    2013-01-01

    Stereotactic radiosurgery requires imaging to define both the stereotactic space in which the treatment is delivered and the target itself. Image fusion is the process of using rotation and translation to bring a second image set into alignment with the first image set. This allows the concurrent use of multiple image sets to define the target and stereotactic space. While a single magnetic resonance imaging (MRI) sequence alone can be used for delineation of the target and fiducials, there may be significant advantages to using additional imaging sets, including other MRI sequences, computed tomography (CT) scans, and advanced imaging such as catheter-based angiography, diffusion tensor imaging-based fiber tracking, and positron emission tomography, in order to more accurately define the target and surrounding critical structures. Stereotactic space is usually defined by detection of fiducials on the stereotactic head frame or mask system. Unfortunately, MRI sequences are susceptible to geometric distortion, whereas CT scans do not face this problem (although they have poorer resolution of the target in most cases). Thus image fusion allows the definition of stereotactic space to proceed from the geometrically accurate CT images while using MRI to define the target. The use of image fusion is associated with a risk of error introduced by inaccuracies of the fusion process, as well as workflow changes that, if not properly accounted for, can mislead the treating clinician. The purpose of this review is to describe the uses of image fusion in stereotactic radiosurgery as well as its potential pitfalls.
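
    The rotation-and-translation alignment underlying fusion can be illustrated on corresponding fiducial points. Below is a minimal sketch of the standard Kabsch/Procrustes solution, assuming Nx3 arrays of matched points; clinical systems align full image volumes, but the core rigid-body math is the same.

```python
import numpy as np

def rigid_align(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 (src, dst: Nx3)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs
```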

  7. 2-D traveltime and waveform inversion for improved seismic imaging: Naga Thrust and Fold Belt, India

    NASA Astrophysics Data System (ADS)

    Jaiswal, Priyank; Zelt, Colin A.; Bally, Albert W.; Dasgupta, Rahul

    2008-05-01

    Exploration along the Naga Thrust and Fold Belt in the Assam province of Northeast India encounters geological as well as logistic challenges. Drilling for hydrocarbons, traditionally guided by surface manifestations of the Naga thrust fault, faces additional challenges in the northeast, where the thrust fault gradually deepens, leaving subtle surface expressions. In such an area, multichannel 2-D seismic data were collected along a line perpendicular to the trend of the thrust belt. The data have a moderate signal-to-noise ratio and suffer from ground roll and other acquisition-related noise. In addition to data quality, the complex geology of the thrust belt limits the ability of conventional seismic processing to yield a reliable velocity model, which in turn leads to a poor subsurface image. In this paper, we demonstrate the application of traveltime and waveform inversion as supplements to conventional seismic imaging and interpretation processes. Both traveltime and waveform inversion utilize the first arrivals that are typically discarded during conventional seismic processing. As a first step, a smooth velocity model with the long-wavelength characteristics of the subsurface is estimated through inversion of the first-arrival traveltimes. This velocity model is then used to obtain a Kirchhoff pre-stack depth-migrated image, which in turn is used for the interpretation of the fault. Waveform inversion is applied to the central part of the seismic line to a depth of ~1 km, where the quality of the migrated image is poor. Waveform inversion is performed in the frequency domain over a series of iterations, proceeding from low to high frequency (11-19 Hz) using the velocity model from traveltime inversion as the starting model. In the end, the pre-stack depth-migrated image and the waveform inversion model are jointly interpreted. This study demonstrates that a combination of traveltime and waveform inversion with Kirchhoff pre-stack depth migration is a promising approach for the interpretation of geological structures in a thrust belt.

  8. Original non-stationary eddy current imaging process for the evaluation of defects in metallic structures

    NASA Astrophysics Data System (ADS)

    Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves

    2015-10-01

    This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspection, and proposes an original process for a possible tomographic crack evaluation. This process is based on a semi-analytical model, called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the EC probes and the structure under test. Several steps are successively described, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The basic principle of this imaging process consists first in creating a 3D grid by meshing the volume potentially inspected by the sensor, which yields a given number of elemental volumes (voxels). Secondly, DPSM modeling is used to compute an image for each occurrence in which exactly one of the voxels has a conductivity different from all the others. The assumption is that a real defect can be faithfully represented by a superposition of elemental voxels; the resulting accuracy naturally depends on the density of the spatial sampling. On the other hand, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current at variable frequency, so the simulation is performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged into a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared to experimental images provided by an eddy current imager (ECI) applied to aluminum plates containing buried defects. To validate the proposed inversion process, we feed it with computed images of various acquisition configurations, with additive noise included so that the images are more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may only take two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and the pixels showing non-realistic values.
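
    The observation-matrix inversion step can be sketched generically: each column of the matrix holds the modeled sensor response to a single perturbed voxel, and the defect map is recovered by regularized least squares followed by the thresholding described above. A and y below are placeholders for DPSM-modeled data; the regularization and threshold values are illustrative.

```python
import numpy as np

def invert_defect_map(A, y, reg=1e-3, threshold=0.5):
    """Solve min ||A x - y||^2 + reg ||x||^2, then binarize (notch defects)."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ y)
    return (x > threshold).astype(float)     # relative conductivity change: 0 or 1
```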

  9. Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.

    PubMed

    Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil

    2018-01-25

    Due to recent developments in technology, the complexity of multimedia has significantly increased, and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape, and texture are examples of low-level image features, and such features play a significant role in image processing. A powerful representation of an image is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. As features define the behavior of an image, they determine its storage requirements, its efficiency in classification, and the time consumed. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios particular feature extraction techniques perform better. The effectiveness of the CBIR approach is fundamentally based on feature extraction; in image processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed image retrieval method is built on the YCbCr color space with a Canny edge histogram and the discrete wavelet transform; the combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is additionally compared to determine the suitability of a specific wavelet for image retrieval. The proposed algorithm is trained and tested on the Wang image database. For retrieval, an Artificial Neural Network (ANN) is applied to this standard CBIR dataset. The performance of the proposed descriptors is assessed by computing both precision and recall values and comparing them with other proposed methods, demonstrating that the proposed approach outperforms existing research in terms of average precision and recall.
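
    A minimal sketch of a feature vector in the spirit described here: a YCbCr color histogram concatenated with wavelet sub-band energies. The bin count, wavelet choice, and BT.601 conversion are illustrative assumptions, and PyWavelets is assumed; retrieval would then rank database images by a distance between such vectors.

```python
import numpy as np
import pywt

def feature_vector(rgb, bins=8, wavelet="haar", level=2):
    """Color histogram + wavelet energy features for an RGB uint8 image."""
    rgb = rgb.astype(float)
    # RGB -> YCbCr (BT.601 full-range approximation).
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    hist = np.concatenate([np.histogram(c, bins=bins, range=(0, 256),
                                        density=True)[0] for c in (y, cb, cr)])
    # Wavelet sub-band energies of the luma channel.
    coeffs = pywt.wavedec2(y, wavelet, level=level)
    energies = [np.mean(coeffs[0] ** 2)]
    for detail in coeffs[1:]:
        energies += [np.mean(d ** 2) for d in detail]
    return np.concatenate([hist, np.array(energies)])
```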

  10. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5x1.3 cm{sup 2}). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs, and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was performed by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, a paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2 to 886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (-6.1 cGy, 5.5 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical difference between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic = 0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time for radiochromic film used for in vivo dosimetry.
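
    The thresholding-and-erosion ROI detection can be sketched as follows. This is a minimal scipy illustration assuming film appears darker than the scanner background, with illustrative parameter values; it is not the validated algorithm itself, and the dose calibration step is omitted.

```python
import numpy as np
from scipy import ndimage

def film_rois(scan, thresh=0.6, erode_iters=3, min_pixels=200):
    """Return a labeled mask of film ROIs with edges/markings eroded away."""
    mask = scan < thresh * scan.max()                   # film is darker
    mask = ndimage.binary_erosion(mask, iterations=erode_iters)
    labels, n = ndimage.label(mask)
    for i in range(1, n + 1):                           # drop tiny specks
        if np.sum(labels == i) < min_pixels:
            labels[labels == i] = 0
    return labels

# Mean pixel value per ROI, ready for the dose calibration curve:
# means = ndimage.mean(scan, labels=rois, index=np.unique(rois)[1:])
```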

  11. Integration of digital signal processing technologies with pulsed electron paramagnetic resonance imaging

    PubMed Central

    Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.

    2006-01-01

    The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552

  12. Image processing analysis of nuclear track parameters for CR-39 detector irradiated by thermal neutron

    NASA Astrophysics Data System (ADS)

    Al-Jobouri, Hussain A.; Rajab, Mustafa Y.

    2016-03-01

    A CR-39 detector covered with a boric acid (H3BO3) pellet was irradiated by thermal neutrons from an (241Am-9Be) source with an activity of 12 Ci and a neutron flux of 10^5 n cm^-2 s^-1. The irradiation times T_D for the detector were 4 h, 8 h, 16 h, and 24 h. The detector was chemically etched in 6.25 N sodium hydroxide (NaOH) at 60 °C for 45 min. Images of the CR-39 detector after chemical etching were taken with a digital camera attached to an optical microscope, and MATLAB version 7.0 was used for the image processing. Analysis of the image processing outputs revealed the following relationships: (a) the irradiation time T_D has a linear relationship with the following nuclear track parameters: (i) total track number N_T; (ii) maximum track number M_RD (relative to track diameter D_T) in the response region from 2.5 µm to 4 µm; and (iii) maximum track number M_D (independent of track diameter D_T); (b) the irradiation time T_D has a logarithmic relationship with the maximum track number M_A (independent of track area A_T). The image processing technique, principally the track diameter D_T, can be taken into account for the classification of α-particle emitters, in addition to its contribution to the preparation of nano-filters and nano-membranes in the nanotechnology field.
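
    A minimal sketch of extracting track counts and diameters from an etched-detector micrograph, assuming scipy and dark tracks on a bright background; the threshold and pixel scale are illustrative, not those used in the study.

```python
import numpy as np
from scipy import ndimage

def track_stats(gray, thresh=0.5, pixel_um=0.5):
    """Count etched tracks and return their equivalent diameters in micrometers."""
    mask = gray < thresh * gray.max()           # tracks are dark pits
    labels, n_tracks = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n_tracks + 1))
    diameters = 2.0 * np.sqrt(areas / np.pi) * pixel_um   # equivalent circles
    return n_tracks, diameters
```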

  13. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection, and a super-resolution image reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy of training only a single overcomplete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation, improving the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
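
    The sparse-coding step at the heart of dictionary-based super-resolution can be sketched with orthogonal matching pursuit; dictionary training and the multi-class selection described above are assumed to happen elsewhere, and D below is a placeholder for a learned (sub-)dictionary with unit-norm columns.

```python
import numpy as np

def omp(D, x, n_nonzero=5):
    """Greedy sparse code: x is approximated by D @ alpha with few atoms."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best new atom
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, x, rcond=None)  # refit on the support
        residual = x - Ds @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha
```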

  14. WE-E-18C-01: Multi-Energy CT: Current Status and Recent Innovations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelc, N; McCollough, C; Yu, L

    2014-06-15

    Conventional computed tomography (CT) uses a single polychromatic x-ray spectrum and energy-integrating detectors, and produces images whose contrast depends on the effective attenuation coefficient of the broad-spectrum beam. This can introduce errors from beam hardening and does not produce the optimal contrast-to-noise ratio. In addition, multiple materials can have the same effective attenuation coefficient, causing different materials to be indistinguishable in conventional CT images. If transmission measurements at two or more energies are obtained, even with polychromatic beams, more specific information about the object can be obtained. If the object does not contain materials with k-edges in the spectrum, the x-ray attenuation can be well approximated by a linear combination of two processes (photoelectric absorption and Compton scattering) or, equivalently, two basis materials. For such cases, two spectral measurements suffice, although additional measurements can provide higher precision. If k-edge materials are present, additional spectral measurements can allow these materials to be isolated. Current commercial implementations use varied approaches, including two sources operating at different kVp, one source whose kVp is rapidly switched in a single scan, and a dual-layer detector that can provide spectral information in every reading. Processing of the spectral information can be performed in the raw data domain or in the image domain. The process of calculating the amounts of the two basis functions implicitly corrects for beam hardening and therefore can lead to improvements in quantitative accuracy. Information can be extracted to provide material-specific information beyond that of conventional CT. This additional information has been shown to be important in several clinical applications and can also lead to more efficient clinical protocols. Recent innovations in x-ray sources, detectors, and systems have made multi-energy CT much more practical and improved its performance. In addition, this is a very active area of research, and further improvements are expected through continued technological advances. Learning Objectives: 1. Basic principles of multi-energy CT. 2. Current implementations of multi-energy CT. 3. Data and image analysis methods in multi-energy CT. 4. Current clinical applications of dual-energy CT. 5. Recent innovations and anticipated advances in multi-energy CT.
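
    The two-basis decomposition can be sketched as a per-pixel 2x2 solve, assuming known effective attenuation coefficients of the basis materials at the two energies; the numbers below are illustrative placeholders, and real systems calibrate and often regularize this step.

```python
import numpy as np

# Rows: [low-kVp, high-kVp]; columns: [water, bone] effective mu (1/cm).
M = np.array([[0.25, 0.60],
              [0.20, 0.35]])

def decompose(mu_low, mu_high):
    """Per-pixel basis-material amounts from two attenuation maps."""
    rhs = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, Npix)
    water, bone = np.linalg.solve(M, rhs)               # solve M @ x = rhs
    return water.reshape(mu_low.shape), bone.reshape(mu_low.shape)
```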

  15. Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered,

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998, 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday and retrieved from the mission computer database Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.

    The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps:

    The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking.

    The contrast and brightness of the image was adjusted, and 'filters' were applied to enhance detail at several scales.

    The image was then geometrically warped to meet the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason Greenland looks larger than Africa on a Mercator map of the Earth.

    A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately.

    See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  16. Mars Orbiter Camera Views the 'Face on Mars' - Calibrated, contrast enhanced, filtered

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998, 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday and retrieved from the mission computer database Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.

    The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps:

    The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking.

    The contrast and brightness of the image was adjusted, and 'filters' were applied to enhance detail at several scales.

    The image was then geometrically warped to meet the computed position information for a Mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image, for the same reason Greenland looks larger than Africa on a Mercator map of the Earth.

    A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately.

    See PIA01441-1442 for additional processing steps. Also see PIA01236 for the raw image.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  17. A Freeware Path to Neutron Computed Tomography

    NASA Astrophysics Data System (ADS)

    Schillinger, Burkhard; Craft, Aaron E.

    Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a stiff investment and are available to the users only on-site at the imaging facility to do their data processing. There is a demand from users to have image processing software at home to do further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors. Operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these have been developed for X-rays or synchrotron radiation and are not immediately useable for neutron computed tomography. Three reconstruction packages and three 3D-viewers have been identified and used even for Gigabyte datasets. This paper is not a scientific publication in the classic sense, but is intended as a review to provide searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated, but surprisingly high-quality path to neutron computed tomography images in 3D, but not a replacement for the even more powerful commercial software mentioned above.

  18. Detection and Evaluation of Skin Disorders by One of Photogrammetric Image Analysis Methods

    NASA Astrophysics Data System (ADS)

    Güçin, M.; Patias, P.; Altan, M. O.

    2012-08-01

    Abnormalities of the skin may vary from simple acne to painful wounds that affect a person's quality of life. Detection of these kinds of disorders at early stages, followed by the evaluation of the abnormalities, is of high importance. Here, photogrammetry offers a non-contact solution by providing geometrically highly accurate data. Photogrammetry, first used for topographic purposes, has by virtue of terrestrial photogrammetry also become a useful technique in non-topographic applications (Wolf et al., 2000). Moreover, as the use of photogrammetry has expanded in parallel with technological development, analogue photographs have been replaced with digital images, and digital image processing techniques allow the modification of digital images using filters, registration processes, etc. In addition, photogrammetry (registering images to a common coordinate system) can serve as a tool for the comparison of temporal imaging data. The aim of this study is to examine several digital image processing techniques, in particular digital filters, that might be useful for detecting skin disorders. We examine affordable, user-friendly software that requires neither expertise nor pre-training. Since this is preliminary work for subsequent, deeper studies, Adobe Photoshop 7.0 is used as the present software. In addition, Adobe Photoshop released a DesAcc plug-in with the CS3 version that provides full compatibility with DICOM (Digital Imaging and Communications in Medicine) and PACS (Picture Archiving and Communications System), enabling doctors to store all medical data together with the relevant images and share them if necessary.

  19. MO-PIS-Exhibit Hall-01: Tools for TG-142 Linac Imaging QA I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, M; Wiesmeyer, M

    2014-06-15

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical "hands-on" information about the equipment and software systems that we use in our clinics. The therapy topic this year is solutions for TG-142 recommendations for linear accelerator imaging QA. Note that the sessions are being held in a special-purpose room built on the Exhibit Hall floor, to encourage further interaction with the vendors. Automated Imaging QA for TG-142 with RIT (Presentation Time: 2:45-3:15 PM): This presentation will discuss software tools for automated imaging QA and phantom analysis for TG-142. All modalities used in radiation oncology will be discussed, including CBCT, planar kV imaging, planar MV imaging, and imaging and treatment coordinate coincidence. Vendor-supplied phantoms as well as a variety of third-party phantoms will be shown, along with appropriate analyses, proper phantom setup procedures and scanning settings, and a discussion of image quality metrics. Tools for process automation will be discussed, including RIT Cognition (machine learning for phantom image identification), RIT Cerberus (automated file system monitoring and searching), and RunQueueC (batch processing of multiple images). In addition to phantom analysis, tools for statistical tracking, trending, and reporting will be discussed, with an introduction to statistical process control, a valuable tool for analyzing data and determining appropriate tolerances. An Introduction to TG-142 Imaging QA Using Standard Imaging Products (Presentation Time: 3:15-3:45 PM): Medical physicists want to understand the logic behind TG-142 imaging QA. What is often missing is a firm understanding of the connections between EPID and OBI phantom imaging, the software algorithms that calculate the QA metrics, the establishment of baselines, and the analysis and interpretation of the results. The goal of our brief presentation is to establish and solidify these connections. Our talk is motivated by the Standard Imaging, Inc. phantom and software solutions. We will present and explain each of the image quality metrics in TG-142 in terms of the theory, mathematics, and algorithms used to implement them in the Standard Imaging PIPSpro software. In the process, we will identify the regions of the phantom images that are analyzed by each algorithm. We will then discuss the creation of baselines and typical ranges of acceptable values for each image quality metric.

  20. Laser beam welding quality monitoring system based in high-speed (10 kHz) uncooled MWIR imaging sensors

    NASA Astrophysics Data System (ADS)

    Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo

    2015-05-01

    The combination of flexibility, productivity, precision, and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automated, zero-defect manufacturing demand smarter heads in which lasers, optics, actuators, sensors, and electronics are integrated in a single compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process, so temperature and heat dynamics are key parameters to monitor. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and its spatial distribution. This work describes the results of using an innovative low-cost, high-speed infrared imager based on the first quantum infrared imager on the market monolithically integrated with a Si-CMOS ROIC. The sensor is able to provide low-resolution images at frame rates up to 10 kHz in uncooled operation, at a cost comparable to traditional infrared spot detectors. To demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melt pool images to be recorded at frame rates of 10 kHz. In addition, specific software was developed for defect detection and classification. Multiple laser welding processes were recorded to study the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored, and the classifier was fed with the experimental images obtained. Self-learning strategies were implemented with very promising results, demonstrating the feasibility of using low-cost, high-speed infrared imagers in advancing towards real-time, in-line zero-defect production systems.

  1. "Proximal Sensing" capabilities for snow cover monitoring

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo

    2013-04-01

    The seasonal snow cover represents one of the most important land cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of tourism in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used to observe snow-covered areas; properly processed, these images constitute a very important environmental data source. Images captured by digital cameras are a useful tool at the local scale, providing data even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having good resolution (at least 800x600 with 16 million colours) and a very good sampling frequency (hourly images taken throughout the year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating the available water resources, and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover in webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and in the Apennines at a pilot station equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and better than those obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of imagery can usefully support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.
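
    A minimal sketch of webcam snow detection in the spirit of what Snow-noSnow automates: snow pixels are bright and nearly achromatic, so a simple brightness/chroma threshold yields a snow fraction per image. The thresholds are illustrative, not those of the actual software.

```python
import numpy as np
from PIL import Image

def snow_fraction(path, bright=170, chroma=25):
    """Fraction of pixels classified as snow in a webcam frame."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=int)
    brightness = rgb.mean(axis=2)
    spread = rgb.max(axis=2) - rgb.min(axis=2)   # low for gray/white pixels
    snow = (brightness > bright) & (spread < chroma)
    return float(snow.mean())
```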

  2. MEM application to IRAS CPC images

    NASA Technical Reports Server (NTRS)

    Marston, A. P.

    1994-01-01

    A method for applying the Maximum Entropy Method (MEM) to Chopped Photometric Channel (CPC) IRAS additional observations is illustrated. The original CPC data suffered from repeatability problems, which MEM is able to cope with by using a noise image produced from the results of separate data scans of objects. The process produces images of small areas of sky with circular Gaussian beams of approximately 30 in. full width at half maximum resolution at 50 and 100 microns. Comparison is made to previous far-infrared reconstructions as well as to the morphologies of objects at other wavelengths. Some projects with this dataset are discussed.

  3. Using Graphical Processing Units to Accelerate Orthorectification, Atmospheric Correction and Transformations for Big Data

    NASA Astrophysics Data System (ADS)

    O'Connor, A. S.; Justice, B.; Harris, A. T.

    2013-12-01

    Graphics Processing Units (GPUs) are high-performance multiple-core processors capable of very high computational speeds and large data throughput. Modern GPUs are inexpensive and widely available commercially. They are general-purpose parallel processors with support for a variety of programming interfaces, including industry-standard languages such as C. GPU implementations of algorithms that are well suited for parallel processing can often achieve speedups of several orders of magnitude over optimized CPU codes. Significant speed improvements for imagery orthorectification, atmospheric correction, target detection, and image transformations such as Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide a 50x - 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating the ENVI and IDL processes that can best take advantage of parallelization. Testing Exelis VIS has performed shows that orthorectification can take as long as two hours for a WorldView-1 35,000 x 35,000 pixel image; with GPU orthorectification, the same process takes three minutes. By speeding up image processing, imagery can successfully be used by first responders and by scientists making rapid discoveries with near-real-time data, and it provides an operational component to data centers needing to quickly process and disseminate data.

  4. Orthorectified High Resolution Multispectral Imagery for Application to Change Detection and Analysis

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.

    1997-01-01

    The project team has outlined several technical objectives which will allow the companies to improve on their current capabilities. These include modifications to the imaging system, enabling it to operate more cost effectively and with greater ease of use, automation of the post-processing software to mosaic and orthorectify the image scenes collected, and the addition of radiometric calibration to greatly aid in the ability to perform accurate change detection. Business objectives include fine tuning of the market plan plus specification of future product requirements, expansion of sales activities (including identification of necessary additional resources required to meet stated revenue objectives), development of a product distribution plan, and implementation of a world wide sales effort.

  5. Exploitation of commercial remote sensing images: reality ignored?

    NASA Astrophysics Data System (ADS)

    Allen, Paul C.

    1999-12-01

    The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL, among others, are all working towards the launch and service of one- to five-meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprising exploitation tools, exploitation training, library systems, and image management systems. From this it would appear that the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small-quantity users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  6. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for the use by a medium-size research group is described. There are three central processing units (CPU) in the configuration, each with 16 MB memory, and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast-Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  7. The Elixir System: Data Characterization and Calibration at the Canada-France-Hawaii Telescope

    NASA Astrophysics Data System (ADS)

    Magnier, E. A.; Cuillandre, J.-C.

    2004-05-01

    The Elixir System at the Canada-France-Hawaii Telescope performs data characterization and calibration for all data from the wide-field mosaic imagers CFH12K and MegaPrime. The project has several related goals, including monitoring data quality, providing high-quality master detrend images, determining the photometric and astrometric calibrations, and automatic preprocessing of images for queued service observing (QSO). The Elixir system has been used for all data obtained with CFH12K since the QSO project began in 2001 January. In addition, it has been used to process archival data from the CFH12K and all MegaPrime observations beginning in 2002 December. The Elixir system has been extremely successful in providing well-characterized data to the end observers, who may otherwise be overwhelmed by data-processing concerns.

  8. Magellan mission summary

    NASA Technical Reports Server (NTRS)

    Saunders, R. S.; Spear, A. J.; Allin, P. C.; Austin, R. S.; Berman, A. L.; Chandlee, R. C.; Clark, J.; Decharon, A. V.; De Jong, E. M.; Griffith, D. G.

    1992-01-01

    Magellan started mapping the planet Venus on September 15, 1990, and after one cycle (one Venus day or 243 earth days) had mapped 84 percent of the planet's surface. This returned an image data volume greater than all past planetary missions combined. Spacecraft problems were experienced in flight. Changes in operational procedures and reprogramming of onboard computers minimized the amount of mapping data lost. Magellan data processing is the largest planetary image-processing challenge to date. Compilation of global maps of tectonic and volcanic features, as well as impact craters and related phenomena and surface processes related to wind, weathering, and mass wasting, has begun. The Magellan project is now in an extended mission phase, with plans for additional cycles out to 1995. The Magellan project will fill in mapping gaps, obtain a global gravity data set between mid-September 1992 and May 1993, acquire images at different view angles, and look for changes on the surface from one cycle to another caused by surface activity such as volcanism, faulting, or wind activity.

  9. Study of talcum charging status in parallel plate electrostatic separator based on particle trajectory analysis

    NASA Astrophysics Data System (ADS)

    Yunxiao, CAO; Zhiqiang, WANG; Jinjun, WANG; Guofeng, LI

    2018-05-01

    Electrostatic separation has been extensively used in mineral processing and has the potential to separate gangue minerals from raw talcum ore. In electrostatic separation, the particle charging status is one of the important influencing factors. To accurately describe the charging status of talcum particles in a parallel plate electrostatic separator, this paper proposes a modern image-processing method. Based on the actual trajectories obtained from sequence images of particle movement and an analysis of the physical forces applied to a charged particle, a numerical model is built that calculates the charge-to-mass ratios representing the charging status of particles and simulates the particle trajectories. The simulated trajectories agree well with the experimental results obtained by image processing. In addition, chemical composition analysis is employed to reveal the relationship between iron gangue mineral content and charge-to-mass ratios. The results show that the proposed method is effective for describing the particle charging status in electrostatic separation.
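
    As a rough illustration of how a charge-to-mass ratio can be recovered from observed trajectories, consider the minimal sketch below. It assumes a uniform horizontal field between the plates, a particle released from rest, and negligible drag; the function names and parameter values are illustrative, not the paper's actual force model.

        import numpy as np

        def simulate_trajectory(q_over_m, e_field=2e5, g=9.81, t_end=0.2, steps=200):
            """Positions (x, y) of a particle released at rest at the origin."""
            t = np.linspace(0.0, t_end, steps)
            x = 0.5 * q_over_m * e_field * t**2   # electrostatic force acts horizontally
            y = -0.5 * g * t**2                   # gravity acts downward
            return x, y

        def fit_q_over_m(x_obs, y_obs, e_field=2e5, g=9.81):
            """Least-squares q/m from observed deflections (free fall fixes the time)."""
            t2 = -2.0 * y_obs / g                 # t^2 recovered from the vertical drop
            return float(np.sum(x_obs * t2) / np.sum(0.5 * e_field * t2**2))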

  10. MARS-MD: rejection based image domain material decomposition

    NASA Astrophysics Data System (ADS)

    Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.

    2018-05-01

    This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
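
    The three-part heuristic can be sketched in a few lines. The version below assumes a linear attenuation model, uses non-negative least squares as one possible per-sub-problem solver, and substitutes a simple smallest-residual rule for the Segmentation and Angular Rejection criteria described above; it is an illustration, not the MARS-MD implementation.

        import itertools
        import numpy as np
        from scipy.optimize import nnls

        def mars_md_sketch(attenuation, basis, subset_size):
            """attenuation: measured vector (n_bins,); basis: (n_bins, n_materials)."""
            n_mat = basis.shape[1]
            best, best_resid = None, np.inf
            # (1) split the under-determined problem into sub-problems with fewer materials
            for subset in itertools.combinations(range(n_mat), subset_size):
                # (2) solve each sub-problem (non-negativity as one possible constraint)
                coeffs, resid = nnls(basis[:, subset], attenuation)
                # (3) rejection: keep only the sub-problem with the smallest residual
                if resid < best_resid:
                    best_resid, best = resid, (subset, coeffs)
            solution = np.zeros(n_mat)
            subset, coeffs = best
            solution[list(subset)] = coeffs  # solution is sparse in the material domain
            return solution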

  11. Generalized Chirp Scaling Combined with Baseband Azimuth Scaling Algorithm for Large Bandwidth Sliding Spotlight SAR Imaging

    PubMed Central

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents an efficient and precise imaging algorithm for large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high order phase coupling along the range and azimuth dimensions; this coupling causes defocusing along both dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, based on the GCS algorithm, which successfully mitigates the range-dimension defocusing of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing is achieved by the azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data, and show that great improvements in focus depth and imaging accuracy are obtained with the GCS-BAS algorithm. PMID:28555057

  12. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
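
    As a rough sketch of the decomposition idea (not the authors' spectral-estimation procedure), each measured RGB channel can be modeled as a visible component plus a channel-specific NIR leakage estimated from the N channel. The leakage coefficients below are illustrative assumptions.

        import numpy as np

        def restore_rgb(rgbn, nir_leak=(0.30, 0.25, 0.20)):
            """rgbn: float array (H, W, 4) with channels R, G, B, N in [0, 1]."""
            rgb, n = rgbn[..., :3], rgbn[..., 3:4]
            k = np.asarray(nir_leak).reshape(1, 1, 3)
            visible = rgb - k * n          # remove the estimated NIR contribution
            return np.clip(visible, 0.0, 1.0)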

  13. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  14. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    PubMed

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, while increasing the visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and the processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.

  15. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ to deliver several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that can efficiently utilize resources, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering, median filtering, and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for performing automated segmentation of image stacks. F3D is a descendant of Quant-CT, another software package developed by the same group; the two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
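
    The two-part structure of a gray-level MM operator can be illustrated with a small SciPy sketch standing in for F3D's OpenCL kernels: a flat line structuring element is translated over the image, and erosion/dilation define the comparison performed at each position. The lengths and orientations here are assumptions for illustration.

        import numpy as np
        from scipy.ndimage import grey_erosion, grey_dilation

        def line_structuring_element(length, horizontal=True):
            shape = (1, length) if horizontal else (length, 1)
            return np.zeros(shape)  # flat (zero-height) structuring element

        def grey_opening_line(image, length=7, horizontal=True):
            """Gray-level opening with a line SE at a discrete orientation."""
            se = line_structuring_element(length, horizontal)
            eroded = grey_erosion(image, structure=se)   # min under the translated SE
            return grey_dilation(eroded, structure=se)   # max under the translated SE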

  16. Hierarchical classification strategy for Phenotype extraction from epidermal growth factor receptor endocytosis screening.

    PubMed

    Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J

    2016-05-03

    Endocytosis is regarded as a mechanism of attenuating epidermal growth factor receptor (EGFR) signaling and of receptor degradation. Increasing evidence shows that breast cancer progression is associated with a defect in EGFR endocytosis. In order to find related ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. Subsequently, a dedicated automatic image and data analysis system is developed and applied to extract phenotype measurements and distinguish different developmental episodes from the large number of images acquired through high-throughput imaging. For the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements. The manner in which prominent measurements are chosen to represent the dynamics of the EGFR process is therefore a crucial step in identifying the phenotype. In the subsequent data analysis, classification is used to categorize each observation using all prominent measurements obtained from image analysis, so a better classification strategy raises the performance of the whole image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages in EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated. The results of performance assessment clearly demonstrate that our hierarchical classification scheme combined with a selected set of features provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, the addition of the wavelet-based texture features contributes to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.

  17. Data on the surface morphology of additively manufactured Ti-6Al-4V implants during processing by plasma electrolytic oxidation.

    PubMed

    van Hengel, Ingmar A J; Riool, Martijn; Fratila-Apachitei, Lidy E; Witte-Bouma, Janneke; Farrell, Eric; Zadpoor, Amir A; Zaat, Sebastian A J; Apachitei, Iulian

    2017-08-01

    Additively manufactured Ti-6Al-4V implants were biofunctionalized using plasma electrolytic oxidation. At various time points during this process, scanning electron microscopy imaging was performed to analyze the surface morphology (van Hengel et al., 2017) [1]. These data show the changes in surface morphology during plasma electrolytic oxidation. The data presented in this article are related to the research article "Selective laser melting porous metallic implants with immobilized silver nanoparticles kill and prevent biofilm formation by methicillin-resistant Staphylococcus aureus" (van Hengel et al., 2017) [1].

  18. Investigation into image quality difference between total variation and nonlinear sparsifying transform based compressed sensing

    NASA Astrophysics Data System (ADS)

    Dong, Jian; Kudo, Hiroyuki

    2017-03-01

    Compressed sensing (CS) is attracting growing attention in sparse-view computed tomography (CT) image reconstruction. The most standard approach to CS is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in reconstruction of practical CT images, in the form of patchy artifacts, jagged edges and loss of image textures. Most existing CS approaches, including TV, achieve image quality improvement by applying linear transforms to the object image, but linear transforms usually fail to take discontinuities such as edges and image textures into account, which is considered to be the key reason for the image distortions. Discussion of nonlinear-filter-based image processing has a long history, and it is clear that nonlinear filters yield better results than linear filters in image processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains obtained. Subsequently, Zhang developed the application of nonlocal-means-based CS. It is gradually becoming clear that nonlinear-transform-based CS is superior to linear-transform-based CS in improving image quality; to the best of our knowledge, however, this has not been clearly established in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear-sparsifying-transform-based CS, as well as the image quality differences among different nonlinear-sparsifying-transform-based CS methods, in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear-sparsifying-transform-based CS algorithm.
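
    For reference, the TV-minimization baseline discussed above is commonly written (in one standard constrained form; the notation here is generic, not the paper's) as

        \min_{x}\ \mathrm{TV}(x) \quad \text{subject to} \quad \|Ax - b\|_2 \le \epsilon,
        \qquad
        \mathrm{TV}(x) = \sum_{i,j} \sqrt{(x_{i+1,j} - x_{i,j})^2 + (x_{i,j+1} - x_{i,j})^2},

    where x is the image, A is the projection (system) matrix, b is the measured sinogram, and \epsilon bounds the data mismatch. Linear sparsifying transforms replace \mathrm{TV}(x) with \|\Psi x\|_1 for a linear operator \Psi; loosely speaking, the nonlinear variants studied here instead build the penalty from a nonlinear filter such as the median root prior or nonlocal means.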

  19. Shuttle imaging radar-C science plan

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The Shuttle Imaging Radar-C (SIR-C) mission will yield new and advanced scientific studies of the Earth. SIR-C will be the first instrument to simultaneously acquire images at L-band and C-band with HH, VV, HV, or VH polarizations, as well as images of the phase difference between HH and VV polarizations. These data will be digitally encoded and recorded using onboard high-density digital tape recorders and will later be digitally processed into images using the JPL Advanced Digital SAR Processor. SIR-C geologic studies include cold-region geomorphology, fluvial geomorphology, rock weathering and erosional processes, tectonics and geologic boundaries, geobotany, and radar stereogrammetry. Hydrology investigations cover arid, humid, wetland, snow-covered, and high-latitude regions. Additionally, SIR-C will provide the data to identify and map vegetation types, interpret landscape patterns and processes, assess the biophysical properties of plant canopies, and determine the degree of radar penetration of plant canopies. In oceanography, SIR-C will provide the information necessary to: forecast ocean directional wave spectra; better understand internal wave-current interactions; study the relationship of ocean-bottom features to surface expressions and the correlation of wind signatures to radar backscatter; and detect current-system boundaries, oceanic fronts, and mesoscale eddies. Finally, as the first spaceborne SAR with multi-frequency, multipolarization imaging capabilities, SIR-C will open whole new areas of glaciology for study when flown in a polar orbit.

  20. Additive manufacturing of reflective optics: evaluating finishing methods

    NASA Astrophysics Data System (ADS)

    Leuteritz, G.; Lachmayer, R.

    2018-02-01

    Individually shaped light distributions are becoming more and more important in lighting technology, and the importance of additively manufactured reflectors is thus increasing significantly. The vast field of applications, ranging from automotive lighting to medical imaging, underscores this point. However, the surfaces of additively manufactured reflectors suffer from insufficient optical properties even when manufactured using optimized process parameters for the Selective Laser Melting (SLM) process. Post-process treatments of the reflectors are therefore necessary in order to further enhance their optical quality. This work concentrates on the effectiveness of post-process procedures for reflective optics. Starting from already optimized aluminum reflectors manufactured on an SLM machine, the parts are machined differently after the SLM process. Selected finishing methods such as laser polishing, sputtering, and sand blasting are applied, and their effects are quantified and compared. The post-process procedures are investigated for their impact on surface roughness and reflectance as well as geometrical precision. For each finishing method a demonstrator is created and compared with a fully milled sample and with the other demonstrators. Ultimately, guidelines are developed to determine the optimal treatment of additively manufactured reflectors regarding their optical and geometrical properties. Simulations of the light distributions will be validated against the developed demonstrators.

  1. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper demonstrates how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet to enable end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) are described, as are techniques for precise 'as-built' modeling using the calibrated images from which panoramas have been derived and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering demonstrate the extent to which such solutions are scalable to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.

  2. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    PubMed

    Liang, Yicheng; Peng, Hao

    2015-02-07

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.
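
    A rough sketch of the virtual-DOI / multi-ray idea: subdivide each crystal into layers along its depth, cast a ray between every pair of layer midpoints, and let the set of rays approximate the detection response of one line-of-response. The geometry and names below are illustrative assumptions, not the paper's system-matrix model.

        import numpy as np

        def lor_sub_rays(c1_front, c1_back, c2_front, c2_back, n_layers=4):
            """Each argument is a 3-vector (crystal face center); returns ray list."""
            t = (np.arange(n_layers) + 0.5) / n_layers
            p1 = c1_front + np.outer(t, c1_back - c1_front)  # virtual DOI layers, det 1
            p2 = c2_front + np.outer(t, c2_back - c2_front)  # virtual DOI layers, det 2
            # every layer-to-layer ray contributes one sub-response to this LOR
            return [(a, b) for a in p1 for b in p2]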

  3. Novel Card Games for Learning Radiographic Image Quality and Urologic Imaging in Veterinary Medicine.

    PubMed

    Ober, Christopher P

    Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process.

  4. Using a Smartphone Camera for Nanosatellite Attitude Determination

    NASA Astrophysics Data System (ADS)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
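
    A minimal sketch of the thresholding-and-centroiding step described above; the fixed threshold and the omission of any camera-model conversion from pixel coordinates to a body-frame Moon vector are simplifying assumptions.

        import numpy as np

        def moon_centroid(image, threshold=200):
            """image: 2-D uint8 array; returns (row, col) centroid of bright pixels."""
            mask = image >= threshold
            if not mask.any():
                return None  # Moon not in frame
            rows, cols = np.nonzero(mask)
            return rows.mean(), cols.mean()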

  5. An image analysis of TLC patterns for quality control of saffron based on soil salinity effect: A strategy for data (pre)-processing.

    PubMed

    Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar

    2018-01-15

    The quality of saffron, a valuable food additive, can considerably affect consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin layer chromatographic (TLC) patterns is introduced. This includes performing a series of image pre-processing techniques on the TLC images, such as compression, inversion, elimination of the general baseline (using asymmetric least squares (AsLS)), removal of spot shift and concavity (by correlation optimized warping (COW)), and finally conversion to RGB chromatograms. Subsequently, an unsupervised multivariate data analysis comprising principal component analysis (PCA) and k-means clustering was utilized to investigate the effect of soil salinity, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, saffron from different areas of Iran was evaluated and classified by quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
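
    A compact sketch of the chain described above, with the AsLS baseline written out in a dense small-signal form and PCA plus k-means for the unsupervised step; the COW warping stage is omitted, and all parameter values are assumptions rather than the paper's settings.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
            """Asymmetric least squares baseline (Eilers-style), dense variant."""
            n = y.size
            d2 = np.diff(np.eye(n), 2, axis=0)           # second-difference operator
            w = np.ones(n)
            for _ in range(n_iter):
                z = np.linalg.solve(np.diag(w) + lam * d2.T @ d2, w * y)
                w = p * (y > z) + (1 - p) * (y <= z)     # asymmetric weights
            return z

        def cluster_chromatograms(chromatograms, n_clusters=3):
            """chromatograms: (n_samples, n_points) baseline-corrected RGB traces;
            assumes at least five samples for the 5-component PCA."""
            scores = PCA(n_components=5).fit_transform(chromatograms)
            return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)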

  6. Attack to an Image Encryption Based on Chaotic Logistic Map

    NASA Astrophysics Data System (ADS)

    Wang, Xing-Yuan; Chen, Feng; Wang, Tian; Xu, Dahai; Ma, Yutian

    2013-10-01

    This paper presents two different attacks on a recently proposed image encryption scheme based on a chaotic logistic map. The cryptosystem under study uses an 80-bit secret key and employs two chaotic logistic maps; the initial conditions of the logistic maps are derived from the secret key by assigning different weights to its bits. In the proposed encryption process, eight different procedures are used to encrypt the pixels of an image, and which procedure is applied to a given pixel is determined by the output of the logistic map. The secret key is revised after encrypting each block, which consists of 16 pixels of the image. The encryption process has weaknesses, the worst of which is that every plaintext byte is substituted independently, so the ciphertext of a byte does not change even when other bytes change. As a result of this weakness, a chosen-plaintext attack and a chosen-ciphertext attack can recover the ciphered image without any knowledge of the key value.
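
    For reference, the logistic map driving such schemes is simple to state. The sketch below shows how a byte keystream is produced from an initial condition; the parameter r and the byte mapping are generic assumptions for illustration, not the attacked scheme's exact construction.

        def logistic_keystream(x0, n, r=3.9999):
            """Iterate x_{k+1} = r * x_k * (1 - x_k) and emit one byte per step."""
            x, out = x0, []
            for _ in range(n):
                x = r * x * (1 - x)
                out.append(int(x * 256) % 256)
            return out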

  7. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects, and precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  8. Clustered functional MRI of overt speech production.

    PubMed

    Sörös, Peter; Sokoloff, Lisa Guttman; Bose, Arpita; McIntosh, Anthony R; Graham, Simon J; Stuss, Donald T

    2006-08-01

    To investigate the neural network of overt speech production, event-related fMRI was performed in 9 young healthy adult volunteers. A clustered image acquisition technique was chosen to minimize speech-related movement artifacts. Functional images were acquired during the production of oral movements and of speech of increasing complexity (isolated vowel as well as monosyllabic and trisyllabic utterances). This imaging technique and behavioral task enabled depiction of the articulo-phonologic network of speech production from the supplementary motor area at the cranial end to the red nucleus at the caudal end. Speaking a single vowel and performing simple oral movements involved very similar activation of the cortical and subcortical motor systems. More complex, polysyllabic utterances were associated with additional activation in the bilateral cerebellum, reflecting increased demand on speech motor control, and additional activation in the bilateral temporal cortex, reflecting the stronger involvement of phonologic processing.

  9. 32 CFR 701.54 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Computer search is based on the total cost of the central processing unit, input-output devices, and memory... charge for office copy up to six images)—$3.50 Each additional image—$ .10 Each typewritten page—$3.50...

  10. 32 CFR 701.54 - Collection of fees and fee rates for technical data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Computer search is based on the total cost of the central processing unit, input-output devices, and memory... charge for office copy up to six images)—$3.50 Each additional image—$ .10 Each typewritten page—$3.50...

  11. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    PubMed

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm Eulerian Video Magnification (EVM) has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical testing.
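
    The core Eulerian idea can be sketched in a few lines: temporally band-pass each pixel and add an amplified copy back to the input. This omits EVM's spatial (pyramid) decomposition, and the filter design, passband, and gain are illustrative assumptions; a few dozen frames are needed for the zero-phase filter to be applicable.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def magnify_motion(frames, fps, lo=0.8, hi=3.0, alpha=20.0):
            """frames: float array (T, H, W); returns motion-amplified frames."""
            b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="bandpass")
            bandpassed = filtfilt(b, a, frames, axis=0)  # per-pixel temporal filter
            return frames + alpha * bandpassed           # amplify the band of interest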

  12. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has remained a persistent challenge and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
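
    The IHS fusion step named above can be sketched as follows, using HSV as a stand-in color model and assuming the LR color image has already been upsampled to the HR grid; this illustrates only the intensity substitution, not the authors' full directionally-adaptive pipeline.

        import numpy as np
        from skimage.color import rgb2hsv, hsv2rgb

        def ihs_fuse(rgb_lr_upsampled, intensity_hr):
            """rgb_lr_upsampled: (H, W, 3) in [0, 1]; intensity_hr: (H, W) in [0, 1]."""
            hsv = rgb2hsv(rgb_lr_upsampled)
            hsv[..., 2] = intensity_hr        # swap in the high-resolution intensity
            return hsv2rgb(hsv)               # hue and saturation come from the LR image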

  13. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has remained a persistent challenge and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  14. MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences

    PubMed Central

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.

    2016-01-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
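
    A rough sketch of the spanning-tree idea: build a graph whose edge weights are pairwise registration costs, take its minimum spanning tree, and register images along tree paths toward an automatically chosen root. The cost function, the hub-based root choice, and the dense pairwise loop are illustrative assumptions; MISTICA's actual weighting and anchor selection are defined in the paper.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        def mst_alignment_order(images, cost):
            """cost(a, b): positive dissimilarity; lower = easier pair to align."""
            n = len(images)
            w = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    w[i, j] = cost(images[i], images[j])  # assumed strictly positive
            tree = minimum_spanning_tree(w).toarray()
            # root at the node with the most tree neighbors (a hub) as one
            # plausible automatic anchor choice
            degree = (tree > 0).sum(0) + (tree > 0).sum(1)
            return tree, int(np.argmax(degree))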

  15. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    PubMed

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.

  16. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases that require multiple segmentations and visualizations during the radiologists' workflow. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.

  17. Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process

    NASA Astrophysics Data System (ADS)

    Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.

    2015-02-01

    This paper presents the development of methods for real-time fine-tuning of a high power laser welding process for thick steel using a compact smart camera system. When welding in a butt-joint configuration, the laser beam's location needs to be adjusted exactly to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and a Hough transform on an associated FPGA. Filtering of the Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab using image data captured with adaptive integration time. The simulations are performed in a hardware-oriented way to allow real-time implementation of the algorithms on the smart camera system.
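
    A hedged sketch of the Hough-based seam extraction stage described above, using OpenCV in place of the focal-plane ASIC + FPGA pipeline; the vote threshold is an illustrative assumption, and the temporal-windowing filter over successive frames is only noted in the comments.

        import numpy as np
        import cv2

        def seam_line(segmented, min_votes=80):
            """segmented: binary uint8 image from the segmentation stage."""
            lines = cv2.HoughLines(segmented, 1, np.pi / 180, min_votes)
            if lines is None:
                return None
            rho, theta = lines[0][0]   # strongest candidate; temporal windowing
            return rho, theta          # over frames would further stabilize this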

  18. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    During the last years, Time-of-Flight (TOF) sensors have achieved a significant impact on research fields in machine vision. In comparison to stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, and camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper, a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
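
    For reference, the 4-phase-shift computation reduces to one arctangent per pixel; atan2 below plays the role of the hardware arctangent block being optimized. The modulation frequency and the sample ordering (A0..A3 at 0/90/180/270 degrees) are assumptions for illustration.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def tof_range(a0, a1, a2, a3, f_mod=20e6):
            """Per-pixel range image from four phase images (NumPy arrays)."""
            phase = np.arctan2(a3 - a1, a0 - a2)    # the costly arctangent step
            phase = np.mod(phase, 2 * np.pi)        # wrap into [0, 2*pi)
            return C * phase / (4 * np.pi * f_mod)  # unambiguous range span c/(2f)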

  19. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.

  20. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.
