SAR processing using SHARC signal processing systems
NASA Astrophysics Data System (ADS)
Huxtable, Barton D.; Jackson, Christopher R.; Skaron, Steve A.
1998-09-01
Synthetic aperture radar (SAR) is uniquely suited to help solve the Search and Rescue problem since it can be utilized day or night and through both dense fog and thick cloud cover. Other papers in this session, and in this session in 1997, describe the various SAR image processing algorithms that are being developed and evaluated within the Search and Rescue Program. All of these approaches to using SAR data require substantial amounts of digital signal processing: for the SAR image formation, and possibly for the subsequent image processing. In recognition of the demanding processing that will be required for an operational Search and Rescue Data Processing System (SARDPS), NASA/Goddard Space Flight Center and NASA/Stennis Space Center are conducting a technology demonstration utilizing SHARC multi-chip modules from Boeing to perform SAR image formation processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaf, S.; APS Engineering Support Division
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
Image detection and compression for memory efficient system analysis
NASA Astrophysics Data System (ADS)
Bayraktar, Mustafa
2015-02-01
Advances in digital signal processing have progressed toward efficient use of memory and computation. Both factors can be exploited through feasible techniques of image storage that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
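As an illustration of the retrieval idea described above, here is a minimal sketch of SIFT keypoint extraction and descriptor matching using OpenCV; this is not the paper's code, and the file names are hypothetical:

```python
import cv2

# Load two grayscale images (hypothetical file names).
img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-D SIFT descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors; Lowe's ratio test discards ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} distinctive matches out of {len(kp1)} keypoints")
```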
Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.
Ozaki, Nobuyuki
2002-07-01
This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion on the suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality: real-time monitoring, and process analysis functionality: a troubleshooting tool. This paper describes the formulation of practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are listed for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, which is the merger with process data surveillance, or the SCADA system, is also explained.
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
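The heart of OIS is fitting a convolution kernel that maps the reference image onto the science image so that the difference isolates transients. Below is a minimal, single-kernel sketch of the delta-function-basis least-squares fit on synthetic data; it has no spatial variation and no GPU acceleration, and is illustrative only, not the authors' pipeline:

```python
import numpy as np

def ois_difference(ref, sci, k=5):
    """Fit a k x k kernel (delta-function basis) mapping ref -> sci by least
    squares, then return the difference image. Edges wrap via np.roll, which
    is acceptable for a demonstration but not for production photometry."""
    pad = k // 2
    cols = []
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            cols.append(np.roll(np.roll(ref, dy, axis=0), dx, axis=1).ravel())
    A = np.stack(cols, axis=1)                 # one column per kernel pixel
    kern, *_ = np.linalg.lstsq(A, sci.ravel(), rcond=None)
    model = (A @ kern).reshape(ref.shape)      # ref convolved with fitted kernel
    return sci - model                         # transients survive subtraction

rng = np.random.default_rng(0)
ref = rng.normal(100.0, 5.0, (64, 64))
sci = 1.1 * ref + rng.normal(0.0, 1.0, ref.shape)   # flux-scaled copy plus noise
print("residual std:", ois_difference(ref, sci).std())
```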
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.
Pre-Processes for Urban Areas Detection in SAR Images
NASA Astrophysics Data System (ADS)
Altay Açar, S.; Bayır, Ş.
2017-11-01
In this study, pre-processes for urban areas detection in synthetic aperture radar (SAR) images are examined. These pre-processes are image smoothing, thresholding and white coloured regions determination. Image smoothing is carried out to remove noise; thresholding is then applied to obtain a binary image. Finally, candidate urban areas are detected by using white coloured regions determination. All pre-processes are applied by utilizing the developed software. Two different SAR images acquired by TerraSAR-X are used in the experimental study. The obtained results are shown visually.
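The three pre-processing steps map directly onto standard library calls. A sketch using scikit-image rather than the authors' own software; the input file and area threshold are assumptions:

```python
import numpy as np
from skimage import filters, measure

sar = np.load("sar_amplitude.npy")                    # hypothetical SAR amplitude array
smoothed = filters.gaussian(sar, sigma=2)             # step 1: noise suppression
binary = smoothed > filters.threshold_otsu(smoothed)  # step 2: binary image
labels = measure.label(binary)                        # step 3: white-region determination
candidates = [r for r in measure.regionprops(labels) if r.area > 50]
print(f"{len(candidates)} candidate urban regions")
```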
scikit-image: image processing in Python.
van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony
2014-01-01
scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
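A short usage example of the library's API, using one of its bundled sample images:

```python
from skimage import data, feature, filters

image = data.camera()                                # bundled sample image
edges = feature.canny(filters.gaussian(image, sigma=1.5))
print(int(edges.sum()), "edge pixels detected")
```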
Study of the urban evolution of Brasilia with the use of LANDSAT data
NASA Technical Reports Server (NTRS)
Deoliveira, M. D. N. (Principal Investigator); Foresti, C.; Niero, M.; Parreiras, E. M. D. F.
1984-01-01
The urban growth of Brasilia within the last ten years is analyzed with special emphasis on the utilization of remote sensing orbital data and automatic image processing. The urban spatial structure and the monitoring of its temporal changes were examined in a whole and dynamic way by the utilization of MSS-LANDSAT images for June 1973, 1978 and 1983. In order to aid data interpretation, a registration algorithm implemented in the Interactive Multispectral Image Analysis System (IMAGE-100) was utilized aiming at the overlap of multitemporal images. The utilization of suitable digital filters, combined with the images overlap, allowed a rapid identification of areas of possible urban growth and oriented the field work. The results obtained permitted an evaluation of the urban growth of Brasilia, taking as reference the proposal stated for the construction of the city.
Radiology utilizing a gas multiwire detector with resolution enhancement
Majewski, Stanislaw; Majewski, Lucasz A.
1999-09-28
This invention relates to a process and apparatus for obtaining filmless, radiological, digital images utilizing a gas multiwire detector. Resolution is enhanced through projection geometry. This invention further relates to imaging systems for X-ray examination of patients or objects, and is particularly suited for mammography.
Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang
2015-04-01
Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
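A minimal sketch of the framework's core ingredient, an SVM classifier over extracted image features, using scikit-learn with a stand-in dataset; the paper's C. elegans images and feature extraction are not reproduced here:

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in dataset: 8x8 digit images in place of worm/cell image features.
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Scale features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), svm.SVC(kernel="rbf", C=10))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```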
Saliency-aware food image segmentation for personal dietary assessment using a wearable computer
USDA-ARS?s Scientific Manuscript database
Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing h...
Tse computers. [ultrahigh speed optical processing for two dimensional binary image]
NASA Technical Reports Server (NTRS)
Schaefer, D. H.; Strong, J. P., III
1977-01-01
An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.
Fitch, J.P.
1999-07-06
An endoscope is disclosed which reduces the volume needed by the imaging part, maintains the resolution of a wide diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, it allows a larger fraction of the volume to be used for non-imaging tools, which permits smaller incisions in surgical and diagnostic medical applications, thus producing less trauma to the patient, or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing. 7 figs.
Fitch, Joseph P.
1999-07-06
An endoscope which reduces the volume needed by the imaging part thereof, maintains the resolution of a wide diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, it allows a larger fraction of the volume to be used for non-imaging tools, which permits smaller incisions in surgical and diagnostic medical applications, thus producing less trauma to the patient, or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases the utility thereof. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing.
Application of LANDSAT data to the study of urban development in Brasilia
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Deoliveira, M. D. L. N.; Foresti, C.; Niero, M.; Parreira, E. M. D. M. F.
1984-01-01
The urban growth of Brasilia within the last ten years is analyzed with special emphasis on the utilization of remote sensing orbital data and automatic image processing. The urban spatial structure and the monitoring of its temporal changes were examined in a whole and dynamic way by the utilization of MSS-LANDSAT images for June (1973, 1978 and 1983). In order to aid data interpretation, a registration algorithm implemented in the Interactive Multispectral Image Analysis System (IMAGE-100) was utilized aiming at the overlap of multitemporal images. The utilization of suitable digital filters, combined with the images overlap, allowed a rapid identification of areas of possible urban growth and oriented the field work. The results obtained in this work permitted an evaluation of the urban growth of Brasilia, taking as reference the proposal stated for the construction of the city in the Pilot Plan elaborated by Lucio Costa.
Combining Digital Watermarking and Fingerprinting Techniques to Identify Copyrights for Color Images
Hsieh, Shang-Lin; Chen, Chun-Che; Shen, Wen-Shan
2014-01-01
This paper presents a copyright identification scheme for color images that takes advantage of the complementary nature of watermarking and fingerprinting. It utilizes an authentication logo and the extracted features of the host image to generate a fingerprint, which is then stored in a database and also embedded in the host image to produce a watermarked image. When a dispute over the copyright of a suspect image occurs, the image is first processed by watermarking. If the watermark can be retrieved from the suspect image, the copyright can then be confirmed; otherwise, the watermark then serves as the fingerprint and is processed by fingerprinting. If a match in the fingerprint database is found, then the suspect image will be considered a duplicated one. Because the proposed scheme utilizes both watermarking and fingerprinting, it is more robust than those that only adopt watermarking, and it can also obtain the preliminary result more quickly than those that only utilize fingerprinting. The experimental results show that when the watermarked image suffers slight attacks, watermarking alone is enough to identify the copyright. The results also show that when the watermarked image suffers heavy attacks that render watermarking incompetent, fingerprinting can successfully identify the copyright, hence demonstrating the effectiveness of the proposed scheme. PMID:25114966
Automatic building identification under bomb damage conditions
NASA Astrophysics Data System (ADS)
Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II
2009-05-01
Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.
scikit-image: image processing in Python
Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony
2014-01-01
scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images
NASA Technical Reports Server (NTRS)
Sams, Clarence F.
2016-01-01
The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.
Automated inspection of hot steel slabs
Martin, R.J.
1985-12-24
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
Automated inspection of hot steel slabs
Martin, Ronald J.
1985-01-01
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.
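The validation idea, keeping only segmentation that the edge-detection and intensity-threshold processes both produce, amounts to a logical AND of the two binary results. A sketch with scikit-image; the file name and thresholds are assumptions, and the patented real-time hardware is not modeled:

```python
import numpy as np
from skimage import filters, morphology

frame = np.load("slab_frame.npy")                 # hypothetical camera frame
edges = filters.sobel(frame) > 0.1                # process 1: edge detection (assumed threshold)
dark = frame < filters.threshold_otsu(frame)      # process 2: intensity threshold
# Dilate the thin edge contours slightly, then keep only segmentation
# produced by BOTH processes, mirroring the cross-validation in the patent.
agreed = morphology.binary_dilation(edges) & dark
```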
Electronic Photography at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack; Judge, Nancianne
1995-01-01
An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
Effect of an imaging-based streamlined electronic healthcare process on quality and costs.
Bui, Alex A T; Taira, Ricky K; Goldman, Dana; Dionisio, John David N; Aberle, Denise R; El-Saden, Suzie; Sayre, James; Rice, Thomas; Kangarloo, Hooshang
2004-01-01
A streamlined process of care supported by technology and imaging may be effective in managing the overall healthcare process and costs. This study examined the effect of an imaging-based electronic process of care on costs and rates of hospitalization, emergency room (ER) visits, specialist diagnostic referrals, and patient satisfaction. A healthcare process was implemented for an employer group, highlighting improved patient access to primary care plus routine use of imaging and teleconsultation with diagnostic specialists. An electronic infrastructure supported patient access to physicians and communication among healthcare providers. The employer group, a self-insured company, manages a healthcare plan for its employees and their dependents: 4,072 employees were enrolled in the test group, and 7,639 in the control group. Outcome measures for expenses and frequency of hospitalizations, ER visits, traditional specialist referrals, primary care visits, and imaging utilization rates were measured using claims data over 1 year. Homogeneity tests of proportions were performed with a chi-square statistic; mean differences were tested by two-sample t-tests. Patient satisfaction with access to healthcare was gauged using results from an independent firm. Overall per member/per month costs post-implementation were lower in the enrolled population ($126 vs $160), even though occurrence of chronic/expensive diseases was higher in the enrolled group (18.8% vs 12.2%). Lower per member/per month costs were seen for inpatient ($33.29 vs $35.59); specialist referrals ($21.36 vs $26.84); and ER visits ($3.68 vs $5.22). Moreover, the utilization rates for hospital admissions, ER visits, and traditional specialist referrals were significantly lower in the enrolled group, although primary care and imaging utilization were higher. Comparison to similar employer groups showed that the company's costs were lower than national averages ($119.24 vs $146.32), indicating that the observed result was not attributable to normalization effects. Patient satisfaction with access to healthcare ranked in the top 21st percentile. A streamlined healthcare process supported by technology resulted in higher patient satisfaction and cost savings despite improved access to primary care and higher utilization of imaging.
Image Understanding Architecture
1991-09-01
An architecture to support real-time, knowledge-based image understanding, and the software support environment that will be needed to utilize it... [Report keywords: Image Understanding Architecture, Knowledge-Based Vision, AI Real-Time Computer Vision, Software Simulator, Parallel Processor.] In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers...
Photogrammetry Toolbox Reference Manual
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Burner, Alpheus W.
2014-01-01
Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
Span graphics display utilities handbook, first edition
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Green, J. L.; Newman, R.
1985-01-01
The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images are representative of satellite observations or theoretical modeling, and whether graphic images are of device dependent or independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.
ERIC Educational Resources Information Center
Gordon, Roger L., Ed.
This guide to multi-image program production for practitioners describes the process from the beginning stages through final presentation, examines historical perspectives, theory, and research in multi-image, and provides examples of successful utilization. Ten chapters focus on the following topics: (1) definition of multi-image field and…
Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C
2015-01-01
The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits but reduce the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that has reduced overall computed tomographic (CT) examination volume by more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing with comparison with benchmarks, applying modern CT technology, and revising CT protocols has led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.
Processing Images of Craters for Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.
2009-01-01
A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
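Steps 1 and 4 of the algorithm have direct library analogues. A sketch using scikit-image's Canny detector and ellipse model on a single-crater patch; grouping, refinement, and quality scoring are omitted, and the file name is hypothetical:

```python
import numpy as np
from skimage import feature, measure

img = np.load("crater_patch.npy")                  # hypothetical patch around one crater
edges = feature.canny(img, sigma=2)                # step 1: edge detection
# EllipseModel expects (x, y) coordinates, so flip the (row, col) pairs.
points = np.column_stack(np.nonzero(edges))[:, ::-1].astype(float)
ellipse = measure.EllipseModel()
if ellipse.estimate(points):                       # step 4: fit ellipse to the edge group
    xc, yc, a, b, theta = ellipse.params
    print(f"center=({xc:.1f}, {yc:.1f}), axes=({a:.1f}, {b:.1f})")
```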
Electrophoresis gel image processing and analysis using the KODAK 1D software.
Pizzonia, J
2001-06-01
The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
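The standard-curve estimation the article compares can be expressed in a few lines: fit log10(mass) of the ladder bands against migration distance, then invert for an unknown band. All numbers below are illustrative, not from the article:

```python
import numpy as np

# Migration distances (mm) and masses (kDa) of a hypothetical protein ladder.
std_distance = np.array([12.0, 20.5, 31.0, 44.0, 58.0])
std_mass = np.array([116.0, 66.2, 45.0, 25.0, 14.4])

# Linear fit of log10(mass) vs. distance: the classical standard curve.
slope, intercept = np.polyfit(std_distance, np.log10(std_mass), 1)

unknown_distance = 37.0                            # measured unknown band (mm)
print(f"estimated mass: {10 ** (slope * unknown_distance + intercept):.1f} kDa")
```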
Application of Neutron Tomography in Culture Heritage research.
Mongy, T
2014-02-01
Neutron Tomography (NT) investigation of Culture Heritage (CH) objects is an efficient tool for understanding the culture of ancient civilizations. Neutron imaging (NI) is a state-of-the-art non-destructive tool in the area of CH and plays an important role in modern archeology. The NI technology can be widely utilized in the field of elemental analysis. At the Egypt Second Research Reactor (ETRR-2), a collimated Neutron Radiography (NR) beam is employed for neutron imaging purposes. A digital CCD camera is utilized for recording the beam attenuation in the sample. This aids the detection of hidden objects and the characterization of material properties. Research activity can be extended to use computer software for quantitative neutron measurement. Development of image processing algorithms can be used to obtain high quality images. In this work, a full description of ETRR-2 was introduced together with its up-to-date neutron imaging system. Tomographic investigation of a clay forged artifact representing a CH object was studied by neutron imaging methods in order to obtain hidden information and highlight some attractive quantitative measurements. Computer software was used for image processing and enhancement. The Astra Image 3.0 Pro software was also employed for high-precision measurements and image enhancement using advanced algorithms. This work increased the effective utilization of the ETRR-2 Neutron Radiography/Tomography (NR/T) technique in Culture Heritage activities. © 2013 Elsevier Ltd. All rights reserved.
Moridis, George J.; Oldenburg, Curtis M.
2001-01-01
Disclosed are processes for monitoring and control of underground contamination, which involve the application of ferrofluids. Two broad uses of ferrofluids are described: (1) to control liquid movement by the application of strong external magnetic fields; and (2) to image liquids by standard geophysical methods.
Use of laser range finders and range image analysis in automated assembly tasks
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1990-01-01
The effects of filtering processes on range images are studied and the performance of two different laser range mappers is evaluated. Median filtering was utilized to remove noise from the range images. First and second order derivatives are then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients which describe 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm was developed to compare the performance of the two laser range mappers based upon the range depth information of surfaces generated by each of the mappers. Furthermore, an approach based on 2-D analytic geometry is also proposed which serves as a basis for the recognition of regular 3-D geometric objects.
Targeting Cell Surface Proteins in Molecular Photoacoustic Imaging to Detect Ovarian Cancer Early
2013-07-01
biology, nanotechnology, and imaging technology, molecular imaging utilizes specific probes as contrast agents to visualize cellular processes at the...This reagent was covalently coupled to the oligosaccharides attached to polypeptide side-chains of extracellular membrane proteins on living cells...website. The normal tissue gene expression profile dataset was modified and processed as described by Fang (8) and mean intensities and standard
NASA Astrophysics Data System (ADS)
Dostal, P.; Krasula, L.; Klima, M.
2012-06-01
Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance in terms of perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest. The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM based on image structural information, VIF based on the information that the human brain can ideally gain from the reference image, or FSIM utilizing low-level features to assign different importance to each location in the image. But still none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis and that new criteria are needed.
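For reference, the conventional (non-ROI) metrics discussed above are readily computed, for example with scikit-image:

```python
from skimage import data, filters, metrics
from skimage.util import img_as_float

ref = img_as_float(data.astronaut())                    # bundled RGB test image
test = filters.gaussian(ref, sigma=1, channel_axis=-1)  # mildly degraded copy

ssim = metrics.structural_similarity(ref, test, channel_axis=-1, data_range=1.0)
psnr = metrics.peak_signal_noise_ratio(ref, test, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```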
Search systems and computer-implemented search methods
Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.
2017-03-07
Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.
Search systems and computer-implemented search methods
Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.
2015-12-22
Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.
Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans
2010-01-01
The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between different structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection are found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.
Autofluorescence detection and imaging of bladder cancer realized through a cystoscope
Demos, Stavros G [Livermore, CA; deVere White, Ralph W [Sacramento, CA
2007-08-14
Near infrared imaging using elastic light scattering and tissue autofluorescence and utilizing interior examination techniques and equipment are explored for medical applications. The approach involves imaging using cross-polarized elastic light scattering and/or tissue autofluorescence in the Near Infra-Red (NIR) coupled with image processing and inter-image operations to differentiate human tissue components.
Novel medical image enhancement algorithms
NASA Astrophysics Data System (ADS)
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
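A minimal implementation of the alpha-trimmed mean filter named as the first algorithm's backbone; the sharpening and contrast-enhancement stages built on top of it are omitted:

```python
import numpy as np
from scipy.ndimage import generic_filter

def alpha_trimmed_mean(window, alpha=0.2):
    """Mean of a window after discarding the alpha fraction of extreme values."""
    w = np.sort(window)
    trim = int(alpha * w.size / 2)                 # values cut from each end
    return w[trim : w.size - trim].mean()

img = np.random.default_rng(0).normal(0.5, 0.1, (128, 128))
filtered = generic_filter(img, alpha_trimmed_mean, size=5)
```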
NASA Technical Reports Server (NTRS)
1982-01-01
A gallery of what might be called the "Best of HCMM" imagery is presented. These 100 images, consisting mainly of Day-VIS, Day-IR, and Night-IR scenes plus a few thermal inertia images, were selected from the collection accrued in the Missions Utilization Office (Code 902) at the Goddard Space Flight Center. They were selected because of both their pictorial quality and their information or interest content. Nearly all the images are the computer processed and contrast stretched products routinely produced by the image processing facility at GSFC. Several LANDSAT images, special HCMM images made by HCMM investigators, and maps round out the input.
Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias
2010-01-01
This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. Rapid-Miner, an open source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied for the detection of salient objects in Obstructive Nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
ELAS: A powerful, general purpose image processing package
NASA Technical Reports Server (NTRS)
Walters, David; Rickman, Douglas
1991-01-01
ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.
Advanced Image Processing for NASA Applications
NASA Technical Reports Server (NTRS)
LeMoign, Jacqueline
2007-01-01
The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.
A Metadata-Based Approach for Analyzing UAV Datasets for Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Dhanda, A.; Remondino, F.; Santana Quintero, M.
2018-05-01
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
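The end-overlap computation reduces to comparing the along-track spacing between exposures with the ground footprint implied by the metadata. A back-of-envelope sketch under a flat-terrain, nadir-looking pinhole-camera assumption; all numbers are examples, not from the paper:

```python
def end_overlap(altitude_m, focal_mm, sensor_h_mm, spacing_m):
    """Fraction of along-track footprint shared by consecutive exposures."""
    footprint_m = altitude_m * sensor_h_mm / focal_mm   # ground coverage of one frame
    return max(0.0, 1.0 - spacing_m / footprint_m)

# 60 m AGL, 8.8 mm focal length, 8.8 mm sensor height, 20 m between exposures.
print(f"end overlap: {end_overlap(60, 8.8, 8.8, 20):.0%}")   # -> 67%
```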
NASA Astrophysics Data System (ADS)
Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack
1995-12-01
Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, roadsigns, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers `within a single target' can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle for common roadway objects can be established. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.
On Applications of Pyramid Doubly Joint Bilateral Filtering in Dense Disparity Propagation
NASA Astrophysics Data System (ADS)
Abadpour, Arash
2014-06-01
Stereopsis is the basis for numerous tasks in machine vision, robotics, and 3D data acquisition and processing. In order for the subsequent algorithms to function properly, it is important that an affordable method exists that, given a pair of images taken by two cameras, can produce a representation of disparity or depth. This topic has been an active research field since the early days of work on image processing problems, and a rich literature is available on the topic. Joint bilateral filters have been recently proposed as a more affordable alternative to anisotropic diffusion. This class of image operators utilizes correlation in multiple modalities for purposes such as interpolation and upscaling. In this work, we develop the application of bilateral filtering for converting a large set of sparse disparity measurements into a dense disparity map. This paper develops novel methods for utilizing bilateral filters in joint, pyramid, and doubly joint settings, for purposes including missing value estimation and upscaling. We utilize images of natural and man-made scenes in order to exhibit the possibilities offered through the use of pyramid doubly joint bilateral filtering for stereopsis.
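A naive sketch of the joint bilateral idea applied to disparity densification: missing disparities are filled from valid neighbors, weighted by a spatial Gaussian and a range Gaussian evaluated on the guide intensity image. This is an illustrative single-scale version, not the paper's pyramid doubly joint formulation:

```python
import numpy as np

def joint_bilateral_fill(disp, valid, guide, radius=4, sigma_s=2.0, sigma_r=0.1):
    """Fill invalid disparities with a weighted mean of valid neighbors.
    Weights combine spatial closeness and similarity in the guide image."""
    h, w = guide.shape
    out = disp.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            if valid[y, x]:
                continue                            # keep measured disparities
            sl = (slice(y - radius, y + radius + 1),
                  slice(x - radius, x + radius + 1))
            rng_w = np.exp(-(guide[sl] - guide[y, x]) ** 2 / (2 * sigma_r ** 2))
            weights = spatial * rng_w * valid[sl]   # only valid pixels contribute
            if weights.sum() > 0:
                out[y, x] = (weights * disp[sl]).sum() / weights.sum()
    return out
```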
NASA Technical Reports Server (NTRS)
Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.
1986-01-01
A complete documentation of the software developed in the Communication and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech type signals are included. Also, programs for zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
The Hubble Legacy Archive: Data Processing in the Era of AstroDrizzle
NASA Astrophysics Data System (ADS)
Strolger, Louis-Gregory; Hubble Legacy Archive Team, The Hubble Source Catalog Team
2015-01-01
The Hubble Legacy Archive (HLA) expands the utility of Hubble Space Telescope wide-field imaging data by providing high-level composite images and source lists, perusable and immediately available online. The latest HLA data release (DR8.0) marks a fundamental change in how these image combinations are produced, using DrizzlePac tools and Astrodrizzle to reduce geometric distortion and provide improved source catalogs for all publicly available data. We detail the HLA data processing and source list schemas, what products are newly updated and available for WFC3 and ACS, and how these data products are further utilized in the production of the Hubble Source Catalog. We also discuss plans for future development, including updates to WFPC2 products and field mosaics.
Error-proofing test system of industrial components based on image processing
NASA Astrophysics Data System (ADS)
Huang, Ying; Huang, Tao
2018-05-01
Due to the improvement of modern industrial standards and accuracy requirements, conventional manual testing fails to satisfy the test standards of enterprises, so digital image processing techniques should be utilized to gather and analyze the information on the surface of industrial components for testing purposes. To test the installation of parts on an automotive engine, this paper employs a camera to capture images of the components. After these images are preprocessed, including denoising, an image processing algorithm relying on the flood fill algorithm is used to test the installation of the components. The results prove that this system has very high test accuracy.
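A sketch of a flood-fill-based presence check in the spirit of the system described; scikit-image stands in for whatever library the authors used, and the seed location, tolerance, and size threshold are assumptions:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import flood

frame = np.load("engine_frame.npy")             # hypothetical camera frame
denoised = gaussian(frame, sigma=1)             # preprocessing: denoising
seed = (240, 320)                               # assumed pixel inside the part region
region = flood(denoised, seed, tolerance=0.05)  # grow region of similar intensity
print("component present:", region.sum() > 2000)  # size check vs. expected area
```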
Advances in interpretation of subsurface processes with time-lapse electrical imaging
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.
2015-01-01
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Advances in interpretation of subsurface processes with time-lapse electrical imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.
2015-03-15
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Quantitative imaging methods in osteoporosis.
Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G
2016-12-01
Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. K.
FTOOLS, a highly modular collection of over 110 utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. The collection of utilities provides both generic processing and analysis utilities and utilities specific to high energy astrophysics data sets used for the ASCA, ROSAT, GRO, and XTE missions. A core set of FTOOLS providing support for generic FITS data processing, FITS image analysis and timing analysis can easily be split out of the full software package for users not needing the high energy astrophysics mission utilities. The FTOOLS software package is designed to be both compatible with IRAF and completely stand-alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self-documenting through the IRAF help facility and a stand-alone help task. Software is written in ANSI C and Fortran to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
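FTOOLS itself is a command-line suite, but the kind of single-task operation it describes, extracting rows from a FITS table, can be sketched in Python with astropy. This is an illustration of the task, not FTOOLS code; the file name and ENERGY column are hypothetical.

```python
from astropy.io import fits

# Open an event file and keep only rows above an energy cut
# ("events.fits" and the ENERGY column are hypothetical examples).
with fits.open("events.fits") as hdul:
    table = hdul[1].data                      # first extension holds the binary table
    selected = table[table["ENERGY"] > 2.0]   # boolean row selection
    fits.BinTableHDU(data=selected).writeto("filtered.fits", overwrite=True)
```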
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern sciences that involve digital imaging examination and interpretation. Processed images however, often need to be correlated with the original image, in order to ensure that the resulting image fulfills its purpose. Aside from the visual examination, which is mandatory, image quality indices (such as correlation coefficient, entropy and others) are very useful, when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested in both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. Indices were found to be in agreement with visual examination and statistical observations.
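The abstract names the correlation coefficient and entropy among the eight indices. As an illustration (not the author's Matlab code), two of these indices might be computed as follows:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two images of equal shape."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```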
Sadowski, Franklin G.; Covington, Steven J.
1987-01-01
Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear powerplant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.
Pre-processing SAR image stream to facilitate compression for transport on bandwidth-limited-link
Rush, Bobby G.; Riley, Robert
2015-09-29
Pre-processing is applied to a raw VideoSAR (or similar near-video-rate) product to transform the image frame sequence into one that more closely resembles the type of product for which conventional video codecs are designed, while sufficiently maintaining the utility and visual quality of the product delivered by the codec.
Application of LANDSAT data and digital image processing. [Ruhr Valley, Germany
NASA Technical Reports Server (NTRS)
Bodechtel, J. (Principal Investigator)
1978-01-01
The author has identified the following significant results. Based on LANDSAT 1 and 2 data, applications in the fields of coal mining, lignite exploration, and thematic mapping in geology are demonstrated. The hybrid image processing system, its software, and its utilization for educational purposes is described. A pre-operational European satellite is proposed.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete large amounts of computation in few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls overall system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor are also developed.
Inverse scattering and refraction corrected reflection for breast cancer imaging
NASA Astrophysics Data System (ADS)
Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John
2010-03-01
Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D, attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected by scanning the entire breast in a 33 °C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including: a cyst, fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for diagnosis of breast disease.
Topological anomaly detection performance with multispectral polarimetric imagery
NASA Astrophysics Data System (ADS)
Gartley, M. G.; Basener, W.
2009-05-01
Polarimetric imaging has demonstrated utility for increasing contrast of manmade targets above natural background clutter. Manual detection of manmade targets in multispectral polarimetric imagery can be challenging and a subjective process for large datasets. Analyst exploitation may be improved utilizing conventional anomaly detection algorithms such as RX. In this paper we examine the performance of a relatively new approach to anomaly detection, which leverages topology theory, applied to spectral polarimetric imagery. Detection results for manmade targets embedded in a complex natural background will be presented for both the RX and Topological Anomaly Detection (TAD) approaches. We will also present detailed results examining detection sensitivities relative to: (1) the number of spectral bands, (2) utilization of Stokes images versus intensity images, and (3) airborne versus spaceborne measurements.
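The RX algorithm referenced above is the standard benchmark for anomaly detection. As a point of reference, a minimal global RX detector (Mahalanobis distance of each pixel spectrum from the scene mean) can be sketched as follows; this is a generic illustration, not the paper's implementation.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean. `cube` is (rows, cols, bands)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # per-pixel quadratic form
    return scores.reshape(h, w)
```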
NASA Astrophysics Data System (ADS)
Saini, Surender Singh; Sardana, Harish Kumar; Pattnaik, Shyam Sundar
2017-06-01
Conventional image editing software combined with other techniques is not only difficult to apply to an image but also permits a user to perform only basic functions one at a time. However, image processing algorithms and photogrammetric systems have been developed in the recent past for real-time pattern recognition applications. A graphical user interface (GUI) is developed which can perform multiple functions simultaneously for the analysis and estimation of geometric distortion in an image with reference to the corresponding distorted image. The GUI measures, records, and visualizes the performance metric of the X/Y coordinates of one image over the other. The keys and icons provided in the utility extract the coordinates of the distortion-free reference image and of the image with geometric distortion. The error between these corresponding points gives the measure of distortion and is also used to evaluate the correction parameters for image distortion. As the GUI minimizes human interference in the process of geometric correction, its execution requires only the use of the keys and icons provided in the utility; this technique gives swift and accurate results compared to other conventional methods for the measurement of the X/Y coordinates of an image.
Particle sizing in rocket motor studies utilizing hologram image processing
NASA Technical Reports Server (NTRS)
Netzer, David; Powers, John
1987-01-01
A technique of obtaining particle size information from holograms of combustion products is described. The holograms are obtained with a pulsed ruby laser through windows in a combustion chamber. The reconstruction is done with a krypton laser with the real image being viewed through a microscope. The particle size information is measured with a Quantimet 720 image processing system which can discriminate various features and perform measurements of the portions of interest in the image. Various problems that arise in the technique are discussed, especially those that are a consequence of the speckle due to the diffuse illumination used in the recording process.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
NASA Astrophysics Data System (ADS)
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which shows high utility in color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing technology can fully exploit multi-core CPU hardware resources and significantly improve the efficiency of spectrum reconstruction processing. If the technology is applied to parallel computing on workstations with more cores, it will be possible to complete real-time data processing for a Fourier transform imaging spectrometer with a single computer.
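The paper parallelizes the reconstruction with OpenMP; as a language-neutral illustration of the same idea, distributing per-row FFT reconstruction across cores, here is a hypothetical Python sketch using multiprocessing. The apodization window and data shapes are assumptions, not taken from the paper.

```python
import numpy as np
from multiprocessing import Pool

def reconstruct_row(interferograms):
    """Spectrum reconstruction for one image row: apodize each pixel's
    interferogram and take the magnitude of its FFT."""
    window = np.hanning(interferograms.shape[-1])
    return np.abs(np.fft.rfft(interferograms * window, axis=-1))

if __name__ == "__main__":
    # Hypothetical data cube: rows x cols x OPD samples.
    cube = np.random.rand(512, 512, 256)
    with Pool() as pool:                       # one task per image row
        spectra = np.stack(pool.map(reconstruct_row, list(cube)))
    print(spectra.shape)
```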
An Approach for Stitching Satellite Images in a Bigdata Mapreduce Framework
NASA Astrophysics Data System (ADS)
Sarı, H.; Eken, S.; Sayar, A.
2017-11-01
In this study we present a two-step map/reduce framework to stitch satellite mosaic images. The proposed system enables recognition and extraction of objects whose parts fall in separate satellite mosaic images. However, this is a time- and resource-consuming process. The major aim of the study is to improve the performance of the image stitching process by utilizing a big data framework. To realize this, we first convert the images into bitmaps (first mapper) and then into String formats in the form of 255s and 0s (second mapper), and finally find the best possible matching position of the images with a reduce function.
Retrieval of radiology reports citing critical findings with disease-specific customization.
Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, Ip; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin
2012-01-01
Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. This paper 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications - an open-source toolkit, A Nearly New Information Extraction system (ANNIE), and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) - to illustrate the varying levels of customization required for different disease entities, and 2) evaluates each application's performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases: pulmonary nodule, pneumothorax, and pulmonary embolus. Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90 and 0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks.
Retrieval of Radiology Reports Citing Critical Findings with Disease-Specific Customization
Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, Ip; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin
2012-01-01
Background: Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. Purpose: This paper 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications – an open-source toolkit, A Nearly New Information Extraction system (ANNIE), and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) – to illustrate the varying levels of customization required for different disease entities, and 2) evaluates each application’s performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases: pulmonary nodule, pneumothorax, and pulmonary embolus. Results: Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90 and 0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Conclusion: Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks. PMID:22934127
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on the fidelity and authenticity of their features. For evaluation of IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually adopt a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large amounts of IR images. Then, we present the evaluation model of simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, the experiments illustrate that our proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2≤γ≤0.3, which provides an effective data augmentation method for real IR images.
Applications of process improvement techniques to improve workflow in abdominal imaging.
Tamm, Eric Peter
2016-03-01
Major changes in the management and funding of healthcare are underway that will markedly change the way radiology studies will be reimbursed. The result will be the need to deliver radiology services in a highly efficient manner while maintaining quality. The science of process improvement provides a practical approach to improve the processes utilized in radiology. This article will address in a step-by-step manner how to implement process improvement techniques to improve workflow in abdominal imaging.
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images before and after surgical operations is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing such as denoising, multiscale image analysis, edge detection, and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the paper introduces two strategies: utilizing the efficient explicit method, due to its advantages, together with a software technique to effectively solve the anisotropic diffusion filter, which is otherwise mathematically unstable; and proposing an automatic stopping criterion that takes into consideration only the input image, as opposed to other stopping criteria, while accounting for denoised image quality, ease of use, and run time. Various medical images are examined to confirm the claim.
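For reference, the explicit Perona-Malik scheme that such a denoising algorithm builds on can be sketched as follows. This is a generic illustration with an assumed exponential edge-stopping function, not the paper's stabilized variant or its stopping criterion.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Explicit Perona-Malik diffusion; dt <= 0.25 keeps the 2-D
    explicit scheme stable."""
    u = img.astype(float)
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    for _ in range(n_iter):
        # One-sided differences toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```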
Incorporating digital imaging into dental hygiene practice.
Saxe, M J; West, D J
1997-01-01
The objective of this paper is to describe digital imaging technology: available modalities, scientific imaging process, advantages and limitations, and applications to dental hygiene practice. Advances in technology have created innovative imaging modalities for intraoral radiography that eliminate film as the traditional image receptor. Digital imaging generates instantaneous radiographic images on a display monitor following exposure. Advantages include lower patient exposure per image and elimination of film processing. Digital imaging enhances diagnostic capabilities and, therefore, treatment decisions by the oral healthcare provider. Utilization of digital imaging technology for intraoral radiography will advance the practice of dental hygiene. Although spatial resolution is inferior to conventional film, digital imaging provides adequate resolution to diagnose oral diseases. Dental hygienists must evaluate new technologies in radiography to continue providing quality care while reducing patient exposure to ionizing radiation.
NASA Astrophysics Data System (ADS)
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki
2017-02-01
Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes which preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels are clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local-statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
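The 6-D JM-OCT clustering described above is custom, but the underlying idea, grouping pixels by spatial proximity plus signal similarity and then using each cluster as a statistics kernel, can be illustrated with the widely used SLIC algorithm in scikit-image. Parameter values below are arbitrary examples, and the input image is synthetic.

```python
import numpy as np
from skimage.segmentation import slic

# Hypothetical OCT intensity image in [0, 1].
img = np.random.rand(256, 256)

# SLIC clusters pixels by spatial proximity and signal similarity;
# channel_axis=None marks the image as grayscale.
labels = slic(img, n_segments=400, compactness=0.1, channel_axis=None)

# Example local statistic: per-superpixel mean, i.e. adaptive speckle reduction.
means = np.zeros_like(img)
for lab in np.unique(labels):
    means[labels == lab] = img[labels == lab].mean()
```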
NASA Technical Reports Server (NTRS)
1975-01-01
The image data processing system (IDAPS) developed to satisfy the image processing requirements of the Skylab S-056 experiment is described. The S-056 experiment was designed to obtain high-resolution photographs of the sun in the far ultraviolet, or soft X-ray, portion of the electromagnetic spectrum. Thirty-five thousand photographs were obtained during the three flights of the program; and, faced with such a massive volume of imagery, the designers of the experiment decided to develop a computer-based system to reduce the image processing workload. The purpose of the IDAPS User Manual is to give the IDAPS user the information and instructions necessary to utilize the system effectively.
Autonomous characterization of plastic-bonded explosives
NASA Astrophysics Data System (ADS)
Linder, Kim Dalton; DeRego, Paul; Gomez, Antonio; Baumgart, Chris
2006-08-01
Plastic-Bonded Explosives (PBXs) are a newer generation of explosive compositions developed at Los Alamos National Laboratory (LANL). Understanding the micromechanical behavior of these materials is critical. The size of the crystal particles and porosity within the PBX influences their shock sensitivity. Current methods to characterize the prominent structural characteristics include manual examination by scientists and attempts to use commercially available image processing packages. Both methods are time consuming and tedious. LANL personnel, recognizing this as a manually intensive process, have worked with the Kansas City Plant / Kirtland Operations to develop a system which utilizes image processing and pattern recognition techniques to characterize PBX material. System hardware consists of a CCD camera, zoom lens, two-dimensional, motorized stage, and coaxial, cross-polarized light. System integration of this hardware with the custom software is at the core of the machine vision system. Fundamental processing steps involve capturing images from the PBX specimen, and extraction of void, crystal, and binder regions. For crystal extraction, a Quadtree decomposition segmentation technique is employed. Benefits of this system include: (1) reduction of the overall characterization time; (2) a process which is quantifiable and repeatable; (3) utilization of personnel for intelligent review rather than manual processing; and (4) significantly enhanced characterization accuracy.
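The quadtree decomposition segmentation mentioned above can be illustrated with a minimal sketch: recursively split a square block until it is homogeneous under some criterion. The intensity-range criterion and threshold below are assumptions for illustration, not the actual rule used by the LANL/Kansas City Plant system.

```python
import numpy as np

def quadtree(img, r0, c0, size, thresh, leaves):
    """Recursively split a square block until its intensity range
    falls below `thresh`; collect homogeneous leaf blocks."""
    block = img[r0:r0 + size, c0:c0 + size]
    if size == 1 or block.max() - block.min() <= thresh:
        leaves.append((r0, c0, size))
        return
    half = size // 2
    for dr in (0, half):
        for dc in (0, half):
            quadtree(img, r0 + dr, c0 + dc, half, thresh, leaves)

leaves = []
img = np.random.rand(64, 64)   # hypothetical micrograph tile (power-of-two size)
quadtree(img, 0, 0, 64, 0.3, leaves)
```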
Use of One Time Pad Algorithm for Bit Plane Security Improvement
NASA Astrophysics Data System (ADS)
Suhardi; Suwilo, Saib; Budhiarti Nababan, Erna
2017-12-01
BPCS (Bit-Plane Complexity Segmentation) is a steganography technique that exploits a characteristic of human vision: changes in noise-like binary patterns in an image cannot be perceived. The technique inserts a message by replacing high-complexity, noise-like bit-plane regions with bits of the secret message. Because the message bits are stored at predictable locations, the extraction process can be carried out easily by rearranging the characters previously stored in the noise-like regions of the image, so the secret message is easily discovered by others. In this research, the process of replacing bit-plane regions with message bits is modified by utilizing the One Time Pad cryptographic technique, which aims to increase the security of the bit plane. In the tests performed, combining the One Time Pad cryptographic algorithm with the BPCS steganography technique works well for inserting messages into the vessel image, although insertion into low-dimensional images performs poorly. The original image and the stego-image look identical, and the method produces good-quality images with a mean PSNR above 30 dB when a large-dimensional image is used as the cover.
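The One Time Pad step is simply an XOR of the message with a random pad of equal length before the ciphertext is embedded in the noise-like bit-plane regions. A minimal sketch (illustrative, not the authors' code):

```python
import os

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """XOR the message with a one-time random pad of equal length;
    the ciphertext (not the plaintext) is what gets embedded."""
    pad = os.urandom(len(message))
    cipher = bytes(m ^ p for m, p in zip(message, pad))
    return cipher, pad

def otp_decrypt(cipher: bytes, pad: bytes) -> bytes:
    """XOR with the same pad restores the original message."""
    return bytes(c ^ p for c, p in zip(cipher, pad))
```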
Tools for a Document Image Utility.
ERIC Educational Resources Information Center
Krishnamoorthy, M.; And Others
1993-01-01
Describes a project conducted at Rensselaer Polytechnic Institute (New York) that developed methods for automatically subdividing pages from technical journals into smaller semantic units for transmission, display, and further processing in an electronic environment. Topics discussed include optical scanning and image compression, digital image…
A synoptic description of coal basins via image processing
NASA Technical Reports Server (NTRS)
Farrell, K. W., Jr.; Wherry, D. B.
1978-01-01
An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.
NASA Technical Reports Server (NTRS)
Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.
1987-01-01
A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using Thematic Mapper (TM) imagery. The TM images of brackish marsh sites were processed, and data were tabulated on spatial parameters from TM images of the salt marsh sites. The Fisheries Image Processing Systems (FIPS) was used to analyze the TM scene. Activities concentrated on improving the structure of the model and developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.
Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition
NASA Technical Reports Server (NTRS)
Downie, John D.; Tucker, Deanne (Technical Monitor)
1994-01-01
Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
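The key point, that a pointwise logarithm converts multiplicative speckle into additive noise, follows directly from log(R·n) = log R + log n. A small numerical illustration (the exponential intensity model for fully developed speckle is a standard assumption, not taken from the paper):

```python
import numpy as np

# Multiplicative speckle model: observed = reflectance * noise.
rng = np.random.default_rng(0)
reflectance = np.full((128, 128), 0.5)
speckle = rng.exponential(scale=1.0, size=reflectance.shape)  # fully developed speckle
observed = reflectance * speckle

# Pointwise logarithm turns the multiplicative noise into additive noise:
# log(observed) = log(reflectance) + log(speckle)
log_img = np.log(observed + 1e-12)   # epsilon guards against log(0)
```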
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables the joint analysis of multiple sensor modalities. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
An improved non-uniformity correction algorithm and its hardware implementation on FPGA
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong
2017-09-01
The non-uniformity of infrared focal plane arrays (IRFPA) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting. In addition, few effective hardware platforms have been proposed to implement the corresponding NUC algorithms. Thus, this paper proposes an improved neural-network-based NUC algorithm using a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to obtain an accurate desired image and decrease artificial ghosting. Then, a projection-based motion detection algorithm determines whether the correction coefficients should be updated or not; in this way the problem of image blurring is overcome. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design occupies fewer logic elements in the FPGA and spends fewer clock cycles to process one frame of an image.
Recent advances in high-performance fluorescent and bioluminescent RNA imaging probes.
Xia, Yuqiong; Zhang, Ruili; Wang, Zhongliang; Tian, Jie; Chen, Xiaoyuan
2017-05-22
RNA plays an important role in life processes. Imaging of messenger RNAs (mRNAs) and micro-RNAs (miRNAs) not only allows us to learn the formation and transcription of mRNAs and the biogenesis of miRNAs involved in various life processes, but also helps in detecting cancer. High-performance RNA imaging probes greatly expand our view of life processes and enhance the cancer detection accuracy. In this review, we summarize the state-of-the-art high-performance RNA imaging probes, including exogenous probes that can image RNA sequences with special modification and endogeneous probes that can directly image endogenous RNAs without special treatment. For each probe, we review its structure and imaging principle in detail. Finally, we summarize the application of mRNA and miRNA imaging probes in studying life processes as well as in detecting cancer. By correlating the structures and principles of various probes with their practical uses, we compare different RNA imaging probes and offer guidance for better utilization of the current imaging probes and the future design of higher-performance RNA imaging probes.
Some utilities to help produce Rich Text Files from Stata.
Gillman, Matthew S
Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document.
Wavelet imaging cleaning method for atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.
2002-07-01
We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
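The paper's pixel-selection procedure is specific to Cherenkov camera images, but the general wavelet-cleaning idea, thresholding detail coefficients against a noise estimate and reconstructing, can be sketched with PyWavelets. The wavelet choice, decomposition level, and threshold factor below are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_clean(image, wavelet="db2", level=2, k=3.0):
    """Generic wavelet denoising: hard-threshold detail coefficients
    at k times a robust noise estimate, then reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Robust sigma estimate from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    cleaned = [coeffs[0]]
    for details in coeffs[1:]:
        cleaned.append(tuple(pywt.threshold(d, k * sigma, mode="hard")
                             for d in details))
    return pywt.waverec2(cleaned, wavelet)
```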
High-contrast multilayer imaging of biological organisms through dark-field digital refocusing.
Faridian, Ahmad; Pedrini, Giancarlo; Osten, Wolfgang
2013-08-01
We have developed an imaging system to extract high contrast images from different layers of biological organisms. Utilizing a digital holographic approach, the system works without scanning through layers of the specimen. In dark-field illumination, scattered light has the main contribution in image formation, but in the case of coherent illumination, this creates a strong speckle noise that reduces the image quality. To remove this restriction, the specimen has been illuminated with various speckle-fields and a hologram has been recorded for each speckle-field. Each hologram has been analyzed separately and the corresponding intensity image has been reconstructed. The final image has been derived by averaging over the reconstructed images. A correlation approach has been utilized to determine the number of speckle-fields required to achieve a desired contrast and image quality. The reconstructed intensity images in different object layers are shown for different sea urchin larvae. Two multimedia files are attached to illustrate the process of digital focusing.
Sabbatini, Amber K; Merck, Lisa H; Froemming, Adam T; Vaughan, William; Brown, Michael D; Hess, Erik P; Applegate, Kimberly E; Comfere, Nneka I
2015-12-01
Patient-centered emergency diagnostic imaging relies on efficient communication and multispecialty care coordination to ensure optimal imaging utilization. The construct of the emergency diagnostic imaging care coordination cycle with three main phases (pretest, test, and posttest) provides a useful framework to evaluate care coordination in patient-centered emergency diagnostic imaging. This article summarizes findings reached during the patient-centered outcomes session of the 2015 Academic Emergency Medicine consensus conference "Diagnostic Imaging in the Emergency Department: A Research Agenda to Optimize Utilization." The primary objective was to develop a research agenda focused on 1) defining component parts of the emergency diagnostic imaging care coordination process, 2) identifying gaps in communication that affect emergency diagnostic imaging, and 3) defining optimal methods of communication and multidisciplinary care coordination that ensure patient-centered emergency diagnostic imaging. Prioritized research questions provided the framework to define a research agenda for multidisciplinary care coordination in emergency diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
Generation and assessment of turntable SAR data for the support of ATR development
NASA Astrophysics Data System (ADS)
Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby
1998-10-01
Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high resolution two-dimensional images of radar targets under controlled conditions for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two- dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image domain resampling, and a computationally-intensive geometric matched filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.
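The simplest of the three algorithms, 2-D DFT image formation, amounts to a Fourier transform of the phase-history matrix. A minimal sketch (illustrative; the near-field geometric correction discussed in the paper would be applied afterwards):

```python
import numpy as np

def isar_image(phase_history):
    """Simple 2-D DFT image formation: range-Doppler compression of a
    (pulses x frequency-samples) phase-history matrix. Near-field
    geometric correction (e.g. image-domain resampling) would follow."""
    img = np.fft.fftshift(np.fft.fft2(phase_history))
    return 20 * np.log10(np.abs(img) + 1e-12)   # magnitude in dB
```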
Paskevich, Valerie F.
1992-01-01
The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the increased power and reduced cost of major workstations, and the need to process side-scan data in the field, a need was identified for an image processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.
An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis
NASA Astrophysics Data System (ADS)
Kim, Yongmin; Alexander, Thomas
1986-06-01
In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.
Critical object recognition in millimeter-wave images with robustness to rotation and scale.
Mohammadzade, Hoda; Ghojogh, Benyamin; Faezi, Sina; Shabany, Mahdi
2017-06-01
Locating critical objects is crucial in various security applications and industries. For example, in security applications, such as in airports, these objects might be hidden or covered under shields or secret sheaths. Millimeter-wave images can be utilized to discover and recognize the critical objects out of the hidden cases without any health risk due to their non-ionizing features. However, millimeter-wave images usually have waves in and around the detected objects, making object recognition difficult. Thus, regular image processing and classification methods cannot be used for these images and additional pre-processings and classification methods should be introduced. This paper proposes a novel pre-processing method for canceling rotation and scale using principal component analysis. In addition, a two-layer classification method is introduced and utilized for recognition. Moreover, a large dataset of millimeter-wave images is collected and created for experiments. Experimental results show that a typical classification method such as support vector machines can recognize 45.5% of a type of critical objects at 34.2% false alarm rate (FAR), which is a drastically poor recognition. The same method within the proposed recognition framework achieves 92.9% recognition rate at 0.43% FAR, which indicates a highly significant improvement. The significant contribution of this work is to introduce a new method for analyzing millimeter-wave images based on machine vision and learning approaches, which is not yet widely noted in the field of millimeter-wave image analysis.
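The paper's PCA-based pre-processing cancels rotation and scale before classification. One generic way to do this (an illustration, not the authors' exact procedure) is to rotate an object's pixel coordinates onto their principal axes and normalize their spread:

```python
import numpy as np

def normalize_pose(points):
    """Cancel rotation and scale of a 2-D object given its pixel
    coordinates (N x 2): rotate onto principal axes via PCA, then
    scale to unit RMS radius."""
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    rotated = centered @ vecs                  # principal axes -> coordinate axes
    return rotated / np.sqrt((rotated ** 2).sum(axis=1).mean())
```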
UWGSP7: a real-time optical imaging workstation
NASA Astrophysics Data System (ADS)
Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.
1995-04-01
With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments, with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illuminating a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into a subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. To accommodate different input devices, the camera interface circuitry is designed on a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board using a direct-connect port which provides a 66 Mbytes/s channel independent of the system bus. The image processing board utilizes the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, providing the UWGSP7 with the capability to perform real-time image processing functions such as median filtering, convolution, and contrast enhancement. This processing power allows interactive analysis of experiments, as compared to the current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.
Many-core computing for space-based stereoscopic imaging
NASA Astrophysics Data System (ADS)
McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry
The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
Redman, Joseph S; Natarajan, Yamini; Hou, Jason K; Wang, Jingqi; Hanif, Muzammil; Feng, Hua; Kramer, Jennifer R; Desiderio, Roxanne; Xu, Hua; El-Serag, Hashem B; Kanwal, Fasiha
2017-10-01
Natural language processing is a powerful technique of machine learning capable of maximizing data extraction from complex electronic medical records. We utilized this technique to develop algorithms capable of "reading" full-text radiology reports to accurately identify the presence of fatty liver disease. Abdominal ultrasound, computerized tomography, and magnetic resonance imaging reports were retrieved from the Veterans Affairs Corporate Data Warehouse from a random national sample of 652 patients. Radiographic fatty liver disease was determined by manual review by two physicians and verified with an expert radiologist. A split validation method was utilized for algorithm development. For all three imaging modalities, the algorithms could identify fatty liver disease with >90% recall and precision, with F-measures >90%. These algorithms could be used to rapidly screen patient records to establish a large cohort to facilitate epidemiological and clinical studies and examine the clinic course and outcomes of patients with radiographic hepatic steatosis.
NASA Technical Reports Server (NTRS)
Blacksberg, Jordana (Inventor); Hoenk, Michael Eugene (Inventor); Nikzad, Shouleh (Inventor)
2010-01-01
A method is provided for growing a back surface contact on an imaging detector used in conjunction with back illumination. In operation, an imaging detector is provided. Additionally, a back surface contact (e.g. a delta-doped layer, etc.) is grown on the imaging detector utilizing a process that is performed at a temperature less than 450 degrees Celsius.
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the image used as input. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment, and it could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
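A digital analogue of the optical operation, adding a slightly displaced negative replica to the positive image so that uniform regions cancel and edges remain, can be sketched as follows. The displacement axis sets the edge-orientation selectivity; this numeric emulation is illustrative, not the DMD optics.

```python
import numpy as np

def dmd_edge_enhance(img, shift=1):
    """Add a slightly displaced negative replica to the positive image;
    uniform regions cancel and edges survive (per colour channel)."""
    img = img.astype(float)
    negative = img.max() - img
    shifted = np.roll(negative, shift, axis=1)   # small lateral displacement
    return np.abs(img + shifted - img.max())     # reduces to |img - shifted img|
```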
Enhanced image capture through fusion
NASA Technical Reports Server (NTRS)
Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.
1993-01-01
Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
Thin client (web browser)-based collaboration for medical imaging and web-enabled data.
Le, Tuong Huu; Malhi, Nadeem
2002-01-01
Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing for the client to remain thin and more accessible.
Some utilities to help produce Rich Text Files from Stata
Gillman, Matthew S.
2018-01-01
Producing RTF files from Stata can be difficult and somewhat cryptic. Utilities are introduced to simplify this process; one builds up a table row-by-row, another inserts a PNG image file into an RTF document, and the others start and finish the RTF document. PMID:29731697
Comparison of laser Doppler and laser speckle contrast imaging using a concurrent processing system
NASA Astrophysics Data System (ADS)
Sun, Shen; Hayes-Gill, Barrie R.; He, Diwei; Zhu, Yiqun; Huynh, Nam T.; Morgan, Stephen P.
2016-08-01
Full field laser Doppler imaging (LDI) and single exposure laser speckle contrast imaging (LSCI) are directly compared using a novel instrument that can concurrently image blood flow with both LDI and LSCI signal processing. Incorporating a commercial CMOS camera chip and a field programmable gate array (FPGA), the flow images of LDI and the contrast maps of LSCI are simultaneously processed from the same detected optical signals. The comparison was carried out by imaging a rotating diffuser. LDI has a linear response to velocity. In contrast, LSCI is exposure-time dependent and does not provide a linear response in the presence of static speckle. It is also demonstrated that LDI and LSCI can be related through a power law that depends on the exposure time of LSCI.
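For reference, single-exposure LSCI reduces to computing a local contrast map K = σ/μ over small windows of the raw speckle image. A sketch under the assumption of a 7×7 sliding window (window size is illustrative, not the paper's setting):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Local speckle contrast K = sigma/mean over a win x win window."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, win)
    mean_sq = uniform_filter(raw * raw, win)
    var = np.maximum(mean_sq - mean * mean, 0.0)   # clamp numerical negatives
    return np.sqrt(var) / (mean + 1e-12)
```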
NASA Astrophysics Data System (ADS)
Wang, N.; Yang, R.
2018-04-01
Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets, such as GF-1, GF-2 and ZY-3 imagery, have emerged in recent years; their panchromatic (PAN) resolutions are 2 m, 1 m and 2.1 m, and their multispectral (MS) resolutions are 8 m, 4 m and 5.8 m, respectively. Chinese HR satellite imagery can be downloaded free of charge for public welfare purposes, and local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of applications in government land law enforcement in Guangxi Autonomous Region. 66 counties in Guangxi Autonomous Region were selected for illegal land utilization spot extraction with fused Chinese HR images. The procedure contains: A. Defining illegal land utilization spot types. B. Data collection: GF-1, GF-2, and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015. C. Batch processing: HR images were batch preprocessed through an ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with the ArcGIS Geoprocessor (GP) model. F. Thematic mapping and surveying. Through analysing the results for 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspected illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.
Zhou, Ying-Qun; Chen, Shi-Lin; Zhao, Run-Huai; Xie, Cai-Xiang; Li, Ying
2008-04-01
Sustainable utilization and biodiversity protection of traditional Chinese medicine (TCM) are currently hotspots of TCM research, in which the choice of an appropriate method is one of the primary problems confronted. This paper describes the technical system, equipment and image processing of low-altitude remote sensing, and analyzes its future application in the sustainable utilization of Chinese herbal medicine.
Yang, Lei; Lu, Jun; Dai, Ming; Ren, Li-Jie; Liu, Wei-Zong; Li, Zhen-Zhou; Gong, Xue-Hao
2016-10-06
An ultrasonic image speckle noise removal method using a total least squares model is proposed and applied to images of cardiovascular structures such as the carotid artery. On the basis of the least squares principle, the related principle of the minimum square method is applied to the cardiac ultrasound image speckle noise removal process to establish a total least squares model; orthogonal projection transformation is applied to the output of the model, and denoising of the cardiac ultrasound image speckle noise is realized. Experimental results show that the improved algorithm can greatly improve the resolution of the image and meet the needs of clinical diagnosis and treatment of the cardiovascular system of the head and neck. Furthermore, the success in imaging carotid arteries has strong implications for neurological complications such as stroke.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadowski, F.G.; Covington, S.J.
1987-01-01
Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The results of the data processing and analysis illustrate the spectral and spatial capabilities of the two sensor systems and provide information about the severity and duration of the events occurring at the power plant site.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
A new method is proposed in this paper for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (hue, saturation, value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells are segmented into dots as large as possible, with equal distances between the dots, in the corrected image. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex backgrounds and complete the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots in a regular pattern and excludes the edges and the bright spots. This method could be utilized for fast, accurate and automatic dot extraction from PSi microarray reflected-light images.
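The paper's tilt correction reasons about the geometric relation between the initial external rectangle and the MBR; a simpler stand-in using OpenCV's minimum-area rectangle conveys the idea. Note the rotation angle here comes straight from `cv2.minAreaRect`, not from the authors' derivation, and the input is assumed to be a uint8 binarized image.

```python
import cv2

def tilt_correct(binary):
    """Rotate a binarized microarray image so its MBR becomes axis-aligned."""
    pts = cv2.findNonZero(binary)                 # expects a uint8 binary image
    (cx, cy), _, angle = cv2.minAreaRect(pts)     # MBR center and tilt angle
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(binary, rot, binary.shape[1::-1])
```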
Concurrent Image Processing Executive (CIPE). Volume 1: Design overview
NASA Technical Reports Server (NTRS)
Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1990-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
In this study, we developed a simple image processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) to help the user visualize abdominopelvic tumors on exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis. It was decided that the program should load the screen-captured PET/CT images and produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window with a slider control. The slider control enables the user to view the images and select the one that seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of abdominopelvic tumors on prediuretic PET/CT images.
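SSR itself is easy to state: many identical threshold detectors, each corrupted by independent noise, are averaged, and the noise level (what the slider in such a tool would sweep) tunes how much sub-threshold detail is recovered. A minimal sketch assuming 8-bit input (detector count and threshold are illustrative, not the MATLAB tool's values):

```python
import numpy as np

def ssr(img, noise_sigma, n_detectors=32, threshold=0.5):
    """Suprathreshold stochastic resonance: average N noisy 1-bit quantizers."""
    x = img.astype(np.float64) / 255.0
    acc = np.zeros_like(x)
    for _ in range(n_detectors):
        acc += (x + np.random.normal(0.0, noise_sigma, x.shape)) > threshold
    return acc / n_detectors   # a GUI slider would sweep noise_sigma
```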
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in failures of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted for the image simulation, with mean squared error (MSE) taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as gaining high resolution images, which has a promising perspective of industrial application.
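The restoration side of such a design reduces to Wiener deconvolution. A minimal frequency-domain sketch, assuming the PSF is known, centered, and the same size as the image, with k approximating the noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Wiener filter: W = H* / (|H|^2 + k), applied in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # centered PSF -> transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))
```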
Ethical implications of digital images for teaching and learning purposes: an integrative review.
Kornhaber, Rachel; Betihavas, Vasiliki; Baber, Rodney J
2015-01-01
Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphone and comparable technologies has become a vital component in the teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation. Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning. A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to the English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included. The search strategy identified 514 papers, of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice. The assimilation of evidence in this review suggests that there is value for health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the processes of obtaining, storing, and using such media for teaching purposes. Disparity was also highlighted in relation to policy and guideline identification and development in clinical practice. Therefore, the implementation of policy to guide practice requires further research.
Digital image transformation and rectification of spacecraft and radar images
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1985-01-01
The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.
Building an outpatient imaging center: A case study at genesis healthcare system, part 2.
Yanci, Jim
2006-01-01
In the second of 2 parts, this article will focus on process improvement projects utilizing a case study at Genesis HealthCare System located in Zanesville, OH. Operational efficiency is a key step in developing a freestanding diagnostic imaging center. The process improvement projects began with an Expert Improvement Session (EIS) on the scheduling process. An EIS is a facilitated meeting that can last anywhere from 3 hours to 2 days. Its intention is to take a group of people involved with the problem or operational process and work to understand current failures or breakdowns in the process. Recommendations are jointly developed to overcome any current deficiencies, and a work plan is structured to create ownership over the changes. A total of 11 EIS sessions occurred over the course of this project, covering 5 sections: scheduling/telephone call process, pre-registration, verification/pre-certification, MRI throughput, and CT throughput. A single example of a project from the process improvement efforts follows. All of the process improvement projects utilized a quasi-"DMAIC" methodology (Define, Measure, Analyze, Improve, and Control).
Point target detection utilizing super-resolution strategy for infrared scanning oversampling system
NASA Astrophysics Data System (ADS)
Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei
2017-11-01
To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information, which contributes to target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system is shuffled into a whole oversampled image for post-processing, but the aliasing between neighboring pixels leads to image degradation with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to achieve the preservation and aggregation of target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment.
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; Xu, Ruqing; Fuoss, Paul H; Hruszkewycz, Stephan O
2016-09-01
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
Optical coherence tomography for embryonic imaging: a review
Raghunathan, Raksha; Singh, Manmohan; Dickinson, Mary E.; Larin, Kirill V.
2016-01-01
Embryogenesis is a highly complex and dynamic process, and its visualization is crucial for understanding basic physiological processes during development and for identifying and assessing possible defects, malformations, and diseases. While traditional imaging modalities, such as ultrasound biomicroscopy, micro-magnetic resonance imaging, and micro-computed tomography, have long been adapted for embryonic imaging, these techniques generally have limitations in their speed, spatial resolution, and contrast to capture processes such as cardiodynamics during embryogenesis. Optical coherence tomography (OCT) is a noninvasive imaging modality with micrometer-scale spatial resolution and imaging depth up to a few millimeters in tissue. OCT has bridged the gap between ultrahigh resolution imaging techniques with limited imaging depth like confocal microscopy and modalities, such as ultrasound sonography, which have deeper penetration but poorer spatial resolution. Moreover, the noninvasive nature of OCT has enabled live imaging of embryos without any external contrast agents. We review how OCT has been utilized to study developing embryos and also discuss advances in techniques used in conjunction with OCT to understand embryonic development. PMID:27228503
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high-bulk-tissue-motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
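The SV computation at the heart of such pipelines is a per-pixel variance across a gate of N registered structural frames. A sketch in NumPy rather than the authors' GPU kernels, assuming the frames are already subpixel-registered:

```python
import numpy as np

def speckle_variance(frames):
    """Per-pixel interframe variance over a gate of N frames, shape (N, H, W)."""
    stack = np.asarray(frames, dtype=np.float64)
    return np.var(stack, axis=0)   # high variance marks moving scatterers (vessels)
```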
USDA-ARS?s Scientific Manuscript database
The U. S. Department of Agriculture, Agricultural Research Service has been developing a method and system to detect fecal contamination on processed poultry carcasses with hyperspectral and multispectral imaging systems. The patented method utilizes a three step approach to contaminant detection. S...
Research in remote sensing of agriculture, earth resources, and man's environment
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1975-01-01
Progress is reported for several projects involving the utilization of LANDSAT remote sensing capabilities. Areas under study include crop inventory, crop identification, crop yield prediction, forest resources evaluation, land resources evaluation and soil classification. Numerical methods for image processing are discussed, particularly those for image enhancement and analysis.
Gunn, Martin L; Marin, Jennifer R; Mills, Angela M; Chong, Suzanne T; Froemming, Adam T; Johnson, Jamlik O; Kumaravel, Manickam; Sodickson, Aaron D
2016-08-01
In May 2015, the Academic Emergency Medicine consensus conference "Diagnostic imaging in the emergency department: a research agenda to optimize utilization" was held. The goal of the conference was to develop a high-priority research agenda regarding emergency diagnostic imaging on which to base future research. In addition to representatives from the Society of Academic Emergency Medicine, the multidisciplinary conference included members of several radiology organizations: American Society for Emergency Radiology, Radiological Society of North America, the American College of Radiology, and the American Association of Physicists in Medicine. The specific aims of the conference were to (1) understand the current state of evidence regarding emergency department (ED) diagnostic imaging utilization and identify key opportunities, limitations, and gaps in knowledge; (2) develop a consensus-driven research agenda emphasizing priorities and opportunities for research in ED diagnostic imaging; and (3) explore specific funding mechanisms available to facilitate research in ED diagnostic imaging. Through a multistep consensus process, participants developed targeted research questions for future research in six content areas within emergency diagnostic imaging: clinical decision rules; use of administrative data; patient-centered outcomes research; training, education, and competency; knowledge translation and barriers to imaging optimization; and comparative effectiveness research in alternatives to traditional computed tomography use.
NASA Technical Reports Server (NTRS)
Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.
1984-01-01
The problem of displaying information on the seafloor morphology is attacked by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.
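The shaded-relief rendering step can be sketched with the standard hillshade formula applied to the gridded depths; the illumination azimuth and altitude below are free parameters, not values from the study.

```python
import numpy as np

def hillshade(grid, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief image of a gridded depth array via the hillshade formula."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    dy, dx = np.gradient(grid.astype(np.float64))
    slope = 0.5 * np.pi - np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shade = (np.sin(alt) * np.sin(slope)
             + np.cos(alt) * np.cos(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)
```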
Advanced Imaging Optics Utilizing Wavefront Coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
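The effect of the cubic phase plate can be previewed numerically: a pupil carrying the phase α(x³ + y³) yields a PSF that is nearly invariant with defocus, which is what makes later deconvolution tractable. A sketch in normalized pupil coordinates (α is illustrative, not a value from the report):

```python
import numpy as np

def cubic_phase_psf(n=256, alpha=30.0):
    """PSF of a circular pupil with a cubic phase mask, phi = alpha*(x^3 + y^3)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= 1.0) * np.exp(1j * alpha * (X**3 + Y**3))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()
```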
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technology based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images through an actual Hadoop service system.
Infrared-thermography imaging system multiapplications for manufacturing
NASA Astrophysics Data System (ADS)
Stern, Sharon A.
1990-03-01
Imaging systems technology has traditionally been utilized for diagnosing structural envelope or insulation problems in the general thermographic community. Industrially, new applications for thermal imaging technology have been developed in predictive/preventive maintenance and product monitoring procedures at Eastman Kodak Company, the largest photographic manufacturer in the world. In the manufacturing processes used at Eastman Kodak Company, new applications for thermal imaging include: (1) fluid transfer line insulation, (2) web coating drying uniformity, (3) web slitter knives, (4) heating/cooling coils, (5) overheated tail bearings, and (6) electrical phase imbalance. The substantial cost benefits gained from these applications of infrared thermography substantiate the practicality of this approach and indicate the desirability of researching further applications.
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used an MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of volumetric imaging and quantitative assessment. The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
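The MLEM step mentioned above has a compact multiplicative form. A generic sketch for a linear forward model with a dense system matrix A and measured counts y (this is the textbook update, not the authors' surface-reconstruction code):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM: x <- x * (A^T (y / Ax)) / (A^T 1), elementwise."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0) + 1e-12      # column sums, i.e. A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + 1e-12)          # measured over forward-projected
        x *= (A.T @ ratio) / sensitivity     # multiplicative update
    return x
```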
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required that exploit computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Radar image and data fusion for natural hazards characterisation
Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong
2010-01-01
Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.
Rose Bengal Photothrombosis by Confocal Optical Imaging In Vivo: A Model of Single Vessel Stroke.
Talley Watts, Lora; Zheng, Wei; Garling, R Justin; Frohlich, Victoria C; Lechleiter, James Donald
2015-06-23
In vivo imaging techniques have increased in utilization due to recent advances in imaging dyes and optical technologies, allowing for the ability to image cellular events in an intact animal. Additionally, the ability to induce physiological disease states such as stroke in vivo increases its utility. The technique described herein allows for physiological assessment of cellular responses within the CNS following a stroke and can be adapted for other pathological conditions being studied. The technique presented uses laser excitation of the photosensitive dye Rose Bengal in vivo to induce a focal ischemic event in a single blood vessel. The video protocol demonstrates the preparation of a thin-skulled cranial window over the somatosensory cortex in a mouse for the induction of a Rose Bengal photothrombotic event, keeping injury to the underlying dura mater and brain at a minimum. Surgical preparation is initially performed under a dissecting microscope with a custom-made surgical/imaging platform, which is then transferred to a confocal microscope equipped with an inverted objective adaptor. Representative images acquired utilizing this protocol are presented as well as time-lapse sequences of stroke induction. This technique is powerful in that the same area can be imaged repeatedly on subsequent days, facilitating longitudinal in vivo studies of pathological processes following stroke.
NASA Astrophysics Data System (ADS)
Gupta, Shubhank; Panda, Aditi; Naskar, Ruchira; Mishra, Dinesh Kumar; Pal, Snehanshu
2017-11-01
Steels are alloys of iron and carbon, widely used in construction and other applications. The evolution of steel microstructure through various heat treatment processes is an important factor in controlling the properties and performance of steel. Extensive experimentation has been performed to enhance the properties of steel by customizing heat treatment processes. However, experimental analyses are always associated with high resource requirements in terms of cost and time. As an alternative solution, we propose an image processing-based technique for the refinement of raw plain carbon steel microstructure images into a digital form usable in experiments related to heat treatment processes of steel in diverse applications. The proposed work follows the conventional steps practiced by materials engineers in the manual refinement of steel images, and it appropriately utilizes basic image processing techniques (including filtering, segmentation, opening, and clustering) to automate the whole process. The proposed refinement of steel microstructure images aims to enable computer-aided simulations of the heat treatment of plain carbon steel in a timely and cost-efficient manner; hence it is beneficial for the materials and metallurgy industry. Our experimental results prove the efficiency and effectiveness of the proposed technique.
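The refinement chain the paper names (filtering, segmentation, opening) maps naturally onto scikit-image primitives. A minimal sketch with illustrative parameter values, not the authors' tuned pipeline:

```python
from skimage import filters, morphology

def refine_micrograph(gray):
    """Smooth, Otsu-threshold, and morphologically clean a steel micrograph."""
    smooth = filters.gaussian(gray, sigma=1.0)
    binary = smooth > filters.threshold_otsu(smooth)
    opened = morphology.opening(binary, morphology.disk(2))
    return morphology.remove_small_objects(opened, min_size=64)
```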
Furuta, Akihiro; Onishi, Hideo; Nakamoto, Kenta
This study aimed to develop a realistic striatal digital brain (SDB) phantom and to assess the effect of the ventricle on the specific binding ratio (SBR) in 123I-FP-CIT SPECT imaging. The SDB phantom was constructed from four segments (striatum, ventricle, brain parenchyma, and skull bone) using the percentile method and other image processing on T2-weighted MR images. The reference image was converted into 128×128 matrices to align the MR images with the SPECT images. The process image was reconstructed from projection data sets generated from the reference images with additive blurring, attenuation, scatter, and statistical noise. The SDB phantom was evaluated to determine the accuracy of the calculated SBR and the effect of ventricular counts on the SBR in the reference and process images. We developed and investigated the utility of the SDB phantom for 123I-FP-CIT SPECT clinical studies. The true SBR value matched the SBR calculated from the reference and process images. The SBR was underestimated by 58.0% with ventricular counts in the reference image, and by 162% with ventricular counts in the process images. The SDB phantom provides an extremely convenient tool for discovering basic properties of 123I-FP-CIT SPECT clinical study images. The results suggest that the SBR is susceptible to the ventricle.
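For context, the SBR in DaT imaging is conventionally computed from mean counts in the striatal volume and a reference region, which is why counts spilling in from the ventricle bias it. A one-line sketch of the conventional formula (a simplified stand-in, not this paper's exact computation):

```python
import numpy as np

def specific_binding_ratio(striatal_counts, reference_counts):
    """Conventional SBR = (mean striatal - mean reference) / mean reference."""
    c_s = np.mean(striatal_counts)
    c_r = np.mean(reference_counts)   # ventricular spill-in here distorts the ratio
    return (c_s - c_r) / c_r
```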
Sajn, Luka; Kukar, Matjaž
2011-12-01
The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since the evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level that represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on the cost effectiveness of tests.
Anthropometric body measurements based on multi-view stereo image reconstruction.
Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui
2013-01-01
Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
Overview of geostationary ocean color imager (GOCI) and GOCI data processing system (GDPS)
NASA Astrophysics Data System (ADS)
Ryu, Joo-Hyung; Han, Hee-Jeong; Cho, Seongick; Park, Young-Je; Ahn, Yu-Hwan
2012-09-01
GOCI, the world's first geostationary ocean color satellite, provides images with a spatial resolution of 500 m at hourly intervals up to 8 times a day, allowing observations of short-term changes in the Northeast Asian region. The GOCI Data Processing System (GDPS), a specialized data processing software for GOCI, was developed for real-time generation of various products. This paper describes GOCI characteristics and GDPS workflow/products, so as to enable the efficient utilization of GOCI. To provide quality images and data, atmospheric correction and data analysis algorithms must be improved through continuous Cal/Val. GOCI-II will be developed by 2018 to facilitate in-depth studies on geostationary ocean color satellites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-05-01
Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.
Clustering-based spot segmentation of cDNA microarray images.
Uslan, Volkan; Bucak, Ihsan Ömür
2010-01-01
Microarrays are utilized because they provide useful information about thousands of gene expressions simultaneously. In this study, the segmentation step of microarray image processing has been implemented. Clustering-based methods, fuzzy c-means and k-means, have been applied for the segmentation step, which separates the spots from the background. The experiments show that fuzzy c-means segmented the spots of the microarray image more accurately than k-means.
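For a single spot patch, the two-cluster case reduces to grouping pixel intensities and calling the brighter cluster the spot. A k-means sketch using scikit-learn (illustrative only, since the study's own implementation is not given):

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_spot(patch):
    """Two-cluster k-means on intensities; the brighter cluster is the spot."""
    flat = patch.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(flat)
    means = [flat[labels == k].mean() for k in (0, 1)]
    return (labels == int(np.argmax(means))).reshape(patch.shape)
```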
Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX
2007-05-17
including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication
Anima: Modular Workflow System for Comprehensive Image Data Analysis
Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa
2014-01-01
Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programing language independence, batch processing, easily customized data processing, interoperability with other software via application programing interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is a fully open source and available with documentation at www.anduril.org/anima. PMID:25126541
Associative Memory In A Phase Conjugate Resonator Cavity Utilizing A Hologram
NASA Astrophysics Data System (ADS)
Owechko, Y.; Marom, E.; Soffer, B. H.; Dunning, G.
1987-01-01
The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions [1]. Various associative processors have been proposed that use electronic or optical means. Optical schemes [2-7], in particular those based on holographic principles [3,6,7], are well suited to associative processing because of their high parallelism and information throughput. Previous workers [8] demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.
A real time mobile-based face recognition with fisherface methods
NASA Astrophysics Data System (ADS)
Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.
2018-03-01
Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from a picture sent to the system. By utilizing this face recognition technology, the process of learning people's identities among students in a university becomes simpler. With this technology, a student won't need to browse the student directory on the university's server site and look for the person with certain facial traits. To reach this goal, the face recognition application uses image processing methods consisting of two phases: a pre-processing phase and a recognition phase. In the pre-processing phase, the system processes the input image into the best image for the recognition phase. The purpose of this pre-processing phase is to reduce noise and increase signal in the image. Next, in the recognition phase, we use the Fisherface method, chosen because of its advantage in coping with limited data. From our experiments, the accuracy of face recognition using Fisherface is 90%.
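The Fisherface method is, in essence, PCA for dimensionality reduction followed by Fisher's linear discriminant. A sketch of that pipeline using scikit-learn as a stand-in for the authors' mobile implementation; the training arrays named below are hypothetical, with flattened face crops as rows.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def fisherface_model(n_components=50):
    """Fisherface pipeline: PCA to avoid singular scatter matrices, then LDA."""
    return make_pipeline(PCA(n_components=n_components),
                         LinearDiscriminantAnalysis())

# model = fisherface_model()
# model.fit(train_faces_flat, train_labels)   # hypothetical arrays
# predictions = model.predict(test_faces_flat)
```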
Optical correlators for automated rendezvous and capture
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1991-01-01
The paper begins with a description of optical correlation. In this process, the propagation physics of coherent light is used to process images and extract information. The processed image is operated on as an area, rather than as a collection of points. An essentially instantaneous convolution is performed on that image to provide the sensory data. In this process, an image is sensed and encoded onto a coherent wavefront, and the propagation is arranged to create a bright spot where the image matches a model of the desired object. The brightness of the spot provides an indication of the degree of resemblance of the viewed image to the model, and the location of the bright spot provides pointing information. The process can be utilized for AR&C to achieve the capability to identify objects among known reference types, estimate the object's location and orientation, and interact with the control system. System characteristics (speed, robustness, accuracy, small form factors) are adequate to meet most requirements. The correlator exploits the fact that Bosons and Fermions pass through each other. Since the image source is input as an electronic data set, conventional imagers can be used. In systems where the image is input directly, the correlating element must be at the sensing location.
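Digitally, the matched-filter correlation this describes is a conjugate product in the Fourier domain. A sketch that mimics the optical output plane (peak height indicates resemblance, peak position provides pointing), offered as a numerical analogue rather than the optical hardware:

```python
import numpy as np

def matched_filter_correlate(scene, template):
    """FFT cross-correlation of a scene against a stored reference."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)   # zero-pad template to scene size
    plane = np.abs(np.fft.ifft2(S * np.conj(T)))
    peak = np.unravel_index(np.argmax(plane), plane.shape)
    return peak, plane[peak]                   # pointing info, resemblance score
```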
Progressive cone beam CT dose control in image-guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan Hao; Cervino, Laura; Jiang, Steve B.
2013-06-15
Purpose: Cone beam CT (CBCT) in image-guided radiotherapy (IGRT) offers a tremendous advantage for treatment guidance. The associated imaging dose is a clinical concern. One unique feature of CBCT-based IGRT is that the same patient is repeatedly scanned during a treatment course, and the contents of CBCT images at different fractions are similar. The authors propose a progressive dose control (PDC) scheme to utilize this temporal correlation for imaging dose reduction. Methods: A dynamic CBCT scan protocol, as opposed to the static one in the current clinical practice, is proposed to gradually reduce the imaging dose in each treatment fraction. The CBCT image from each fraction is processed by a prior-image based nonlocal means (PINLM) module to enhance its quality. The increasing amount of prior information from previous CBCT images prevents degradation of image quality due to the reduced imaging dose. Two proof-of-principle experiments have been conducted using measured phantom data and Monte Carlo simulated patient data with deformation. Results: In the measured phantom case, utilizing a prior image acquired at 0.4 mAs, PINLM is able to improve the image quality of a CBCT acquired at 0.2 mAs by reducing the noise level from 34.95 to 12.45 HU. In the synthetic patient case, acceptable image quality is maintained at four consecutive fractions with gradually decreasing exposure levels of 0.4, 0.1, 0.07, and 0.05 mAs. When compared with the standard low-dose protocol of 0.4 mAs for each fraction, an overall imaging dose reduction of more than 60% is achieved. Conclusions: PINLM-PDC is able to reduce CBCT imaging dose in IGRT utilizing the temporal correlations among the sequence of CBCT images while maintaining the quality.
Robust watermark technique using masking and Hermite transform.
Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo
2016-01-01
The following paper evaluates a watermark algorithm designed for digital images that uses a perceptive mask and a normalization process, thus preventing detection by the human eye and ensuring robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image while incorporating human visual system properties; moreover, it is based on derivatives of Gaussian functions. The applied watermark represents information about the digital image's proprietor. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the structural similarity index average, the normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. This allowed us to identify how many bits in the watermark can be modified while still permitting adequate extraction.
An optical processor for object recognition and tracking
NASA Technical Reports Server (NTRS)
Sloan, J.; Udomkesmalee, S.
1987-01-01
The design and development of a miniaturized optical processor that performs real-time image correlation are described. The optical correlator utilizes the Vander Lugt matched spatial filter technique. The correlation output, a focused beam of light, is imaged onto a CMOS photodetector array. In addition to performing target recognition, the device also tracks the target. The hardware, composed of optical and electro-optical components, occupies only 590 cu cm of volume. A complete correlator system would also include an input imaging lens. This optical processing system is compact, rugged, requires only 3.5 watts of operating power, and weighs less than 3 kg. It represents a major achievement in miniaturizing optical processors. When considered as a special-purpose processing unit, it is an attractive alternative to conventional digital image recognition processing. It is conceivable that the combined technology of both optical and digital processing could result in a very advanced robot vision system.
Automatic tissue image segmentation based on image processing and deep learning
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in multimodality imaging, especially in fusing structural images from CT and MRI with functional images collected by optical or other novel imaging technologies. Image segmentation also provides a detailed structural description for quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of images in a deep learning way, and we introduced parallel computing. Such approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in grey matter or white matter. We demonstrated the great potential of such combined image processing and deep learning methods for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.
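The classical portion of the pipeline (enhancement, operators, morphometry) can be conveyed with a hedged sketch; the intensity thresholds `lo`/`hi` below are hypothetical per-tissue settings, not values from the paper:

```python
import numpy as np
from scipy import ndimage as ndi

def tissue_contour(image, lo, hi):
    """Threshold one tissue's intensity band, clean the mask with
    morphological opening/closing, and return its one-pixel contour."""
    mask = (image >= lo) & (image <= hi)
    mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))  # drop specks
    mask = ndi.binary_closing(mask, structure=np.ones((3, 3)))  # fill pinholes
    eroded = ndi.binary_erosion(mask)
    return mask ^ eroded   # boundary = mask minus its erosion
```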
MEMS scanning micromirror for optical coherence tomography.
Strathman, Matthew; Liu, Yunbo; Keeler, Ethan G; Song, Mingli; Baran, Utku; Xi, Jiefeng; Sun, Ming-Ting; Wang, Ruikang; Li, Xingde; Lin, Lih Y
2015-01-01
This paper describes an endoscopic-inspired imaging system employing a micro-electromechanical system (MEMS) micromirror scanner to achieve beam scanning for optical coherence tomography (OCT) imaging. Miniaturization of a scanning mirror using MEMS technology can allow a fully functional imaging probe to be contained in a package sufficiently small for utilization in a working channel of a standard gastroesophageal endoscope. This work employs advanced image processing techniques to enhance the images acquired using the MEMS scanner to correct non-idealities in mirror performance. The experimental results demonstrate the effectiveness of the proposed technique.
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is one of the Chinese traditional medicines, and researchers are paying increasing attention to it; however, large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted the image color histogram feature of herbal and zooid medicine of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization, and color space transformation. Then we extracted the color histogram feature and analyzed it with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy for Uygur medicine image classification was obtained by using the color histogram feature. This study should be helpful for content-based medical image retrieval of Xinjiang Uygur medicine.
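A minimal sketch of the feature stage, with Gaussian naive Bayes standing in for the Bayes discriminant analysis used in the study; the bin count and the training arrays are assumptions:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def color_histogram(img_rgb, bins=8):
    """Concatenated per-channel histogram, normalized to the pixel count."""
    feats = [np.histogram(img_rgb[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

# Hypothetical usage with lists of (image, label) training pairs:
# X = np.array([color_histogram(im) for im in images])
# clf = GaussianNB().fit(X, labels)                 # Bayes-style classifier
# pred = clf.predict(color_histogram(test_image)[None, :])
```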
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the resolution improvement needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. To substantiate this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms in restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.
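As an illustration of the region-of-interest idea, the sketch below runs a representative iterative restoration (plain Richardson-Lucy, not the paper's set-theoretic algorithms) only inside an extracted window, so the expensive iterations never touch the full large-format frame; the ROI coordinates and iteration count are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=25):
    """Plain Richardson-Lucy iteration (psf assumed normalized to sum 1)."""
    est = np.full_like(observed, observed.mean(), dtype=float)
    psf_m = psf[::-1, ::-1]                     # mirrored PSF
    for _ in range(n_iter):
        ratio = observed / (fftconvolve(est, psf, mode='same') + 1e-12)
        est *= fftconvolve(ratio, psf_m, mode='same')
    return est

def restore_roi(image, psf, roi, n_iter=25):
    """Run the costly iterations only on the extracted window."""
    r0, r1, c0, c1 = roi
    out = image.astype(float).copy()
    out[r0:r1, c0:c1] = richardson_lucy(out[r0:r1, c0:c1], psf, n_iter)
    return out
```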
A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection
NASA Astrophysics Data System (ADS)
Tomono, Akira; Iida, Muneo; Kobayashi, Yukio
1990-04-01
This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image, and dot-marks pasted on a human face, in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms an image including the regularly reflected component by placing a polarizing filter in front of CCD-1, or an image not including that component with no polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by the three CCDs. Through the experiment, it is shown that two kinds of subtraction operations between the three images output from the CCDs accentuate three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding, and gravity-position calculation of the feature points is possible.
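The subtraction-threshold-centroid chain lends itself to a compact sketch; the frame pairing, threshold value, and use of `scipy.ndimage` are assumptions, not the authors' hardware pipeline:

```python
import numpy as np
from scipy import ndimage as ndi

def feature_centroids(frame_a, frame_b, thresh):
    """Subtract two synchronously illuminated frames, threshold the
    high-S/N difference, and return the centroid (gravity position)
    of each accentuated feature point."""
    diff = frame_a.astype(float) - frame_b.astype(float)
    mask = diff > thresh                 # simple threshold: S/N is high
    labels, n = ndi.label(mask)          # each blob = one feature point
    return ndi.center_of_mass(mask, labels, range(1, n + 1))
```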
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, a current research hotspot in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and greatly aids image reconstruction, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients need to be specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute-maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.
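Since no NSCT implementation ships with standard libraries, a wavelet decomposition can stand in to illustrate the fusion rules: averaging the low-frequency band (in place of the adaptive energy weighting) and applying the absolute-maximum rule to the high-frequency bands. This sketch deliberately omits the compressed-sensing measurement and recovery steps:

```python
import numpy as np
import pywt

def fuse_multiresolution(img_a, img_b, wavelet='db2', level=2):
    """Wavelet-domain fusion as a stand-in for NSCT: average lowpass,
    absolute-maximum for detail coefficients."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    # Output may be a pixel larger than the input for odd sizes.
    return pywt.waverec2(fused, wavelet)
```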
Mehle, Andraž; Kitak, Domen; Podrekar, Gregor; Likar, Boštjan; Tomaževič, Dejan
2018-05-09
Agglomeration of pellets in fluidized bed coating processes is an undesirable phenomenon that affects the yield and quality of the product. Within the scope of PAT guidance, we present a system that utilizes visual imaging for in-line monitoring of the agglomeration degree. Seven pilot-scale Wurster coating processes were executed under various process conditions, providing a wide spectrum of process outcomes. Images of pellets were acquired during the coating processes in a contactless manner through an observation window of the coating apparatus. Efficient image analysis methods were developed for automatic recognition of discrete pellets and agglomerates in the acquired images. In-line obtained agglomeration degree trends revealed the agglomeration dynamics in distinct phases of the coating processes. We compared the in-line estimated agglomeration degree at the end point of each process to the results obtained by the off-line sieve analysis reference method. A strong positive correlation was obtained (coefficient of determination R² = 0.99), confirming the feasibility of the approach. The in-line estimated agglomeration degree enables early detection of agglomeration and provides means for timely interventions to keep it within an acceptable range. Copyright © 2018 Elsevier B.V. All rights reserved.
Variability in imaging utilization in U.S. pediatric hospitals.
Arnold, Ryan W; Graham, Dionne A; Melvin, Patrice R; Taylor, George A
2011-07-01
Use of medical imaging is under scrutiny because of rising costs and radiation exposure. We compare imaging utilization and costs across pediatric hospitals to determine their variability and potential determinants. Data were extracted from the Pediatric Health Information System (PHIS) database for all inpatient encounters from 40 U.S. children's hospitals. Imaging utilization and costs were compared by insurance type, geographical region, hospital size, severity of illness, length of stay, and type of imaging, all among specific diagnoses. The hospital with the highest utilization performed more than twice as many imaging studies per patient as the hospital with the lowest utilization. Similarly, imaging costs ranged from $154 to $671 per patient. The median imaging-utilization rate was 1.7 exams/patient on the ward and increased significantly in the PICU (11.8 exams/patient) and in the NICU (17.7 exams/patient; P < 0.001). Considerable variability in imaging utilization persisted despite adjustment for case mix index (CMI; range in variation 16.6-25%). We found a significant correlation between imaging utilization and both CMI and length of stay (P < 0.0001). However, only 36% of the variation in imaging utilization could be explained by CMI. Diagnostic imaging utilization and costs vary widely in pediatric hospitals.
NASA Astrophysics Data System (ADS)
Newman, Gregory A.; Commer, Michael
2009-07-01
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Swartz, Jordan; Koziatek, Christian; Theobald, Jason; Smith, Silas; Iturrate, Eduardo
2017-05-01
Testing for venous thromboembolism (VTE) is associated with cost and risk to patients (e.g., radiation). To assess the appropriateness of imaging utilization at the provider level, it is important to know that provider's diagnostic yield (the percentage of tests positive for the diagnostic entity of interest). However, determining diagnostic yield typically requires either time-consuming manual review of radiology reports or the use of complex and/or proprietary natural language processing (NLP) software. The objectives of this study were twofold: 1) to develop and implement a simple, user-configurable, and open-source NLP tool to classify radiology reports with high accuracy, and 2) to use the results of the tool to design a provider-specific VTE imaging dashboard, consisting of both utilization rate and diagnostic yield. Two physicians reviewed a training set of 400 lower extremity ultrasound (UTZ) and computed tomography pulmonary angiogram (CTPA) reports to understand the language used in VTE-positive and VTE-negative reports. The insights from this review informed the arguments to the five modifiable parameters of the NLP tool. A validation set of 2,000 studies was then independently classified by the reviewers and by the tool; the classifications were compared and the performance of the tool was calculated. The tool was highly accurate in classifying the presence and absence of VTE for both the UTZ (sensitivity 95.7%; 95% CI 91.5-99.8, specificity 100%; 95% CI 100-100) and CTPA reports (sensitivity 97.1%; 95% CI 94.3-99.9, specificity 98.6%; 95% CI 97.8-99.4). The diagnostic yield was then calculated at the individual provider level and the imaging dashboard was created. We have created a novel NLP tool designed for users without a background in computer programming, which has been used to classify venous thromboembolism reports with a high degree of accuracy. The tool is open-source and available for download at http://iturrate.com/simpleNLP. Results obtained using this tool can be applied to enhance quality by presenting information about utilization and yield to providers via an imaging dashboard. Copyright © 2017 Elsevier B.V. All rights reserved.
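The actual simpleNLP tool is available at the URL above; the toy classifier below merely illustrates the configurable keyword/regex idea and the performance metrics, with entirely hypothetical patterns:

```python
import re

NEGATIVE_PATTERNS = [   # hypothetical, user-configurable phrases
    r"no evidence of (dvt|deep venous thrombosis|pulmonary embol)",
    r"negative for (deep venous thrombosis|pulmonary embolism)",
]
POSITIVE_PATTERNS = [
    r"acute (dvt|deep venous thrombosis)",
    r"filling defect.*pulmonary arter",
]

def classify_report(text):
    """Toy rule-based classifier in the spirit of a user-configurable
    NLP tool (not the published implementation)."""
    t = text.lower()
    if any(re.search(p, t) for p in NEGATIVE_PATTERNS):
        return "VTE-negative"
    if any(re.search(p, t) for p in POSITIVE_PATTERNS):
        return "VTE-positive"
    return "indeterminate"   # route to manual review

def sensitivity_specificity(y_true, y_pred):
    # y_true entries are "pos"/"neg" from physician review.
    tp = sum(t == "pos" and p == "VTE-positive" for t, p in zip(y_true, y_pred))
    fn = sum(t == "pos" and p != "VTE-positive" for t, p in zip(y_true, y_pred))
    tn = sum(t == "neg" and p == "VTE-negative" for t, p in zip(y_true, y_pred))
    fp = sum(t == "neg" and p != "VTE-negative" for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```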
Applications of satellite image processing to the analysis of Amazonian cultural ecology
NASA Technical Reports Server (NTRS)
Behrens, Clifford A.
1991-01-01
This paper examines the application of satellite image processing towards identifying and comparing resource exploitation among indigenous Amazonian peoples. The use of statistical and heuristic procedures for developing land cover/land use classifications from Thematic Mapper satellite imagery will be discussed along with actual results from studies of relatively small (100 - 200 people) settlements. Preliminary research indicates that analysis of satellite imagery holds great potential for measuring agricultural intensification, comparing rates of tropical deforestation, and detecting changes in resource utilization patterns over time.
Safe patient handling in diagnostic imaging.
Murphey, Susan L
2010-01-01
Raising awareness of the risk to diagnostic imaging personnel from manually lifting, transferring, and repositioning patients is critical to improving workplace safety and staff utilization. The aging baby boomer generation and growing bariatric population exacerbate the problem. Also, legislative initiatives are increasing nationwide for hospitals to implement safe patient handling programs. A management process designed to improve working conditions through implementing ergonomic programs can reduce losses and improve productivity and patient care outcome measures for imaging departments.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.
1975-01-01
An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows significant impact due to the utilization and processing of ERTS CCT data.
Developments in Science and Technology.
1981-01-01
... in order to meet API's requirements for image processing, large database transfers, advanced graphic processing, and shared ... the use of DECnet software ... Description: a moored plant at an island site, with the electricity supplied by undersea cable to a shore utility grid. Because the primary objective was ...
Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †
Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi
2016-01-01
During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
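A single-step, whole-image version of the CCA mapping can be sketched with scikit-learn; the paper's method adds a second, patch-level step and its own training protocol, and the arrays and component count here are assumptions:

```python
from sklearn.cross_decomposition import CCA

def fit_thermal_to_visible(X_train, Y_train, n_components=32):
    """Learn correlated subspaces between paired, vectorized face images:
    X rows are thermal images, Y rows the corresponding visible images."""
    cca = CCA(n_components=n_components)
    cca.fit(X_train, Y_train)
    return cca

# Hypothetical usage (X, Y: (n_samples, n_pixels) float arrays):
# model = fit_thermal_to_visible(X, Y)
# visible_estimate = model.predict(thermal_face[None, :])  # global step only
```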
A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing
NASA Astrophysics Data System (ADS)
Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi
2018-06-01
This paper applies computer vision technology to fatigue condition monitoring of springs, and a new telescopic-characteristics test method based on image processing is proposed for the circuit breaker operating mechanism spring. A high-speed camera is utilized to capture spring movement image sequences when the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted, laying a foundation for subsequent spring force analysis and matching-state evaluation. Simulation tests at the experimental site show that this image analysis method avoids the complexity of traditional mechanical sensor installation and enables online monitoring and status assessment of the circuit breaker spring.
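One plausible realization of the image-matching step is normalized cross-correlation template matching on a marker attached to the spring; the sketch below (OpenCV, grayscale frames assumed) returns pixel-unit deformation and speed curves, with spatial calibration left to the user:

```python
import cv2
import numpy as np

def displacement_curve(frames, template, fps):
    """Track a marker through an image sequence by template matching
    and derive deformation-time and speed-time curves (pixel units)."""
    positions = []
    for frame in frames:
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)   # best-match top-left (x, y)
        positions.append(max_loc[1])            # vertical position
    y = np.asarray(positions, dtype=float)
    t = np.arange(len(frames)) / fps
    v = np.gradient(y - y[0], t)                # numerical speed-time curve
    return t, y - y[0], v
```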
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
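As one concrete instance of the geometric-PDE methods mentioned, a Perona-Malik anisotropic diffusion step (a standard textbook scheme, not drawn from this paper) smooths within regions while preserving edges:

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=20.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: the conductance decays with
    gradient magnitude, so diffusion stops at edges (lam <= 0.25)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode='edge')       # replicate boundaries
        dn = p[:-2, 1:-1] - u               # north difference
        ds = p[2:, 1:-1] - u                # south
        de = p[1:-1, 2:] - u                # east
        dw = p[1:-1, :-2] - u               # west
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```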
Near Real-Time Image Reconstruction
NASA Astrophysics Data System (ADS)
Denker, C.; Yang, G.; Wang, H.
2001-08-01
In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
NASA Technical Reports Server (NTRS)
Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)
2011-01-01
A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
Applied photo interpretation for airbrush cartography
NASA Technical Reports Server (NTRS)
Inge, J. L.; Bridges, P. M.
1976-01-01
New techniques of cartographic portrayal have been developed for the compilation of maps of lunar and planetary surfaces. Conventional photo interpretation methods utilizing size, shape, shadow, tone, pattern, and texture are applied to computer processed satellite television images. The variety of the image data allows the illustrator to interpret image details by inter-comparison and intra-comparison of photographs. Comparative judgements are affected by illumination, resolution, variations in surface coloration, and transmission or processing artifacts. The validity of the interpretation process is tested by making a representational drawing by an airbrush portrayal technique. Production controls insure the consistency of a map series. Photo interpretive cartographic portrayal skills are used to prepare two kinds of map series and are adaptable to map products of different kinds and purposes.
Applying NASA Imaging Radar Datasets to Investigate the Geomorphology of the Amazon's Planalto
NASA Astrophysics Data System (ADS)
McDonald, K. C.; Campbell, K.; Islam, R.; Alexander, P. M.; Cracraft, J.
2016-12-01
The Amazon basin is a biodiversity-rich biome and plays a significant role in shaping Earth's climate, ocean, and atmospheric gases. Understanding the history of the formation of this basin is essential to our understanding of the region's biodiversity and its response to climate change. During March 2013, the NASA/JPL L-band polarimetric airborne imaging radar, UAVSAR, conducted airborne studies over regions of South America including portions of the western Amazon basin. We utilize UAVSAR imagery acquired during that time over the Planalto, in the Madre de Dios region of southeastern Peru, in an assessment of the underlying geomorphology, its relationship to the current distribution of vegetation, and its relationship to geologic processes through deep time. We employ UAVSAR data collections to assess the utility of these high-quality imaging radar data for identifying geomorphologic features and vegetation communities, within the context of improving the understanding of evolutionary processes, and their utility in aiding interpretation of datasets from Earth-orbiting satellites to support a basin-wide characterization across the Amazon. We derive maps of landcover and river branching structure from UAVSAR imagery. We compare these maps to those derived using imaging radar datasets from the Japanese Space Agency's ALOS PALSAR and Digital Elevation Models (DEMs) from NASA's Shuttle Radar Topography Mission (SRTM). Results provide an understanding of the underlying geomorphology of the Amazon Planalto as well as its relationship to geologic processes, and will support interpretation of the evolutionary history of the Amazon Basin. Portions of this work have been carried out within the framework of the ALOS Kyoto & Carbon Initiative. PALSAR data were provided by JAXA/EORC and the Alaska Satellite Facility. This work is carried out with support from the NASA Biodiversity Program and the NSF DIMENSIONS of Biodiversity Program.
An Explorative Study to Use DBD Plasma Generation for Aircraft Icing Mitigation
NASA Astrophysics Data System (ADS)
Hu, Hui; Zhou, Wenwu; Liu, Yang; Kolbakir, Cem
2017-11-01
An explorative investigation was performed to demonstrate the feasibility of utilizing thermal effect induced by Dielectric-Barrier-Discharge (DBD) plasma generation for aircraft icing mitigation. The experimental study was performed in an Icing Research Tunnel available at Iowa State University (i.e., ISU-IRT). A NACA0012 airfoil/wing model embedded with DBD plasma actuators was installed in ISU-IRT under typical glaze icing conditions pertinent to aircraft inflight icing phenomena. While a high-speed imaging system was used to record the dynamic ice accretion process over the airfoil surface for the test cases with and without switching on the DBD plasma actuators, an infrared (IR) thermal imaging system was utilized to map the corresponding temperature distributions to quantify the unsteady heat transfer and phase changing process over the airfoil surface. The thermal effect induced by DBD plasma generation was demonstrated to be able to keep the airfoil surface staying free of ice during the entire ice accretion experiment. The measured quantitative surface temperature distributions were correlated with the acquired images of the dynamic ice accretion and water runback processes to elucidate the underlying physics. National Science Foundation CBET-1064196 and CBET-1435590.
Reversible integer wavelet transform for blind image hiding method
Bibi, Nargis; Mahmood, Zahid; Akram, Tallha; Naqvi, Syed Rameez
2017-01-01
In this article, a blind, reversible data-hiding methodology for embedding secret data into a cover image is proposed. The key advantage of this research work is to resolve the privacy and secrecy issues raised during data transmission over the internet. First, the data are decomposed into sub-bands using integer wavelets. For decomposition, the Fresnelet transform is utilized, which encrypts the secret data by choosing a unique key parameter to construct a dummy pattern. The dummy pattern is then embedded into an approximation sub-band of the cover image. Our proposed method provides high capacity and great imperceptibility of the embedded secret data. With the utilization of a family of integer wavelets, the proposed approach becomes more efficient for hiding and retrieval. It retrieves the secret hidden data blindly, without requiring the original cover image. PMID:28498855
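The reversibility rests on integer-to-integer wavelets; a minimal integer Haar (S-transform) lifting step, shown below, demonstrates the lossless round trip that makes such hiding reversible (even-length input assumed; this is generic lifting, not the paper's Fresnelet construction):

```python
import numpy as np

def haar_lift_forward(x):
    """Integer Haar (S-transform) lifting on sample pairs: exactly
    invertible in integer arithmetic."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b                 # detail
    s = b + (d >> 1)          # approximation (floored average)
    return s, d

def haar_lift_inverse(s, d):
    b = s - (d >> 1)
    a = d + b
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x

sig = np.array([5, 3, 10, 7, 0, 255], dtype=np.int64)
s, d = haar_lift_forward(sig)
assert np.array_equal(haar_lift_inverse(s, d), sig)   # lossless round trip
```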
Image annotation by deep neural networks with attention shaping
NASA Astrophysics Data System (ADS)
Zheng, Kexin; Lv, Shaohe; Ma, Fang; Chen, Fei; Jin, Chi; Dou, Yong
2017-07-01
Image annotation is the task of assigning semantic labels to an image. Recently, deep neural networks with visual attention have been utilized successfully in many computer vision tasks. In this paper, we show that the conventional attention mechanism is easily misled by the salient class, i.e., the attended region always contains part of the image area describing the content of the salient class at different attention iterations. To this end, we propose a novel attention shaping mechanism, which aims to maximize the non-overlapping area between consecutive attention processes by taking into account the history of previous attention vectors. Several weighting policies are studied to utilize the history information in different manners. On two benchmark datasets, PASCAL VOC2012 and MIRFlickr-25k, the average precision is improved by up to 10% in comparison with state-of-the-art annotation methods.
Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing
NASA Technical Reports Server (NTRS)
Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane
2012-01-01
Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space, and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation covers work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source software process code on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory- and processor-enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
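The two indices named above follow standard band-ratio formulas; a minimal sketch, assuming co-registered band arrays and adding a small epsilon to avoid division by zero:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-12)

def ndmi(nir, swir):
    # Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)
    nir, swir = nir.astype(float), swir.astype(float)
    return (nir - swir) / (nir + swir + 1e-12)
```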
"Seeing is believing": perspectives of applying imaging technology in discovery toxicology.
Xu, Jinghai James; Dunn, Margaret Condon; Smith, Arthur Russell
2009-11-01
Efficiency and accuracy in addressing drug safety issues proactively are critical in minimizing late-stage drug attritions. Discovery toxicology has become a specialty subdivision of toxicology seeking to effectively provide early predictions and safety assessment in the drug discovery process. Among the many technologies utilized to select safer compounds for further development, in vitro imaging technology is one of the best characterized and validated to provide translatable biomarkers towards clinically-relevant outcomes of drug safety. By carefully applying imaging technologies in genetic, hepatic, and cardiac toxicology, and integrating them with the rest of the drug discovery processes, it was possible to demonstrate significant impact of imaging technology on drug research and development and substantial returns on investment.
Single-random-phase holographic encryption of images
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.
2017-02-01
In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into 3 stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, but at the same time the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bi-directional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to different forms of plaintext attacks, which are commonly used to deduce the encryption key in existing holographic encryption processes. In addition, both the encryption and the decryption processes are simple and easy to implement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, D; Mlady, G; Selwyn, R
Purpose: To bring together radiologists, technologists, and physicists to utilize post-processing techniques in digital radiography (DR) in order to optimize image acquisition and improve image quality. Methods: Sub-optimal images acquired on a new General Electric (GE) DR system were flagged for follow-up by radiologists and reviewed by technologists and medical physicists. Various exam types from adult musculoskeletal (n=35), adult chest (n=4), and pediatric (n=7) were chosen for review; 673 total images were reviewed. These images were processed using five customized algorithms provided by GE. An image score sheet was created allowing the radiologist to assign a numeric score to each of the processed images, allowing objective comparison to the original images. Each image was scored on seven properties: 1) overall image look, 2) soft tissue contrast, 3) high contrast, 4) latitude, 5) tissue equalization, 6) edge enhancement, 7) visualization of structures. Additional space allowed for comments not captured in the scoring categories. Radiologists scored the images from 1-10, with 1 being non-diagnostic quality and 10 being superior diagnostic quality. Scores for each custom algorithm for each image set were summed, and the algorithm with the highest score for each image set was then set as the default processing. Results: Images placed into the PACS "QC folder" for image processing reasons decreased, and overall feedback from radiologists was that image quality for these studies had improved. All default processing for these image types was changed to the new algorithm. Conclusion: This work is an example of the collaboration between radiologists, technologists, and physicists at the University of New Mexico to add value to the radiology department. The significant amount of work required to prepare the processing algorithms and to reprocess and score the images was eagerly taken on by all team members in order to produce better quality images and improve patient care.
ERIC Educational Resources Information Center
Vogelaar, Robert J.
2005-01-01
In this project a product to aid educational leaders in the process of communicating in crisis situations is presented. The product was created and received a formative evaluation using an educational research and development methodology. Ultimately, an administrative training course that utilized an Image Repair Situational Theory was developed.…
Meter-Scale 3-D Models of the Martian Surface from Combining MOC and MOLA Data
NASA Technical Reports Server (NTRS)
Soderblom, Laurence A.; Kirk, Randolph L.
2003-01-01
We have extended our previous efforts to derive through controlled photoclinometry, accurate, calibrated, high-resolution topographic models of the martian surface. The process involves combining MGS MOLA topographic profiles and MGS MOC Narrow Angle images. The earlier work utilized, along with a particular MOC NA image, the MOLA topographic profile that was acquired simultaneously, in order to derive photometric and scattering properties of the surface and atmosphere so as to force the low spatial frequencies of a one-dimensional MOC photoclinometric model to match the MOLA profile. Both that work and the new results reported here depend heavily on successful efforts to: 1) refine the radiometric calibration of MOC NA; 2) register the MOC to MOLA coordinate systems and refine the pointing; and 3) provide the ability to project into a common coordinate system, simultaneously acquired MOC and MOLA with a single set of SPICE kernels utilizing the USGS ISIS cartographic image processing tools. The approach described in this paper extends the MOC-MOLA integration and cross-calibration procedures from one-dimensional profiles to full two-dimensional photoclinometry and image simulations. Included are methods to account for low-frequency albedo variations within the scene.
CR softcopy display presets based on optimum visualization of specific findings
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.
1999-07-01
The purpose of this research is to assess the utility of providing presets for computed radiography (CR) softcopy display, based not on the window/level settings, but on image processing applied to the image based on optimization for visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner, and transferred over the PACS network to an image processing station which has the capability to perform multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the multiscale image contrast amplification algorithm parameters. Softcopy display of images processed with finding-specific settings are compared with the standard default image presentation for fifty cases of each category. Comparison is scored using a five point scale with positive one and two denoting the standard presentation is preferred over the finding-specific presets, negative one and two denoting the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.
The Pan-STARRS PS1 Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Magnier, E.
The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and occasionally uses the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey, the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel hardware configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.
Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra
NASA Astrophysics Data System (ADS)
Luo, Yi; Celenk, Mehmet; Bejai, Prashanth
2006-03-01
A new algorithm that can automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain cell boundaries from cell images and isolate the cells from the surrounding background. The areas of cells are extracted from cell images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for the different cell types and to extract the feature vectors of test cells. The test cells' feature vectors are then compared with the known class feature vectors for a possible match by computing Euclidean distances. The cell in question is assigned to the existing cell class that is nearest in the Euclidean-distance sense.
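A reduced sketch of the matching stage: Radon projections as features (omitting the higher-order spectra step) and nearest-class assignment by Euclidean distance. The angle count and the class-mean dictionary are assumptions:

```python
import numpy as np
from skimage.transform import radon

def radon_feature(cell_image, n_angles=36):
    """Angle-sampled Radon projections summarized into a feature vector
    (the paper additionally applies higher-order spectra, omitted here)."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(cell_image.astype(float), theta=theta, circle=False)
    return sinogram.mean(axis=0)   # one summary value per projection angle

def classify(feature, class_means):
    """Assign the cell to the class whose mean feature vector is nearest
    in the Euclidean sense, mirroring the paper's matching rule."""
    dists = {name: np.linalg.norm(feature - mu)
             for name, mu in class_means.items()}
    return min(dists, key=dists.get)
```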
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet shrinkage, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality.
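The coefficient-shrinkage stage can be illustrated with standard wavelet soft thresholding (PyWavelets); the paper's phase-domain speckle pre-adjustment is a separate, earlier step not reproduced here:

```python
import numpy as np
import pywt

def wavelet_shrink(image, wavelet='db4', level=3, sigma=None):
    """Soft-threshold wavelet shrinkage with the universal threshold."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    if sigma is None:
        # Robust noise estimate from the finest diagonal subband.
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(image.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode='soft') for c in detail)
        for detail in coeffs[1:]
    ]
    # Output may be a pixel larger than the input for odd sizes.
    return pywt.waverec2(new_coeffs, wavelet)
```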
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, S. F.; Izumi, N.; Glenn, S.
At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for time-integrated shape analysis from image plates. For implosions with temperatures above ~4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
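Extracting the Fourier asymmetry modes from a sampled contour radius r(θ) is straightforward with the FFT; this sketch assumes uniform angular sampling and reports fractional mode amplitudes (the Legendre decomposition is analogous but not shown):

```python
import numpy as np

def contour_fourier_modes(radii, n_modes=4):
    """Decompose a hot-spot intensity-contour radius r(theta), sampled
    uniformly in angle, into its low-order Fourier asymmetry modes."""
    c = np.fft.rfft(radii) / radii.size
    r0 = c[0].real                          # mean contour radius
    amps = 2.0 * np.abs(c[1:n_modes + 1])   # mode amplitudes
    phases = np.angle(c[1:n_modes + 1])
    return r0, amps / r0, phases            # fractional asymmetries
```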
NASA Astrophysics Data System (ADS)
AlShamsi, Meera R.
2016-10-01
In recent years, there has been extensive urban development across the UAE. Dubai is one of the cities that has experienced rapid growth in both development and population, and that growth can have a negative effect on the surrounding environment. Hence, there is a need to protect the environment from these fast-paced changes. One of the major impacts this growth can have is on vegetation. As technology evolves, it has become possible to monitor changes occurring in different areas of the world using satellite imagery. The data from these imageries can be utilized to identify vegetation in different areas of an image through a process called vegetation detection. Being able to detect and monitor vegetation is very beneficial for municipal planning and management and for environment authorities. Through this, analysts can monitor vegetation growth in various areas and analyze the changes. By utilizing satellite imagery with the necessary data, different types of vegetation can be studied and analyzed, such as parks, farms, and artificial grass in sports fields. In this paper, vegetation features are detected and extracted through the SAFIY system (the Smart Application for Feature extraction and 3D modeling using high resolution satellite ImagerY) by using high-resolution satellite imagery from the DubaiSat-2 and DEIMOS-2 satellites, which provide panchromatic images of 1 m resolution and spectral bands (red, green, blue, and near-infrared) of 4 m resolution. The SAFIY system is a joint collaboration between MBRSC and DEIMOS Space UK. It uses image-processing algorithms to extract different features (roads, water, vegetation, and buildings) and generate vector map data. The process to extract green areas (vegetation) utilizes spectral information (such as the red and near-infrared bands) from the satellite images. The detected vegetation features are extracted as vector data in the SAFIY system and can be updated and edited by end-users, such as governmental entities and municipalities.
A specialized plug-in software module for computer-aided quantitative measurement of medical images.
Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H
2003-12-01
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement, and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability, and ease of use.
Backscatter absorption gas imaging system
McRae, Jr., Thomas G.
1985-01-01
A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
Sakurai, T; Kawamata, R; Kozai, Y; Kaku, Y; Nakamura, K; Saito, M; Wakao, H; Kashima, I
2010-05-01
The aim of the study was to clarify the change in image quality upon X-ray dose reduction and to re-analyse the possibility of X-ray dose reduction in photostimulable phosphor luminescence (PSPL) X-ray imaging systems. In addition, the study attempted to verify the usefulness of multiobjective frequency processing (MFP) and flexible noise control (FNC) for X-ray dose reduction. Three PSPL X-ray imaging systems were used in this study. Modulation transfer function (MTF), noise equivalent number of quanta (NEQ) and detective quantum efficiency (DQE) were evaluated to compare the basic physical performance of each system. Subjective visual evaluation of diagnostic ability for normal anatomical structures was performed. The NEQ, DQE and diagnostic ability were evaluated at base X-ray dose, and 1/3, 1/10 and 1/20 of the base X-ray dose. The MTF of the systems did not differ significantly. The NEQ and DQE did not necessarily depend on the pixel size of the system. The images from all three systems had a higher diagnostic utility compared with conventional film images at the base and 1/3 X-ray doses. The subjective image quality was better at the base X-ray dose than at 1/3 of the base dose in all systems. The MFP and FNC-processed images had a higher diagnostic utility than the images without MFP and FNC. The use of PSPL imaging systems may allow a reduction in the X-ray dose to one-third of that required for conventional film. It is suggested that MFP and FNC are useful for radiation dose reduction.
The Utility of the Extended Images in Ambient Seismic Wavefield Migration
NASA Astrophysics Data System (ADS)
Girard, A. J.; Shragge, J. C.
2015-12-01
Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and widely used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags into the imaging conditions. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural, or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternative method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and for verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, as in active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended-image domain. In synthetic ambient imaging tests with varying degrees of error introduced into the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while the zero lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and an important input for performing AMVA from ambient seismic recordings.
Solution processed integrated pixel element for an imaging device
NASA Astrophysics Data System (ADS)
Swathi, K.; Narayan, K. S.
2016-09-01
We demonstrate the implementation of a solid state circuit/structure comprising a high-performing polymer field effect transistor (PFET), utilizing an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric, and a bulk-heterostructure based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high-performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisite to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with the unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results of three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction of wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
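As a hedged illustration of the data-colocation idea (not the authors' actual schema or API), here is a sketch using the happybase HBase client in which each image is a row keyed by subject and session, so computation can be scheduled near the region servers holding the rows; the host, table and column names are hypothetical:

```python
import happybase

# Assumed host and table; hypothetical schema for illustration only.
conn = happybase.Connection('hbase-master.example.org')
table = conn.table('mip_images')

def upload(subject_id, session_id, image_bytes):
    # The row key groups a subject's sessions so related images colocate
    # in the same HBase region, keeping computation near the data.
    table.put(f'{subject_id}:{session_id}'.encode(),
              {b'img:data': image_bytes})

def retrieve(subject_id, session_id):
    row = table.row(f'{subject_id}:{session_id}'.encode())
    return row.get(b'img:data')

def remove(subject_id, session_id):
    table.delete(f'{subject_id}:{session_id}'.encode())
```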
Within-subject template estimation for unbiased longitudinal image analysis.
Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce
2012-07-16
Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects.
Automated analysis of hot spot X-ray images at the National Ignition Facility
NASA Astrophysics Data System (ADS)
Khan, S. F.; Izumi, N.; Glenn, S.; Tommasini, R.; Benedetti, L. R.; Ma, T.; Pak, A.; Kyrala, G. A.; Springer, P.; Bradley, D. K.; Town, R. P. J.
2016-11-01
At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. For implosions with temperatures above ~4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
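A minimal sketch of the contour-mode decomposition step, assuming the hot-spot contour has already been extracted and resampled as a radius r(θ) at equally spaced angles; the synthetic contour below is illustrative:

```python
import numpy as np

def fourier_modes(r_theta, n_modes=4):
    # r_theta: contour radius sampled at equally spaced angles.
    c = np.fft.rfft(r_theta) / len(r_theta)
    a0 = c[0].real                        # mean radius
    modes = 2 * np.abs(c[1:n_modes + 1])  # amplitudes of modes 1..n
    return a0, modes / a0                 # normalized asymmetry amplitudes

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 50 + 3 * np.cos(2 * theta)            # synthetic mode-2 distortion
print(fourier_modes(r))                   # mean ~50, mode 2 ~ 0.06
```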
ERIC Educational Resources Information Center
Ginicola, Misty M.; Smith, Cheri; Trzaska, Jessica
2012-01-01
Creative approaches to counseling help counselors to meet the needs of diverse populations. The utility of photography in counseling has been demonstrated through several case studies; however, clear implications of how photography relates to the counseling process have not been well delineated. The existing literature on phototherapy is reviewed…
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye.
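A hedged sketch of the optimization loop: in the paper the fitness is supplied interactively by human visual judgment, which is replaced here by an automated proxy (contrast of the separated layer) purely so the example runs end to end; the parameterization of the separation is also a simplification:

```python
import numpy as np
from scipy.optimize import differential_evolution

def separate(img_rgb, params):
    # Separate pixels near a target color within a tolerance.
    target, tol = np.array(params[:3]), params[3]
    dist = np.linalg.norm(img_rgb - target, axis=-1)
    return (dist < tol).astype(float)

def proxy_fitness(params, img_rgb):
    # Stand-in for interactive human judgment: maximize layer contrast.
    return -separate(img_rgb, params).std()

img = np.random.rand(64, 64, 3)            # stand-in test image
bounds = [(0, 1)] * 3 + [(0.05, 0.8)]      # RGB target + tolerance
res = differential_evolution(proxy_fitness, bounds, args=(img,),
                             maxiter=20, seed=0)
print(res.x)                               # optimized separation parameters
```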
MEMS-based thermally-actuated image stabilizer for cellular phone camera
NASA Astrophysics Data System (ADS)
Lin, Chun-Ying; Chiou, Jin-Chern
2012-11-01
This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract hand vibrations when people use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm3 and is strong enough to suspend an image sensor. The process utilized to fabricate the IS includes inductive coupled plasma (ICP) processes, reactive ion etching (RIE) processes and the flip-chip bonding method. The IS is designed to enable the electrical signals from the suspended image sensor to be successfully routed out using signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm when the driving current is 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can decrease hand tremor by 72.5%.
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background: Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and images that contain lung nodules. Method: Our proposed method first uses the position of lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results: Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
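A hedged sketch of the generic pipeline behind this method, with the authors' GSLIC and GA-optimized SGNF replaced by their standard counterparts (scikit-image SLIC and k-means) operating on grey and geometric superpixel features:

```python
import numpy as np
from skimage.segmentation import slic
from scipy.cluster.vq import kmeans2

def superpixel_cluster(ct_slice, n_segments=400, k=3):
    # Over-segment the grayscale CT slice into superpixels.
    labels = slic(ct_slice, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    # One feature vector per superpixel: mean grey value + centroid.
    feats = []
    for lab in np.unique(labels):
        m = labels == lab
        ys, xs = np.nonzero(m)
        feats.append([ct_slice[m].mean(), ys.mean(), xs.mean()])
    # Cluster superpixels (paper uses a GA-optimized SGNF instead).
    _, assign = kmeans2(np.asarray(feats), k, seed=0)
    return labels, assign
```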
Architecture for a PACS primary diagnosis workstation
NASA Astrophysics Data System (ADS)
Shastri, Kaushal; Moran, Byron
1990-08-01
A major factor in determining the overall utility of a medical Picture Archiving and Communication System (PACS) is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry et al. [2] have defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide the functions of current viewing systems (e.g. mechanical alternators [3,4]), such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture which provides high speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC) which meets these requirements.
Laser-induced acoustic imaging of underground objects
NASA Astrophysics Data System (ADS)
Li, Wen; DiMarzio, Charles A.; McKnight, Stephen W.; Sauermann, Gerhard O.; Miller, Eric L.
1999-02-01
This paper introduces a new demining technique based on the photo-acoustic interaction, together with results from photo-acoustic experiments. We have buried different types of targets (metal, rubber and plastic) in different media (sand, soil and water) and imaged them by measuring the reflection of acoustic waves generated by irradiation with a CO2 laser. Research has focused on signal acquisition and signal processing. A deconvolution method using Wiener filters is utilized in data processing. Using a uniform spatial distribution of laser pulses at the ground's surface, we obtained 3D images of buried objects. The images give us a clear representation of the shapes of the underground objects. The quality of the images depends on the mismatch of acoustic impedance of the buried objects, the bandwidth and center frequency of the acoustic sensors, and the selection of filter functions.
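A minimal sketch of the Wiener deconvolution step, assuming a measured or modeled impulse response for the laser/sensor chain; K is the noise-dependent regularization constant:

```python
import numpy as np

def wiener_deconvolve(trace, impulse_response, K=1e-2):
    # Deconvolve a recorded acoustic trace in the frequency domain:
    # G = H* / (|H|^2 + K) approximates 1/H while suppressing noise.
    n = len(trace)
    H = np.fft.rfft(impulse_response, n)
    Y = np.fft.rfft(trace, n)
    G = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.fft.irfft(Y * G, n)
```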
Process for combining multiple passes of interferometric SAR data
Bickel, Douglas L.; Yocky, David A.; Hensley, Jr., William H.
2000-11-21
Interferometric synthetic aperture radar (IFSAR) is a promising technology for a wide variety of military and civilian elevation modeling requirements. IFSAR extends traditional two dimensional SAR processing to three dimensions by utilizing the phase difference between two SAR images taken from different elevation positions to determine an angle of arrival for each pixel in the scene. This angle, together with the two-dimensional location information in the traditional SAR image, can be transformed into geographic coordinates if the position and motion parameters of the antennas are known accurately.
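A minimal sketch of the angle-of-arrival core of this process: the interferometric phase is the per-pixel phase of one coregistered complex SAR image multiplied by the conjugate of the other; phase unwrapping and the known baseline geometry (not shown) then convert phase to elevation:

```python
import numpy as np

def interferogram(s1, s2):
    # s1, s2: coregistered complex SAR images acquired from two
    # different elevation positions. Returns wrapped phase in radians.
    return np.angle(s1 * np.conj(s2))
```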
Automatic Feature Extraction from Planetary Images
NASA Technical Reports Server (NTRS)
Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.
2010-01-01
With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired, and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.
Artificial intelligence and signal processing for infrastructure assessment
NASA Astrophysics Data System (ADS)
Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif
2015-04-01
Ground Penetrating Radar (GPR) is recognized as an effective nondestructive evaluation technique for improving the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of using this technique. This is mainly due to the need for a trained, experienced person to interpret images obtained by the GPR system. In this paper, an algorithm to classify and assess the condition of infrastructure utilizing image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defected and healthy slabs are used to train a computer-vision-based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.
Two-dimensional signal processing with application to image restoration
NASA Technical Reports Server (NTRS)
Assefi, T.
1974-01-01
A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
Joint Processing of Envelope Alignment and Phase Compensation for ISAR Imaging
NASA Astrophysics Data System (ADS)
Chen, Tao; Jin, Guanghu; Dong, Zhen
2018-04-01
Range envelope alignment and phase compensation are split into two isolated parts in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classical method of rotating-object imaging, the two reference points of the envelope alignment and the Phase Difference (PD) estimation are probably not the same point, making it difficult to uncouple the coupling term by conducting the correction of Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a certain scattering point can be chosen. The envelope alignment and phase compensation using the selected scattering point as the common reference point are subsequently conducted. The keystone transform is then smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.
An Electro-Optical Image Algebra Processing System for Automatic Target Recognition
NASA Astrophysics Data System (ADS)
Coffield, Patrick Cyrus
The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations is directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of the basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution-type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all-digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications.
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing technologies (RC) such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of the technologies required for the design and construction of SVIP and EASI and to advance the spatial-spectral imaging and large-scale space interferometry science and engineering.
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.
2016-03-01
A requirement for reconstructing a photoacoustic (PA) image is to have channel data acquisition synchronized with laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of signal processing steps including beamforming, followed by envelope detection, and ending with log compression. Yet it will be defocused when PA signals are the input, due to an incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic aperture based PA rebeamforming algorithm can be further applied. Taking the B-mode image as the input, we first recover US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. Then, the US post-beamformed RF data is utilized as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focus depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation, and was experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved 3.97-fold. Comparing this result to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
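A hedged sketch of the first step, inverting the B-mode processing chain; the displayed dynamic range, carrier frequency and sampling rate are assumed values, and the cosine-modulated pulse is a stand-in for the true acoustic impulse response:

```python
import numpy as np

def bmode_to_rf(bmode, dyn_range_db=60.0, fc=5e6, fs=40e6):
    # Invert log compression (B-mode assumed scaled to [0, 1]).
    env = 10 ** (dyn_range_db * (bmode - 1.0) / 20.0)
    # Re-modulate each scan line (axis 0 = depth) with a short
    # cosine-modulated pulse to restore carrier frequency content.
    t = np.arange(-32, 33) / fs
    pulse = np.cos(2 * np.pi * fc * t) * np.hanning(len(t))
    return np.apply_along_axis(
        lambda line: np.convolve(line, pulse, mode='same'), 0, env)
```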
Single-image hard-copy display of the spine utilizing digital radiography
NASA Astrophysics Data System (ADS)
Artz, Dorothy S.; Janchar, Timothy; Milzman, David; Freedman, Matthew T.; Mun, Seong K.
1997-04-01
Regions of the entire spine contain a wide latitude of tissue densities within the imaged field of view, presenting a problem for adequate radiological evaluation. With screen/film technology, the optimal technique for one area of the radiograph is sub-optimal for another area. Computed radiography (CR), with its inherent wide dynamic range, has been shown to be better than screen/film for lateral cervical spine imaging, but limitations are still present with standard image processing. By utilizing a dynamic range control (DRC) algorithm based on unsharp masking and signal transformation prior to gradation and frequency processing within the CR system, more vertebral bodies can be seen on a single hard copy display of the lateral cervical, thoracic, and thoracolumbar examinations. Examinations of the trauma cross-table lateral cervical spine, lateral thoracic spine, and lateral thoracolumbar spine were collected on live patients using photostimulable storage phosphor plates, the Fuji FCR 9000 reader, and the Fuji AC-3 computed radiography reader. Two images were produced from a single exposure: one with standard image processing, and a second with the standard process plus the additional DRC algorithm. Both sets were printed from a Fuji LP 414 laser printer. Two different DRC algorithms were applied depending on which portion of the spine was not well visualized: one algorithm increased optical density and the second decreased optical density. The resultant image pairs were then reviewed by a panel of radiologists. Images produced with the additional DRC algorithm demonstrated improved visualization of previously 'under-exposed' and 'over-exposed' regions within the same image. Where lung field had previously obscured bony detail of the lateral thoracolumbar spine due to 'over-exposure', the image with the DRC applied to decrease the optical density allowed easy visualization of the entire area of interest. For areas of the lateral cervical spine and lateral thoracic spine that typically have a low optical density value, the DRC algorithm increased the optical density over that region, improving visualization of the C7-T2 and T11-L2 vertebral bodies, which is critical in trauma radiography. Emergency medicine physicians also reviewing the lateral cervical spine images were able to clear 37% of the DRC images, compared to 30% of the non-DRC images, for removal of the cervical collar. The DRC-processed images reviewed by the physicians do not have a typical screen/film appearance; however, these different images were preferred for the three examinations in this study. This method of image processing, having been tested and accepted, is in clinical use at Georgetown University Medical Center Department of Radiology for the following examinations: cervical spine, lateral thoracic spine, lateral thoracolumbar examinations, facial bones, shoulder, sternum, feet and portable chest. Computed radiography imaging of the spine is improved with the addition of histogram equalization known as dynamic range control (DRC). More anatomical structures are visualized on a single hard copy display.
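A hedged sketch of unsharp-masking-based dynamic range control of the kind described above: a heavily blurred copy isolates the coarse density trend, which is then scaled to compress dense regions or lift lucent ones; the sigma and gain values are illustrative, not the vendor's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def drc(image, sigma=30.0, gain=-0.4):
    # Isolate the low-frequency density trend with a wide blur.
    low = gaussian_filter(image, sigma)
    # gain < 0 compresses dense (over-exposed) regions;
    # gain > 0 lifts lucent (under-exposed) regions.
    return image + gain * (low - low.mean())
```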
Hyperspectral imaging from space: Warfighter-1
NASA Astrophysics Data System (ADS)
Cooley, Thomas; Seigel, Gary; Thorsos, Ivan
1999-01-01
The Air Force Research Laboratory Integrated Space Technology Demonstrations (ISTD) Program Office has partnered with Orbital Sciences Corporation (OSC) to complement the commercial satellite's high-resolution panchromatic imaging and Multispectral imaging (MSI) systems with a moderate resolution Hyperspectral imaging (HSI) spectrometer camera. The program is an advanced technology demonstration utilizing a commercially based space capability to provide unique functionality in remote sensing technology. This leveraging of commercial industry to enhance the value of the Warfighter-1 program utilizes the precepts of acquisition reform and is a significant departure from the old-school method of contracting for government managed large demonstration satellites with long development times and technology obsolescence concerns. The HSI system will be able to detect targets from the spectral signature measured by the hyperspectral camera. The Warfighter-1 program will also demonstrate the utility of the spectral information to theater military commanders and intelligence analysts by transmitting HSI data directly to a mobile ground station that receives and processes the data. After a brief history of the project origins, this paper will present the details of the Warfighter-1 system and expected results from exploitation of HSI data as well as the benefits realized by this collaboration between the Air Force and commercial industry.
Height Control and Deposition Measurement for the Electron Beam Free Form Fabrication (EBF3) Process
NASA Technical Reports Server (NTRS)
Hafley, Robert A. (Inventor); Seufzer, William J. (Inventor)
2017-01-01
A method of controlling a height of an electron beam gun and wire feeder during an electron freeform fabrication process includes utilizing a camera to generate an image of the molten pool of material. The image generated by the camera is utilized to determine a measured height of the electron beam gun relative to the surface of the molten pool. The method further includes ensuring that the measured height is within the range of acceptable heights of the electron beam gun relative to the surface of the molten pool. The present invention also provides for measuring a height of a solid metal deposit formed upon cooling of a molten pool. The height of a single point can be measured, or a plurality of points can be measured to provide 2D or 3D surface height measurements.
Real-time FPGA architectures for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in a FPGA, but it can be implemented on a dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resources utilization, FPGA limitations, and real time performance are discussed. Some results are presented and discussed.
Land classification of south-central Iowa from computer enhanced images
NASA Technical Reports Server (NTRS)
Lucas, J. R.; Taranik, J. V.; Billingsley, F. C. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Enhanced LANDSAT imagery was most useful for land classification purposes, because these images could be photographically printed at large scales such as 1:63,360. The ability to see individual picture elements was no hindrance as long as general image patterns could be discerned. Low cost photographic processing systems for color printings have proved to be effective in the utilization of computer enhanced LANDSAT products for land classification purposes. The initial investment for this type of system was very low, ranging from $100 to $200 beyond a black and white photo lab. The technical expertise can be acquired from reading a color printing and processing manual.
Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints
NASA Astrophysics Data System (ADS)
Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena
2012-04-01
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an effective way for us to enhance the image quality at the matched regions between the prior and current images compared to the existing PICCS algorithm. Compared to the current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times due to the greatly reduced number of projections and lower x-ray tube current level coming from the low-dose protocol.
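A hedged sketch of the adaptive relaxation map construction described above, with illustrative HU thresholds for the air / soft tissue / bone classification and an assumed exponential falloff from mismatched regions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def relaxation_map(prior, current, falloff=20.0):
    # Classify voxels into air / soft tissue / bone (illustrative cuts).
    bins = [-400.0, 300.0]
    mismatch = np.digitize(prior, bins) != np.digitize(current, bins)
    # Distance from every voxel to the nearest mismatched voxel.
    dist = distance_transform_edt(~mismatch)
    # Weight ~1 where anatomy changed (update freely from projections),
    # decaying toward matched regions (lean on the prior image there).
    return np.exp(-dist / falloff)
```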
Radar image processing for rock-type discrimination
NASA Technical Reports Server (NTRS)
Blom, R. G.; Daily, M.
1982-01-01
Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques include mean and variance correction on a range or azimuth line-by-line basis to provide uniformly illuminated swaths, median value filtering of four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of preprocessing methods to Seasat and Landsat data, and Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to distort the radar picture to fit the Landsat image of a 90 x 90 km grid, using Landsat color ratios with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types in known areas. Seasat additions to the Landsat data improved rock identification by 7%.
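A minimal sketch of the two preprocessing steps named above, per-line mean/variance normalization and median-value speckle filtering:

```python
import numpy as np
from scipy.ndimage import median_filter

def normalize_lines(img, axis=1):
    # Equalize mean and variance along each range or azimuth line
    # so the swath is uniformly illuminated.
    mean = img.mean(axis=axis, keepdims=True)
    std = img.std(axis=axis, keepdims=True) + 1e-9
    return (img - mean) / std

def despeckle(img, size=3):
    # Median-value filtering to suppress speckle in multi-look imagery.
    return median_filter(img, size=size)
```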
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors visit Russia. Most of these visitors may have trouble typing Russian words when using a digital dictionary. This is because the Cyrillic letters used in Russia and the countries around it differ in shape from Latin letters, and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera is used to capture an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation and thinning. Next, the feature extraction process is applied to the image. Cyrillic letter recognition in the image is done by utilizing the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters from computer-generated images. On the other hand, SOM successfully recognizes 88.89% of Cyrillic letters from images captured by a smartphone camera. For word recognition, SOM successfully recognized 292 words and partially recognized 58 words from images captured by the smartphone's camera. Therefore, the accuracy of word recognition using SOM is 83.42%.
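A minimal numpy-only sketch of SOM training of the kind used here; the input would be feature vectors extracted from the segmented glyph images, and the grid size and learning schedules are illustrative:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0):
    # data: (n_samples, n_features) glyph feature vectors.
    rng = np.random.default_rng(0)
    w = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(w - x, axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)   # best unit
        frac = t / iters
        lr = lr0 * (1 - frac)                            # decaying rate
        sig = sigma0 * (1 - frac) + 0.5                  # shrinking radius
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sig ** 2))
        w += lr * h[..., None] * (x - w)                 # pull toward x
    return w   # classify a glyph by its best-matching unit's label
```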
Radar Image Interpretability Analysis.
1981-01-01
The utility of radar images with respect to trained image interpreters' ability to identify, classify and detect specific terrain features changed with image application. This study has provided useful information as to how certain measured image characteristics relate to radar image utility.
Suzuki, Kazuhiko; Oho, Eisaku
2013-01-01
Quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal from SEM images; the noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands, CHS is little utilized, though it has several advantages for SEM. For example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wide application of the CHS method in microscopy, the characteristics of CHS, which until now have not been well clarified, are evaluated correctly. As an application of the results obtained by this evaluation, the cursor width (CW), which is the sole processing parameter of CHS, is determined more properly using the standard deviation of the noise Nσ. In addition, the disadvantage that CHS cannot remove noise of excessively large amplitude is remedied by a certain postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes.
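A hedged sketch of the hysteresis principle behind CHS, applied to one scan line: the output holds its value while the signal stays inside a band of cursor width CW, flattening noise excursions smaller than the band while following real signal changes; per the text, CW would be chosen from the noise standard deviation Nσ:

```python
import numpy as np

def hysteresis_smooth(signal, cw):
    # Dead-band follower: the output y moves only when the signal
    # escapes the band [y - cw/2, y + cw/2], so noise within the
    # band is flattened while larger (real) transitions pass through.
    out = np.empty(len(signal), dtype=float)
    y = float(signal[0])
    half = cw / 2.0
    for i, s in enumerate(signal):
        if s > y + half:        # escaped the band upward
            y = s - half
        elif s < y - half:      # escaped downward
            y = s + half
        out[i] = y
    return out
```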
Liquid crystal thermography and true-colour digital image processing
NASA Astrophysics Data System (ADS)
Stasiek, J.; Stasiek, A.; Jewartowski, M.; Collins, M. W.
2006-06-01
In the last decade thermochromic liquid crystals (TLC) and true-colour digital image processing have been successfully used in non-intrusive technical, industrial and biomedical studies and applications. Thin coatings of TLCs at surfaces are utilized to obtain detailed temperature distributions and heat transfer rates for steady or transient processes. Liquid crystals can also be used to make visible the temperature and velocity fields in liquids by the simple expedient of directly mixing the liquid crystal material into the liquid (water, glycerol, glycol, and silicone oils) in very small quantities to act as thermal and hydrodynamic tracers. In biomedical situations, e.g. skin diseases, breast cancer, blood circulation and other medical applications, TLC and image processing are successfully used as an additional non-invasive diagnostic method, especially useful for screening large groups of potential patients. The history of this technique is reviewed, principal methods and tools are described and some examples are also presented.
GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing
NASA Astrophysics Data System (ADS)
Johl, John T.; Baker, Nick C.
1988-10-01
The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
FTOOLS: A FITS Data Processing and Analysis Software Package
NASA Astrophysics Data System (ADS)
Blackburn, J. Kent; Greene, Emily A.; Pence, William
1993-05-01
FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. The collection of utilities provides both generic processing and analysis utilities and utilities common to high energy astrophysics data sets. The FTOOLS software package is designed to be both compatible with IRAF and completely stand alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self documenting through the IRAF help facility and a stand alone help task. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
Applying industrial engineering practices to radiology.
Rosen, Len
2004-01-01
Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging departments and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first implemented industrial engineering methodology for medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than $500,000 of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than $140,000. The medical imaging department in this hospital is only now beginning to apply what it has learned to other factors contributing to case cost. It has started to analyze its service contracts with equipment vendors. The department also is accumulating data to measure room, equipment, and labor utilization. The hospital now has a true picture of the real cost associated with each patient encounter in medical imaging. It can now begin to manage case costs, perform better capacity planning, create more effective relationships with its material suppliers, and optimize scheduling of patients and staff.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Plaza, Javier; Paz, Abel
2010-10-01
Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
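A minimal sketch of the lossy wavelet compression idea applied to one band before it is exchanged over the network, using PyWavelets; the wavelet, decomposition level and kept-coefficient fraction are illustrative:

```python
import numpy as np
import pywt

def compress_band(band, wavelet='db4', level=3, keep=0.05):
    # Decompose, zero out all but the largest coefficients, rebuild.
    coeffs = pywt.wavedec2(band, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)   # keep largest 5%
    arr[np.abs(arr) < thresh] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    return pywt.waverec2(coeffs, wavelet)
```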
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
High-performance image processing architecture
NASA Astrophysics Data System (ADS)
Coffield, Patrick C.
1992-04-01
The proposed architecture is a logical design specifically for image processing and other related computations. The design is a hybrid electro-optical concept consisting of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations are directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined by an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how elegantly it handles the natural decomposition of algebraic functions into spatially distributed, point-wise operations. The effect of this particular decomposition allows convolution type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The logical architecture may take any number of physical forms. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control all the arithmetic and logic operations of the image algebra's generalized matrix product. This is the most powerful fundamental formulation in the algebra, thus allowing a wide range of applications.
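The claim that convolution cost scales with the number of template elements rather than with a per-pixel neighborhood scan can be illustrated with a shift-and-accumulate formulation: each nonzero template weight contributes one shifted copy of the whole image. The toy Python sketch below shows only this algebraic structure; the paper's design is an electro-optical architecture, not software.

```python
import numpy as np

def template_convolve(image, template):
    """Shift-and-accumulate convolution: the loop runs once per template
    element, so the work grows with template size. np.roll wraps at the
    image borders (a toy boundary rule for illustration)."""
    out = np.zeros(image.shape, dtype=float)
    kh, kw = template.shape
    cy, cx = kh // 2, kw // 2
    for dy in range(kh):
        for dx in range(kw):
            w = template[dy, dx]
            if w != 0.0:
                out += w * np.roll(image, (cy - dy, cx - dx), axis=(0, 1))
    return out
```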
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that blends cuckoo search and particle swarm optimization (CS-PSO) with a modified quality measure to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
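For concreteness, the incomplete Beta transformation at the core of such methods can be sketched as below; SciPy's regularized incomplete Beta function is used, and (a, b) are exactly the parameters a CS-PSO-style optimizer would search over (the fitness criterion itself is omitted).

```python
import numpy as np
from scipy.special import betainc

def beta_enhance(img, a, b):
    """Global contrast enhancement via the regularized incomplete Beta
    function; (a, b) > 0 shape the gray-level mapping."""
    x = (img.astype(float) - img.min()) / max(float(np.ptp(img)), 1e-12)
    return (betainc(a, b, x) * 255.0).astype(np.uint8)
```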
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-01-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixelwise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement of image quality. PMID:28663860
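The coefficient-shrinkage stage referred to here is, in its generic form, wavelet soft-thresholding; a minimal sketch with PyWavelets and the Donoho-Johnstone universal threshold follows (the paper's exact shrinkage rule and the phase-domain delineation step are not reproduced).

```python
import numpy as np
import pywt

def wavelet_shrink(img, wavelet="sym8", level=4):
    """Soft-threshold all detail subbands; the noise sigma is estimated
    from the median absolute value of the finest diagonal subband."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in band)
        for band in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised, wavelet)
    return rec[: img.shape[0], : img.shape[1]]   # trim any odd-size padding
```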
Rodríguez, Jaime; Martín, María T; Herráez, José; Arias, Pedro
2008-12-10
Photogrammetry is a science with many fields of application in civil engineering where image processing is used for different purposes. In most cases, multiple images are used simultaneously for the reconstruction of 3D scenes. However, the use of isolated images is becoming more and more frequent, for which it is necessary to calculate the orientation of the image with respect to the object space (exterior orientation); this is usually done through three rotations computed from known points in the object space (Euler angles). We describe the resolution of this problem by means of a single rotation through the vanishing line of the image space, completely external to the object, that is, without any contact with it. The results obtained appear to be optimal, and the procedure is simple and of great utility, since no points over the object are required, which is very useful in situations where access is difficult.
Portable laser speckle perfusion imaging system based on digital signal processor.
Tang, Xuejun; Feng, Nengyun; Sun, Xiaoli; Li, Pengcheng; Luo, Qingming
2010-12-01
The ability to monitor blood flow in vivo is of major importance in clinical diagnosis and in basic life-science research. As a noninvasive full-field technique without the need for scanning, laser speckle contrast imaging (LSCI) is widely used to study blood flow with high spatial and temporal resolution. Current LSCI systems rely on bulky personal computers for image processing, which potentially limits their widespread clinical utility. A portable laser speckle contrast imaging system that does not compromise processing efficiency is crucial for clinical diagnosis. However, the processing of laser speckle contrast images is time-consuming due to the heavy calculation required for enormous high-resolution image data. To address this problem, a portable laser speckle perfusion imaging system based on a digital signal processor (DSP), together with an algorithm suited to the DSP, is described. With a highly integrated DSP and this algorithm, we have markedly reduced the size, weight, and energy consumption of the system while preserving high processing speed. In vivo experiments demonstrate that our portable laser speckle perfusion imaging system can obtain blood flow images at 25 frames per second with a resolution of 640 × 480 pixels. The portable and lightweight design makes it adaptable to a wide variety of application areas such as the research laboratory, operating room, ambulance, and even disaster site.
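The core computation in LSCI is the local speckle contrast K = sigma/mu over a sliding window, which the DSP implements in optimized form; a vectorized sketch is below. The window size and the 1/K² flow index are common conventions in the LSCI literature, not parameters taken from this paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, win=7):
    """K = local standard deviation / local mean over a win x win window."""
    f = frame.astype(float)
    m = uniform_filter(f, win)
    m2 = uniform_filter(f * f, win)
    var = np.maximum(m2 - m * m, 0.0)
    return np.sqrt(var) / np.maximum(m, 1e-9)

def flow_index(frame, win=7):
    """A common relative flow measure: 1 / K^2 (higher = faster flow)."""
    k = speckle_contrast(frame, win)
    return 1.0 / np.maximum(k * k, 1e-9)
```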
Overview of High Speed Close-Up Imaging in an Icing Environment
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Lynch, Christopher J.; Tate, Peter A.
2004-01-01
The Icing Branch and Imaging Technology Center at NASA Glenn Research Center have recently been involved in several projects where high speed close-up imaging was used to investigate water droplet impact/splash, and also ice particle impact/bounce in an icing wind tunnel. The combination of close-up and high speed imaging capabilities was required because the particles being studied were relatively small (d < 1 mm in diameter), and the impact process occurred in a very short time period (t(sub impact) << 1 sec). High speed close-up imaging was utilized to study the dynamics of droplet impact and splash in simulated Supercooled Large Droplet (SLD) icing conditions. The objective of this test was to evaluate the capability of an ultra-high-speed camera system to acquire quantitative information about the impact process (e.g., droplet size, velocity). Imaging data were obtained in an icing wind tunnel for spray cloud MVD > 50 microns. High speed close-up imaging was also utilized to characterize the impact of ice particles on an airfoil with a thermally protected leading edge. The objective of this investigation was to determine whether ice particles tend to "stick" or "bounce" after impact. Imaging data were obtained for cases where the airfoil surface was heated and unheated. Based on the results from this test, follow-on tests were conducted to investigate ice particle impact on the sensing elements of water content measurement devices. This paper will describe the use of the imaging systems to support these experimental investigations, present some representative results, and summarize what was learned about the use of these systems in an icing environment.
HPC enabled real-time remote processing of laparoscopic surgery
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Sapra, Karan; Izard, Ryan; Duffy, Edward; Smith, Melissa C.; Wang, Kuang-Ching; Kwartowitz, David M.
2016-03-01
Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the daVinci-si robotic surgical system. Its video streams generate approximately 360 megabytes of data per second. Real-time processing of this large data stream on a bedside PC (a single- or dual-node setup) is challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. We have implemented and compared the performance of compression, segmentation, and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also enable reliability through replication of computation. We will securely transfer the files to remote HPC clusters utilizing an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, utilizing a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time medical image processing of laparoscopic data.
Reductions in Diagnostic Imaging With High Deductible Health Plans.
Zheng, Sarah; Ren, Zhong Justin; Heineke, Janelle; Geissler, Kimberley H
2016-02-01
Diagnostic imaging utilization grew rapidly over the past 2 decades. It remains unclear whether patient cost-sharing is an effective policy lever to reduce imaging utilization and spending. Using 2010 commercial insurance claims data on >21 million individuals, we compared diagnostic imaging utilization and standardized payments between High Deductible Health Plan (HDHP) and non-HDHP enrollees. Negative binomial models were used to estimate associations between HDHP enrollment and utilization, and were repeated for standardized payments. A hurdle model was used to estimate associations between HDHP enrollment and whether an enrollee had any diagnostic imaging, and then the magnitude of associations for enrollees with imaging. Models with interaction terms were used to estimate associations between HDHP enrollment and imaging by risk score tercile. All models included controls for patient age, sex, geographic location, and health status. HDHP enrollment was associated with a 7.5% decrease in the number of imaging studies and a 10.2% decrease in standardized imaging payments. HDHP enrollees were 1.8 percentage points less likely to use imaging; once an enrollee had at least 1 imaging study, differences in utilization and associated payments were small. Associations between HDHP and utilization were largest in the lowest (least sick) risk score tercile. Increased patient cost-sharing may contribute to reductions in diagnostic imaging utilization and spending. However, increased cost-sharing may not encourage patients to differentiate between high-value and low-value diagnostic imaging services; better patient awareness and education may be a crucial part of any reductions in diagnostic imaging utilization.
NASA Astrophysics Data System (ADS)
Lewis, Adam D.; Katta, Nitesh; McElroy, Austin; Milner, Thomas; Fish, Scott; Beaman, Joseph
2018-04-01
Optical coherence tomography (OCT) has shown promise as a process sensor in selective laser sintering (SLS) due to its ability to yield depth-resolved data not attainable with conventional sensors. However, OCT images of nylon 12 powder and nylon 12 components fabricated via SLS contain artifacts that have not been previously investigated in the literature. A better understanding of light interactions with SLS powder and components is foundational for further research expanding the utility of OCT imaging in SLS and other additive manufacturing (AM) sensing applications. Specifically, in this work, nylon powder and sintered parts were imaged in air and in an index-matching liquid. Subsequent image analysis revealed the cause of "signal-tail" OCT image artifacts to be a combination of interparticle and intraparticle multiple scattering and reflections. Then, the OCT imaging depth of nylon 12 powder and the contrast-to-noise ratio of a sintered part were improved through the use of an index-matching liquid. Finally, polymer crystals were identified as the main source of intraparticle scattering in nylon 12 powder. Implications of these results for future research utilizing OCT in SLS are also given.
Retinal image quality assessment based on image clarity and content
NASA Astrophysics Data System (ADS)
Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim
2016-09-01
Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.
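As a flavor of the wavelet-based features described, the sketch below computes a simple sharpness score as the fraction of energy in the detail subbands of a single-level decomposition; the actual feature set in the paper is richer, and the wavelet choice here is illustrative.

```python
import numpy as np
import pywt

def wavelet_sharpness(img):
    """Detail-to-total energy ratio; blurrier images score lower."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    detail = sum(float(np.sum(c * c)) for c in (cH, cV, cD))
    total = detail + float(np.sum(cA * cA))
    return detail / total if total > 0 else 0.0
```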
NASA Technical Reports Server (NTRS)
Russell, O. R. (Principal Investigator); Nichols, D. A.; Anderson, R.
1977-01-01
The author has identified the following significant results. Evaluation of LANDSAT imagery indicates severe limitations in its utility for surface mine land studies. Image striping resulting from unequal detector response on the satellite degrades image quality to the extent that images at scales larger than 1:125,000 are of limited value for manual interpretation. Computer processing of LANDSAT data to improve image quality is essential; the removal of scan-line striping and enhancement of mine land reflectance data, combined with color composite printing, permits useful photographic enlargements to approximately 1:60,000.
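A minimal illustration of scan-line destriping by per-column gain normalization follows; this is one simple member of the family of corrections the report calls for (operational LANDSAT destriping used per-detector histogram adjustment, which is not reproduced here).

```python
import numpy as np

def destripe(img):
    """Equalize column means to the global mean, assuming stripes arise
    from per-detector gain differences along columns."""
    col_mean = img.mean(axis=0)
    gain = col_mean.mean() / np.maximum(col_mean, 1e-9)
    return img * gain[None, :]
```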
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI, and new imaging technologies such as optical imaging used to obtain functional images. The fusion process requires precisely extracted structural information in order to register the images. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM) on 5 fMRI head image datasets. Then we utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning fashion. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.
Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian
2016-01-20
This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation, and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
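A minimal sketch of the watershed stage follows (scikit-image assumed; the paper's pre-processing and the generalized Hough Transform step are omitted). It splits touching crater-like regions by flooding from local maxima of the distance transform.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_split(binary):
    """binary: boolean foreground mask. Seeds are local maxima of the
    distance transform; the watershed floods outward from those seeds."""
    distance = ndi.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary)
```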
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better-quality analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure for using it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing a distributed PACS is a challenging task in present-day medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
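To make the MapReduce proposal concrete, here is a toy, in-process Python rendering of the paradigm for assembling image slices; Hadoop would distribute the same mapper/reducer across nodes, and the record layout and fusion rule here are invented for illustration.

```python
import numpy as np
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every record and group emitted values by key."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in mapper(rec):
            groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Collapse each key's value list with the reducer."""
    return {key: reducer(key, values) for key, values in groups.items()}

def mapper(tile):                    # key each acquired tile by its slice index
    yield tile["slice"], tile["pixels"]

def reducer(slice_id, tiles):        # toy fusion: maximum over overlapping tiles
    return np.maximum.reduce(tiles)

records = [{"slice": i % 3, "pixels": np.random.rand(8, 8)} for i in range(9)]
slices = reduce_phase(map_phase(records, mapper), reducer)
```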
NASA Astrophysics Data System (ADS)
Szu, Harold H.
1993-09-01
Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing a real time medical image diagnosis. An algorithm known as the self-reference matched filter that emulates the spatio-temporal integration ability of the human visual system might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixels' relationship; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for the intra-inter cluster-segregation respectively useful for top-down ANN designs.
Imaging Systems for Size Measurements of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Shiotani, B.; Scruggs, T.; Toledo, R.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.
2017-01-01
The overall objective of the DebriSat project is to provide data to update existing standard spacecraft breakup models. One of the key sets of parameters used in these models is the physical dimensions of the fragments (i.e., length, average cross-sectional area, and volume). For the DebriSat project, only fragments with at least one dimension greater than 2 mm are collected and processed. Additionally, a significant portion of the fragments recovered from the impact test are needle-like and/or flat plate-like fragments whose heights are almost negligible in comparison to their other dimensions. As a result, two fragment size categories were defined: 2D objects and 3D objects. While measurement systems are commercially available, factors such as measurement rates, system adaptability, size characterization limitations, and equipment costs presented significant challenges to the project, and a decision was made to develop our own size characterization systems. These consist of two automated imaging systems, one referred to as the 3D imaging system and the other as the 2D imaging system. Which imaging system to use depends on the classification of the fragment being measured. Both imaging systems utilize point-and-shoot cameras for image acquisition and create representative point clouds of the fragments. The 3D imaging system utilizes a space-carving algorithm to generate a 3D point cloud, while the 2D imaging system utilizes an edge detection algorithm to generate a 2D point cloud. From the point clouds, the three largest orthogonal dimensions are determined using a convex hull algorithm. For 3D objects, in addition to the three largest orthogonal dimensions, the volume is computed via an alpha-shape algorithm applied to the point clouds. The average cross-sectional area is also computed for 3D objects. Both imaging systems automate the size measurements (image acquisition and image processing), driven by the need to quickly and accurately measure tens of thousands of debris fragments. Moreover, the automated size measurement reduces potential fragment damage/mishandling and improves accuracy and repeatability. As the fragment characterization progressed, it became evident that the imaging systems had to be revised. For example, an additional view was added to the 2D imaging system to capture the height of the 2D object. This paper presents the DebriSat project's imaging systems and calculation techniques in detail, from design and development to maturation. The experiences and challenges are also shared.
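The "three largest orthogonal dimensions from a convex hull" computation can be sketched as follows (SciPy assumed; the project's exact definitions and the alpha-shape volume step are not reproduced, and the input is assumed non-degenerate).

```python
import numpy as np
from scipy.spatial import ConvexHull

def orthogonal_dimensions(points):
    """points: (N, 3) cloud with at least 4 non-coplanar points.
    Returns (L1, L2, L3): the longest hull-vertex span, the longest span
    orthogonal to it, and the extent orthogonal to both."""
    verts = points[ConvexHull(points).vertices]
    dist = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=2)
    i, j = np.unravel_index(dist.argmax(), dist.shape)
    e1 = (verts[i] - verts[j]) / dist[i, j]
    proj = verts - np.outer(verts @ e1, e1)          # remove the e1 component
    d2 = np.linalg.norm(proj[:, None, :] - proj[None, :, :], axis=2)
    k, l = np.unravel_index(d2.argmax(), d2.shape)
    e2 = (proj[k] - proj[l]) / d2[k, l]
    e3 = np.cross(e1, e2)                            # orthogonal to e1 and e2
    span3 = verts @ e3
    return dist[i, j], d2[k, l], span3.max() - span3.min()
```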
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
Image retrieval and processing system version 2.0 development work
NASA Technical Reports Server (NTRS)
Slavney, Susan H.; Guinness, Edward A.
1991-01-01
The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that is compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, R.
This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children's hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion on the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy. To become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging. To learn solutions for consistent post-processing quality in pediatric digital radiography. To understand the key components of an effective MRI safety and quality program for the pediatric practice.
WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.
Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X
2011-03-30
We present a new method for registering whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired on separate scanners at different times, and the inherent differences in the imaging protocols produced significant nonrigid changes between the two acquisitions in addition to heterogeneous image characteristics. In this situation, we utilized both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results on nine rat image sets using the M-Hausdorff distance similarity measure. We demonstrate improved performance compared to standard methods such as demons and normalized mutual information-based nonrigid FFD registration.
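For reference, the standard (unweighted) demons algorithm the authors build on is available in SimpleITK; a sketch with placeholder file names follows, without the paper's emission-image weighting.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("ct.nii", sitk.sitkFloat32)             # placeholder paths
moving = sitk.ReadImage("pet_transmission.nii", sitk.sitkFloat32)
moving = sitk.HistogramMatching(moving, fixed)                 # cope with intensity differences

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)      # Gaussian smoothing of the displacement field
displacement = demons.Execute(fixed, moving)

transform = sitk.DisplacementFieldTransform(displacement)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "pet_registered.nii")
```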
Ore minerals textural characterization by hyperspectral imaging
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Picone, Nicoletta; Serranti, Silvia
2013-02-01
The utilization of hyperspectral detection devices for natural resources mapping/exploitation through remote sensing techniques dates back to the early 1970s. From the first devices utilizing a one-dimensional profile spectrometer, HyperSpectral Imaging (HSI) devices have been developed. Thus, from specific customized devices originally developed by governmental agencies (e.g. NASA, specialized research labs, etc.), much HSI-based equipment is today commercially available. Parallel to this huge increase in hyperspectral systems development and manufacturing addressed to airborne applications, a strong increase also occurred in developing HSI-based devices for "ground" utilization, that is, sensing units able to operate inside a laboratory, a processing plant, and/or in an open field. Thanks to this diffusion, more and more applications have been developed and tested in recent years in the materials sectors. Such an approach, when successful, is quite challenging, being usually reliable, robust, and characterised by lower costs compared with those usually associated with commonly applied off-line and/or on-line analytical approaches. In this paper such an approach is presented with reference to ore minerals characterization. According to the different phases and stages of ore minerals and products characterization, and starting from the analysis of the detected hyperspectral signatures, it is possible to derive useful information about mineral flow stream properties and their physical-chemical attributes. This last aspect can be utilized to define innovative process mineralogy strategies and to implement on-line procedures at the processing level. The present study discusses the effects related to the adoption of different hardware configurations, the utilization of different logics to perform the analysis, and the selection of different algorithms according to the different characterization, inspection, and quality control actions to apply.
Sinha, S K; Karray, F
2002-01-01
Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. An approach for recognition and classification of pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step the scanned images of the pipe are analyzed and crack features are extracted. In the classification step a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation of feature values and the backpropagation network, with its learning ability, will show good classification efficiency.
NASA Astrophysics Data System (ADS)
Murakoshi, Dai; Hirota, Kazuhiro; Ishii, Hiroyasu; Hashimoto, Atsushi; Ebata, Tetsurou; Irisawa, Kaku; Wada, Takatsugu; Hayakawa, Toshiro; Itoh, Kenji; Ishihara, Miya
2018-02-01
Photoacoustic (PA) imaging technology is expected to be applied to clinical assessment of peripheral vascularity. We started a clinical evaluation with the prototype PA imaging system we recently developed. The prototype PA imaging system was composed of an in-house Q-switched Alexandrite laser system that emits short-pulsed laser light at a 750 nm wavelength, a handheld ultrasound transducer with integrated illumination optics, and signal processing for PA image reconstruction implemented in the clinical ultrasound (US) system. For the purpose of quantitative assessment of PA images, an image analyzing function has been developed and applied to clinical PA images. In this analyzing function, vascularity, derived from the PA signal intensity falling within a prescribed threshold range, was defined as a numerical index of vessel fulfillment and calculated for a prescribed region of interest (ROI). The skin surface was automatically detected by utilizing the B-mode image acquired simultaneously with the PA image. The skin-surface position is utilized to place the ROI objectively while avoiding unwanted signals such as artifacts imposed by melanin pigment in the epidermal layer, which absorbs the laser emission and generates strong PA signals. Multiple images were available to support the scanned image set for 3D viewing. PA images of several fingers of patients with systemic sclerosis (SSc) were quantitatively assessed. Since the artifact region is trimmed off in the PA images, the visibility of vessels with rather low PA signal intensity in the 3D projection image was enhanced and the reliability of the quantitative analysis was improved.
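The vascularity index described, a thresholded pixel fraction inside an ROI placed relative to an automatically detected skin surface, can be sketched as below; the function names and the simple first-echo surface-detection rule are illustrative, not the system's actual implementation.

```python
import numpy as np

def skin_surface_rows(bmode, echo_thresh):
    """First row per column where the B-mode echo exceeds the threshold
    (-1 where no echo is found); used to place the ROI below the skin."""
    hits = bmode >= echo_thresh
    return np.where(hits.any(axis=0), hits.argmax(axis=0), -1)

def vascularity(pa_img, roi_mask, lo, hi):
    """Fraction of ROI pixels whose PA intensity lies within [lo, hi]."""
    roi = pa_img[roi_mask]
    return np.count_nonzero((roi >= lo) & (roi <= hi)) / roi.size
```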
NASA Astrophysics Data System (ADS)
Pape, Dennis R.
1990-09-01
The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.
Containerless automated processing of intermetallic compounds and composites
NASA Technical Reports Server (NTRS)
Johnson, D. R.; Joslin, S. M.; Reviere, R. D.; Oliver, B. F.; Noebe, R. D.
1993-01-01
An automated containerless processing system has been developed to directionally solidify high temperature materials, intermetallic compounds, and intermetallic/metallic composites. The system incorporates a wide range of ultra-high purity chemical processing conditions. The utilization of image processing for automated control negates the need for temperature measurements for process control. The list of recent systems that have been processed includes Cr, Mo, Mn, Nb, Ni, Ti, V, and Zr containing aluminides. Possible uses of the system, process control approaches, and properties and structures of recently processed intermetallics are reviewed.
Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.
Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F
2012-04-01
This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast to noise ratio (CNR) enhancement, or reformatting to standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). 15 hippocampi were manually traced four times on ten infant images by 2 independent raters on the original T2 image, as well as images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). Original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
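The CNR comparison rests on a standard definition, sketched below; the equal-weight T2/PD combination shown is a toy stand-in, as the study's exact weighting is not stated in the abstract.

```python
import numpy as np

def cnr(img, tissue_a, tissue_b, noise_mask):
    """Contrast-to-noise ratio between two tissue masks, normalized by
    the standard deviation in a noise/background region."""
    return abs(img[tissue_a].mean() - img[tissue_b].mean()) / img[noise_mask].std()

def combine_t2_pd(t2, pd):
    """Toy combination of co-registered T2 and PD echoes to boost CNR."""
    return 0.5 * (t2.astype(float) + pd.astype(float))
```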
PTBS segmentation scheme for synthetic aperture radar
NASA Astrophysics Data System (ADS)
Friedland, Noah S.; Rothwell, Brian J.
1995-07-01
The Image Understanding Group at Martin Marietta Technologies in Denver, Colorado has developed a model-based synthetic aperture radar (SAR) automatic target recognition (ATR) system using an integrated resource architecture (IRA). IRA, an adaptive Markov random field (MRF) environment, utilizes information from image, model, and neighborhood resources to create a discrete, 2D feature-based world description (FBWD). The IRA FBWD features are peak, target, background and shadow (PTBS). These features have been shown to be very useful for target discrimination. The FBWD is used to accrue evidence over a model hypothesis set. This paper presents the PTBS segmentation process utilizing two IRA resources. The image resource (IR) provides generic (the physics of image formation) and specific (the given image input) information. The neighborhood resource (NR) provides domain knowledge of localized FBWD site behaviors. A simulated annealing optimization algorithm is used to construct a `most likely' PTBS state. Results on simulated imagery illustrate the power of this technique to correctly segment PTBS features, even when vehicle signatures are immersed in heavy background clutter. These segmentations also suppress sidelobe effects and delineate shadows.
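A generic simulated-annealing optimizer for a Potts-style label field (e.g., the four PTBS labels) is sketched below; the unary costs, neighborhood weight, and cooling schedule stand in for the IRA resources and are not the paper's actual model.

```python
import numpy as np

def anneal_labels(unary, beta=1.0, t0=3.0, cool=0.95, sweeps=50, seed=0):
    """unary: (H, W, L) per-site label costs (e.g., -log likelihoods for
    peak/target/background/shadow). Metropolis updates with a Potts
    smoothness prior that rewards agreeing 4-neighbors."""
    rng = np.random.default_rng(seed)
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)
    T = t0
    for _ in range(sweeps):
        for _ in range(H * W):
            y, x = int(rng.integers(H)), int(rng.integers(W))
            new, old = int(rng.integers(L)), labels[y, x]
            if new == old:
                continue
            delta = unary[y, x, new] - unary[y, x, old]
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < H and 0 <= nx < W:
                    delta += beta * (int(labels[ny, nx] != new)
                                     - int(labels[ny, nx] != old))
            if delta < 0 or rng.random() < np.exp(-delta / T):
                labels[y, x] = new
        T *= cool            # geometric cooling schedule
    return labels
```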
2008-11-01
17°C; red: 17-18°C. Although the image produced in Figure 9 is useful, the image itself is not the most important aspect of the process. Two ... climatology for the Scotian Shelf. The database is intended for use while ashore and also while at sea. Trial Q316 was the maiden voyage of the database ... to the process of data transfer from external sources to the database, and also how the database can be restructured to be more accommodating of ...
A cost analysis comparing xeroradiography to film technics for intraoral radiography.
Gratt, B M; Sickles, E A
1986-01-01
In the United States during 1978, $730 million was spent on dental radiographic services. Currently there are three alternatives for the processing of intraoral radiographs: manual wet-tank processing, automatic film units, or xeroradiography. It was the intent of this study to determine which processing system is the most economical. Cost estimates were based on a usage rate of 750 patient images per month and included a calculation of the average cost per radiograph over a five-year period. Capital costs included initial processing equipment and site preparation. Operational costs included labor, supplies, utilities, darkroom rental, and breakdown costs. Clinical time trials were employed to measure examination times. Maintenance logs were employed to assess labor costs. Indirect costs of training were estimated. Results indicated that xeroradiography was the most cost-effective ($0.81 per image) compared to either automatic film processing ($1.14 per image) or manual processing ($1.35 per image). Variations in projected costs indicated that if a dental practice performs primarily complete-mouth surveys, exposes fewer than 120 radiographs per month, and pays less than $6.50 per hour in wages, then manual (wet-tank) processing is the most economical method for producing intraoral radiographs.
MTI science, data products, and ground-data processing overview
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Atkins, William H.; Balick, Lee K.; Borel, Christoph C.; Clodius, William B.; Christensen, R. Wynn; Davis, Anthony B.; Echohawk, J. C.; Galbraith, Amy E.; Hirsch, Karen L.; Krone, James B.; Little, Cynthia K.; McLachlan, Peter M.; Morrison, Aaron; Pollock, Kimberly A.; Pope, Paul A.; Novak, Curtis; Ramsey, Keri A.; Riddle, Emily E.; Rohde, Charles A.; Roussel-Dupre, Diane C.; Smith, Barham W.; Smith, Kathy; Starkovich, Kim; Theiler, James P.; Weber, Paul G.
2001-08-01
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
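A bare-bones version of a GLM density estimator of this kind is sketched below, with synthetic stand-ins for the image and acquisition features (statsmodels assumed; the study's actual covariates and its leave-one-woman-out loop are not reproduced).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 81                                    # number of cases, as in the study
std_gray = rng.normal(0.20, 0.05, n)      # hypothetical gray-level feature
kvp = rng.normal(29.0, 2.0, n)            # hypothetical acquisition physics
age = rng.normal(55.0, 8.0, n)            # hypothetical patient characteristic
pd_pct = 40 + 60 * std_gray - 0.2 * age + rng.normal(0, 3, n)  # synthetic target

X = sm.add_constant(np.column_stack([std_gray, kvp, age]))
fit = sm.GLM(pd_pct, X, family=sm.families.Gaussian()).fit()
predicted_pd = fit.predict(X)             # in practice: refit per held-out woman
```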
NASA Astrophysics Data System (ADS)
Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing
2018-02-01
Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD, such as Mild Cognitive Impairment (MCI), may be most effective at decelerating AD and are thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.
Reflectometric measurement of plasma imaging and applications
NASA Astrophysics Data System (ADS)
Mase, A.; Ito, N.; Oda, M.; Komada, Y.; Nagae, D.; Zhang, D.; Kogi, Y.; Tobimatsu, S.; Maruyama, T.; Shimazu, H.; Sakata, E.; Sakai, F.; Kuwahara, D.; Yoshinaga, T.; Tokuzawa, T.; Nagayama, Y.; Kawahata, K.; Yamaguchi, S.; Tsuji-Iio, S.; Domier, C. W.; Luhmann, N. C., Jr.; Park, H. K.; Yun, G.; Lee, W.; Padhi, S.; Kim, K. W.
2012-01-01
Progress in microwave and millimeter-wave technologies has made possible advanced diagnostics applicable to various fields, such as plasma diagnostics, radio astronomy, alien substance detection, airborne and spaceborne imaging radars called synthetic aperture radars, and living-body measurements. Transmission, reflection, scattering, and radiation processes of electromagnetic waves are utilized as diagnostic tools. In this report we focus on reflectometric measurements and their applications to biological signals (vital sign detection and breast cancer detection) as well as plasma diagnostics, specifically by use of imaging techniques and the ultra-wideband radar technique.
Biology and therapy of fibromyalgia. Functional magnetic resonance imaging findings in fibromyalgia
Williams, David A; Gracely, Richard H
2006-01-01
Techniques in neuroimaging such as functional magnetic resonance imaging (fMRI) have helped to provide insights into the role of supraspinal mechanisms in pain perception. This review focuses on studies that have applied fMRI in an attempt to gain a better understanding of the mechanisms involved in the processing of pain associated with fibromyalgia. This article provides an overview of the nociceptive system as it functions normally, reviews functional brain imaging methods, and integrates the existing literature utilizing fMRI to study central pain mechanisms in fibromyalgia. PMID:17254318
Ink-constrained halftoning with application to QR codes
NASA Astrophysics Data System (ADS)
Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary
2014-01-01
This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for the examples given in the paper. Numerous examples of QR codes with embedded images are included.
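A much-simplified rendering of the idea: halftone the secondary image pixel by pixel while pinning the center pixel of every module to its QR bit, since decoders sample near module centers. The threshold and module size below are arbitrary; the paper's halftoning is more sophisticated than this sketch.

```python
import numpy as np

def halftone_qr(qr_bits, secondary, px=3):
    """qr_bits: (N, N) array of 0/1 modules (0 = black). secondary:
    grayscale image in [0, 1] of shape (N*px, N*px). Returns a binary image."""
    out = (secondary > 0.5).astype(np.uint8)       # naive threshold halftone
    c = px // 2
    n = qr_bits.shape[0]
    for i in range(n):
        for j in range(n):
            out[i * px + c, j * px + c] = qr_bits[i, j]  # preserve readability
    return out
```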
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor that couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach. A multistage process allows the utilization of the graph-based descriptor in many scenarios, thus allowing the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
In current radiation therapy practice, image quality is still assessed subjectively or by utilizing physically-based metrics. Recently, a methodology for objective task-based image quality (IQ) assessment in radiation therapy was proposed by Barrett et al.1 In this work, we present a comprehensive implementation and evaluation of this new IQ assessment methodology. A modular simulation framework was designed to perform an automated, computer-simulated end-to-end radiation therapy treatment. A fully simulated framework was created that utilizes new learning-based stochastic object models (SOM) to obtain known organ boundaries, generates a set of images directly from the numerical phantoms created with the SOM, and automates the image segmentation and treatment planning steps of a radiation therapy workflow. By use of this computational framework, therapeutic operating characteristic (TOC) curves can be computed and the area under the TOC curve (AUTOC) can be employed as a figure-of-merit to guide optimization of different components of the treatment planning process. The developed computational framework is employed to optimize X-ray CT pre-treatment imaging. We demonstrate that use of the radiation-therapy-based IQ measures leads to different imaging parameters than those obtained by use of physically-based measures.
Temporally flickering nanoparticles for compound cellular imaging and super resolution
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev
2016-03-01
This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity, and it enables the detection of overlapping types of gold nanoparticles (GNPs) at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged using time-modulated laser beams, one per type of GNP labeling the sample, resulting in excitation of temporal flickering of the scattered light at known temporal frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing that extracts the spectral components corresponding to the different modulation frequencies. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm in post-processing can further improve the performance of the proposed approach.
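The core of the lock-in step can be illustrated with a short sketch: each pixel's time trace is projected onto quadrature references at a known modulation frequency, which isolates the GNP type flickering at that frequency and rejects the others. A minimal sketch (illustrative, not the authors' code; the sampling rate and frequencies are assumptions):

```python
import numpy as np

def demodulate(stack, fs, f):
    """stack: (T, H, W) image sequence sampled at fs Hz.
    Returns the amplitude image at modulation frequency f."""
    t = np.arange(stack.shape[0]) / fs
    ref_c = np.cos(2 * np.pi * f * t)[:, None, None]
    ref_s = np.sin(2 * np.pi * f * t)[:, None, None]
    # Project onto quadrature references and take the magnitude,
    # which rejects signals flickering at other frequencies.
    i = (stack * ref_c).mean(axis=0)
    q = (stack * ref_s).mean(axis=0)
    return 2 * np.hypot(i, q)

# e.g. image of GNP type 1: demodulate(frames, fs=100.0, f=5.0)
```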
Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease.
Kogan, Feliks; Fan, Audrey P; Gold, Garry E
2016-12-01
Early detection of musculoskeletal disease leads to improved therapies and patient outcomes, and would benefit greatly from imaging at the cellular and molecular level. As it becomes clear that assessment of multiple tissues and functional processes is often necessary to study the complex pathogenesis of musculoskeletal disorders, the role of multi-modality molecular imaging becomes increasingly important. New positron emission tomography-magnetic resonance imaging (PET-MRI) systems make it possible to combine high-resolution MRI with simultaneous molecular information from PET to study the multifaceted processes involved in numerous musculoskeletal disorders. In this article, we aim to outline the potential clinical utility of hybrid PET-MRI for non-oncologic musculoskeletal diseases. We summarize current applications of PET molecular imaging in osteoarthritis (OA), rheumatoid arthritis (RA), metabolic bone diseases and neuropathic peripheral pain. Advanced MRI approaches that reveal biochemical and functional information offer complementary assessment in soft tissues. Additionally, we discuss technical considerations for hybrid PET-MR imaging including MR attenuation correction, workflow, radiation dose, and quantification.
TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.
Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D
2018-05-08
Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.
Hybrid vision activities at NASA Johnson Space Center
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1990-01-01
NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
Direct Observations of Graphene Dispersed in Solution by Twilight Fluorescence Microscopy.
Matsuno, Yutaka; Sato, Yu-Uya; Sato, Hikaru; Sano, Masahito
2017-06-01
Graphene and graphene oxide (GO) in solution were directly observed by a newly developed twilight fluorescence (TwiF) microscopy. A nanocarbon dispersion was mixed with a highly concentrated fluorescent dye solution and placed in a cell with a viewing glass at the bottom. TwiF microscopy images the nanocarbon material floating within a few hundred μm of the glass surface by utilizing two optical processes to provide a faintly illuminating backlight, and visualizes GO as either a dark image, by absorption and energy transfer processes, or a bright image, by alteration of fluorophore chemistry and autofluorescence. Individual graphene and GO sheets ranging from submicron to submillimeter widths were clearly imaged at different wavelengths, which were selectable based on the dye used. Graphene could be differentiated from GO coexisting in the same solution. Partial transparency revealed layering and network structures. Motions in tumbling flow were recognized in real time. The effect of changing the solvent and the process of adhesion to the glass surface were followed in situ.
NASA Technical Reports Server (NTRS)
Blackwell, R. J.
1982-01-01
Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity with supervised methods and in a more cost effective manner. Image data base technology is used to great advantage in characterizing other contributing effects to water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation and land cover characteristics.
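The unsupervised classification step described here can be sketched with modern tools: cluster the stacked MSS band values per pixel and hand the resulting thematic map to an expert for interpretation. A minimal sketch (scikit-learn stands in for the original software; the class count is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_unsupervised(bands, n_classes=8):
    """bands: list of 2-D arrays, one per MSS band, same shape."""
    h, w = bands[0].shape
    pixels = np.stack([b.ravel() for b in bands], axis=1)  # (H*W, n_bands)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)  # thematic map for expert interpretation
```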
Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde
2017-10-01
Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetry of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R, ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing it for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
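The abstract does not give the exact definition of R, but a fit-free anisotropy measure in the same spirit can be sketched from the second moments of the centered FT magnitude: an isotropic spectrum gives R near 0, a spectrum concentrated along one direction gives R near 1. A minimal sketch, with the eigenvalue-ratio definition as an assumption:

```python
import numpy as np

def alignment_anisotropy(img):
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    F[F.shape[0] // 2, F.shape[1] // 2] = 0.0        # drop the DC term
    y, x = np.indices(F.shape)
    y = y - F.shape[0] / 2.0
    x = x - F.shape[1] / 2.0
    w = F / F.sum()
    # Second-moment (covariance) matrix of the spectral energy.
    cxx = (w * x * x).sum()
    cyy = (w * y * y).sum()
    cxy = (w * x * y).sum()
    evals = np.linalg.eigvalsh(np.array([[cxx, cxy], [cxy, cyy]]))
    return 1.0 - evals[0] / evals[1]                 # R in [0, 1]
```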
The impact of functional imaging on radiation medicine.
Sharma, Nidhi; Neumann, Donald; Macklis, Roger
2008-09-15
Radiation medicine has previously utilized planning methods based primarily on anatomic and volumetric imaging technologies such as CT (Computerized Tomography), ultrasound, and MRI (Magnetic Resonance Imaging). In recent years, it has become apparent that a new dimension of non-invasive imaging studies may hold great promise for expanding the utility and effectiveness of the treatment planning process. Functional imaging such as PET (Positron Emission Tomography) studies and other nuclear medicine based assays are beginning to occupy a larger place in the oncology imaging world. Unlike the previously mentioned anatomic imaging methodologies, functional imaging allows differentiation between metabolically dead and dying cells and those which are actively metabolizing. The ability of functional imaging to reproducibly select viable and active cell populations in a non-invasive manner is now undergoing validation for many types of tumor cells. Many histologic subtypes appear amenable to this approach, with impressive sensitivity and selectivity reported. For clinical radiation medicine, the ability to differentiate between different levels and types of metabolic activity allows the possibility of risk based focal treatments in which the radiation doses and fields are more tightly connected to the perceived risk of recurrence or progression at each location. This review will summarize many of the basic principles involved in the field of functional PET imaging for radiation oncology planning and describe some of the major relevant published data behind this expanding trend.
NASA Technical Reports Server (NTRS)
Leberl, Franz; Karspeck, Milan; Millot, Michel; Maurice, Kelly; Jackson, Matt
1992-01-01
This final report summarizes the work done from mid-1989 until January 1992 to develop a prototype set of tools for the analysis of EOS-type images. Such images are characterized by great multiplicity and quantity. A single 'snapshot' of EOS-type imagery may contain several hundred component images, so that at a particular pixel one finds multiple gray values. A prototype EOS sensor, AVIRIS, has 224 gray values at each pixel. The work focused on the ability to utilize very large images and continuously roam through those images, zoom, and hold more than one black and white or color image, for example for stereo viewing or for image comparisons. A second focus was the utilization of so-called 'image cubes', where multiple images need to be co-registered and then jointly analyzed, viewed, and manipulated. The target computer platform that was selected was a high-performance graphics superworkstation, the Stardent 3000. This particular platform offered many graphics tools such as the Application Visualization System (AVS) or Dore, but it lacked commercial third-party software for relational data bases, image processing, etc. The project was able to cope with these limitations, and a phase-3 activity is currently being negotiated to port the software and enhance it for use with a novel graphics superworkstation to be introduced into the market in the Spring of 1993.
Lens-based wavefront sensorless adaptive optics swept source OCT
NASA Astrophysics Data System (ADS)
Jian, Yifan; Lee, Sujin; Ju, Myeong Jin; Heisler, Morgan; Ding, Weiguang; Zawadzki, Robert J.; Bonora, Stefano; Sarunic, Marinko V.
2016-06-01
Optical coherence tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. Although the axial resolution of an OCT system, which is a function of the light source bandwidth, is sufficient to resolve retinal features at a micrometer scale, the lateral resolution is dependent on the delivery optics and is limited by ocular aberrations. Through the combination of wavefront sensorless adaptive optics and the use of dual deformable transmissive optical elements, we present a compact lens-based OCT system at an imaging wavelength of 1060 nm for high resolution retinal imaging. We utilized a commercially available variable focal length lens to correct for the wide range of defocus commonly found in patients' eyes, and a novel multi-actuator adaptive lens for aberration correction, to achieve near diffraction limited imaging performance at the retina. With a parallel processing computational platform, high resolution cross-sectional and en face retinal image acquisition and display was performed in real time. In order to demonstrate the system functionality and clinical utility, we present images of the photoreceptor cone mosaic and other retinal layers acquired in vivo from research subjects.
A CAD system and quality assurance protocol for bone age assessment utilizing digital hand atlas
NASA Astrophysics Data System (ADS)
Gertych, Arakadiusz; Zhang, Aifeng; Ferrara, Benjamin; Liu, Brent J.
2007-03-01
Determination of bone age assessment (BAA) in pediatric radiology is a task based on detailed analysis of the patient's left hand X-ray. The current standard utilized in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) utilizes automatic analysis of the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development, which are then converted to a bone age value utilizing knowledge from the digital hand atlas (DHA). This also allows the system to provide BAA results that resemble the current clinical approach. All developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical workflow analysis. For this purpose a quality assurance protocol (QAP) has been developed. Implementation of the QAP helped to make the CAD more robust and to find images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with QAP and the comparison of the CAD results with the chronological age of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.
Wei, Ning; You, Jia; Friehs, Karl; Flaschel, Erwin; Nattkemper, Tim Wilhelm
2007-08-15
Fermentation industries would benefit from on-line monitoring of important parameters describing cell growth such as cell density and viability during fermentation processes. For this purpose, an in situ probe has been developed, which utilizes a dark field illumination unit to obtain high contrast images with an integrated CCD camera. To test the probe, brewer's yeast Saccharomyces cerevisiae is chosen as the target microorganism. Images of the yeast cells in the bioreactors are captured, processed, and analyzed automatically by means of mechatronics, image processing, and machine learning. Two support vector machine based classifiers are used for separating cells from background, and for distinguishing live from dead cells afterwards. The evaluation of the in situ experiments showed strong correlation between results obtained by the probe and those by widely accepted standard methods. Thus, the in situ probe has been proved to be a feasible device for on-line monitoring of both cell density and viability with high accuracy and stability. (c) 2007 Wiley Periodicals, Inc.
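The two-stage classification described here can be sketched with scikit-learn standing in for the authors' implementation (the feature choices and kernels are assumptions): one SVM separates cell objects from background, and a second labels each confirmed cell live or dead.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stage 1: candidate object vs. background; stage 2: live vs. dead.
cell_vs_background = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
live_vs_dead = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Training on labeled examples (hypothetical feature arrays):
# cell_vs_background.fit(X_candidates, y_is_cell)
# live_vs_dead.fit(X_cells, y_alive)

def classify_frame(candidate_feats, cell_feats):
    """candidate_feats: per-object features (e.g. size, contrast);
    cell_feats: features of objects already confirmed as cells."""
    is_cell = cell_vs_background.predict(candidate_feats)
    viability = live_vs_dead.predict(cell_feats)
    return is_cell, viability
```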
Tang, Joel A.; Dugar, Sneha; Zhong, Guiming; Dalal, Naresh S.; Zheng, Jim P.; Yang, Yong; Fu, Riqiang
2013-01-01
Magnetic resonance imaging provides a noninvasive method for in situ monitoring of electrochemical processes involved in charge/discharge cycling of batteries. Determining how the electrochemical processes become irreversible, ultimately resulting in degraded battery performance, will aid in developing new battery materials and designing better batteries. Here we introduce the use of an alternative in situ diagnostic tool to monitor the electrochemical processes. Utilizing the very large field gradient in the fringe field of a magnet, the stray-field-imaging (STRAFI) technique significantly improves the image resolution. These STRAFI images enable real-time monitoring of the electrodes at the micron level. It is demonstrated by two prototype half-cells, graphite∥Li and LiFePO4∥Li, that the high-resolution 7Li STRAFI profiles allow one to visualize the in situ transfer of Li ions between the electrodes during charge/discharge cycling as well as the formation and changes of irreversible microstructures of the Li components, and particularly reveal a non-uniform Li-ion distribution in the graphite. PMID:24005580
Hassanpour, Saeed; Langlotz, Curtis P
2016-01-01
Imaging utilization has significantly increased over the last two decades, and is only recently showing signs of moderating. To help healthcare providers identify patients at risk for high imaging utilization, we developed a prediction model to recognize high imaging utilizers based on their initial imaging reports. The prediction model uses a machine learning text classification framework. In this study, we used radiology reports from 18,384 patients with at least one abdomen computed tomography study in their imaging record at Stanford Health Care as the training set. We modeled the radiology reports in a vector space and trained a support vector machine classifier for this prediction task. We evaluated our model on a separate test set of 4791 patients. In addition to high prediction accuracy, we aimed to achieve high specificity to identify patients at high risk for high imaging utilization. Our results (accuracy: 94.0%, sensitivity: 74.4%, specificity: 97.9%, positive predictive value: 87.3%, negative predictive value: 95.1%) show that a prediction model can enable healthcare providers to identify in advance patients who are likely to be high utilizers of imaging services. Machine learning classifiers developed from narrative radiology reports are feasible methods to predict imaging utilization. Such systems can be used to identify high utilizers, inform future image ordering behavior, and encourage judicious use of imaging. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
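A vector-space text classifier of this kind can be sketched in a few lines (scikit-learn stands in for the authors' implementation; the TF-IDF weighting and n-gram range are assumptions):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Reports are mapped into a vector space, then a linear SVM separates
# future high utilizers from the rest.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LinearSVC(class_weight="balanced"),
)

# train_reports: list of initial radiology report texts
# train_labels: 1 if the patient later became a high imaging utilizer
# model.fit(train_reports, train_labels)
# predictions = model.predict(test_reports)
```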
Image Display in Local Database Networks
NASA Astrophysics Data System (ADS)
List, James S.; Olson, Frederick R.
1989-05-01
Dearchival of image data in the form of x-ray film presents a major challenge for radiology departments. In highly active referral environments such as tertiary care hospitals, patients may be referred to multiple clinical subspecialists within a very short time. Each clinical subspecialist frequently requires diagnostic image data to complete the diagnosis. This need for image access often interferes with the normal process of film handling and interpretation, subsequently reducing the efficiency of the department. The concept of creating a local image database on individual nursing stations utilizing the AT&T CommView Results Viewing Station (RVS) is being evaluated. Initial physician acceptance has been favorable. Objective measurements of operational productivity enhancements are in progress.
Hyperspectral image classification based on local binary patterns and PCANet
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang
2018-04-01
Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed to a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
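The feature-construction step can be sketched as follows (scikit-image provides the LBP operator; the LBP parameters, window size, and histogram binning are assumptions, and the band selection is taken as given):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_spectral_features(cube, selected_bands, P=8, R=1, win=10):
    """cube: (H, W, B) hyperspectral image; selected_bands: indices
    chosen by LPE. Returns a stacked feature vector per pixel."""
    h, w, _ = cube.shape
    lbp_maps = [local_binary_pattern(cube[:, :, b], P, R, "uniform")
                for b in selected_bands]
    feats = []
    for y in range(h):
        for x in range(w):
            spectral = cube[y, x, :]                     # spectral part
            y0, x0 = max(0, y - win), max(0, x - win)
            # Texture part: LBP histogram in a local window per band.
            texture = [np.histogram(m[y0:y + win, x0:x + win],
                                    bins=P + 2, range=(0, P + 2))[0]
                       for m in lbp_maps]
            feats.append(np.concatenate([spectral, *texture]))
    return np.asarray(feats)
```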
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herchko, S; Ding, G
2016-06-15
Purpose: To develop an accurate, straightforward, and user-independent method for performing light versus radiation field coincidence quality assurance utilizing EPID images, a simple phantom made of readily-accessible materials, and a free software program. Methods: A simple phantom consisting of a blocking tray, graph paper, and high-density wire was constructed. The phantom was used to accurately set the size of a desired light field and imaged on the electronic portal imaging device (EPID). A macro written for use in ImageJ, a free image processing software, was then used to determine the radiation field size, utilizing the high-density wires on the phantom for a pixel-to-distance calibration. The macro also performs an analysis on the measured radiation field utilizing the tolerances recommended in the AAPM Task Group #142 report. To verify the accuracy of this method, radiochromic film was used to qualitatively demonstrate agreement between the film and EPID results, and an additional ImageJ macro was used to quantitatively compare the radiation field sizes measured both with the EPID and film images. Results: The results of this technique were benchmarked against film measurements, which have been the gold standard for testing light versus radiation field coincidence. The agreement between this method and film measurements was within 0.5 mm. Conclusion: Due to the operator dependency associated with tracing light fields and measuring radiation fields by hand when using film, this method allows for a more accurate comparison between the light and radiation fields with minimal operator dependency. Removing the need for radiographic or radiochromic film also eliminates a recurring cost and increases procedural efficiency.
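The measurement idea can be sketched in a few lines of Python (an illustration of the principle, not the authors' ImageJ macro): the radiation field edge is taken at 50% of the maximum signal along a profile, and the wire images provide the pixel-to-millimeter calibration.

```python
import numpy as np

def field_width_mm(profile, wire_px, wire_mm):
    """profile: 1-D EPID signal across the field; wire_px: measured
    pixel distance between calibration wires; wire_mm: their known
    physical separation."""
    # Field edges at the 50%-of-maximum level, a common convention.
    above = np.where(profile >= 0.5 * profile.max())[0]
    width_px = above[-1] - above[0]
    return width_px * (wire_mm / wire_px)   # pixel-to-distance calibration
```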
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for high level products generation. Various parts of the chain were implemented also for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
Fast Fourier transform-based Retinex and alpha-rooting color image enhancement
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.; Gonzales, Analysa M.
2015-05-01
Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing fast Fourier transforms (FFT) within the illumination estimation step of the algorithm, to improve the speed at which Gaussian blurring filters are applied to the original input image. In addition, alpha-rooting can be used as a separate technique to achieve a sharper image, and its results can be fused with those of the retinex algorithm to achieve the best possible image, as measured by the considered color image enhancement measure (EMEC).
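Both ingredients are compact enough to sketch (an illustration, not the authors' code; the sigma values and alpha are assumptions). Multiplying by a Gaussian in the frequency domain replaces a large spatial convolution, which is the speedup the paper exploits; alpha-rooting rescales the FFT magnitude while keeping the phase.

```python
import numpy as np

def fft_gaussian_blur(img, sigma):
    # Frequency response of a spatial Gaussian with std sigma.
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    logI = np.log1p(img)
    return sum(logI - np.log1p(fft_gaussian_blur(img, s))
               for s in sigmas) / len(sigmas)

def alpha_rooting(img, alpha=0.9):
    # Magnitude becomes |F|^alpha, phase unchanged; alpha < 1 sharpens.
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.abs(F) ** (alpha - 1)))
```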
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken at different viewpoints or different times from the same scene. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a single FPGA-based image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction, and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce resource utilization, and also implements BRIEF descriptor extraction and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demand of most real-life computer vision applications.
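The BRIEF stage that the FPGA pipelines is simple to show in software (illustrative sketch; the 256-bit length and sampling pattern are assumptions, and keypoints are assumed at least 15 px from the border):

```python
import numpy as np

rng = np.random.default_rng(0)
PAIRS = rng.integers(-15, 16, size=(256, 4))  # (dy1, dx1, dy2, dx2)

def brief_descriptor(img, kp):
    """256-bit descriptor from pairwise intensity comparisons
    around a keypoint kp = (y, x)."""
    y, x = kp
    return np.array([img[y + p[0], x + p[1]] < img[y + p[2], x + p[3]]
                     for p in PAIRS], dtype=np.uint8)

def match(desc_a, desc_b, max_dist=60):
    """Brute-force Hamming matching, the operation the FPGA parallelizes."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches
```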
Astronomical Image Processing with Hadoop
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-07-01
In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
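The map-reduce structure of coaddition can be sketched in plain Python standing in for Hadoop (a schematic sketch: images are assumed already warped and keyed by sky tile upstream, which is the registration step the pipeline performs):

```python
import numpy as np
from collections import defaultdict

def map_phase(records):
    """records: (tile_id, registered_image) pairs, one per exposure."""
    for tile_id, img in records:
        yield tile_id, img          # emit each exposure keyed by sky tile

def reduce_phase(mapped):
    groups = defaultdict(list)
    for tile_id, img in mapped:
        groups[tile_id].append(img)
    # Per-tile coadd: averaging registered exposures suppresses noise
    # and deepens the effective exposure.
    return {t: np.mean(np.stack(imgs), axis=0)
            for t, imgs in groups.items()}
```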
Magnetic resonance imaging of the pediatric neck: an overview.
Shekdar, Karuna V; Mirsky, David M; Kazahaya, Ken; Bilaniuk, Larissa T
2012-08-01
Evaluation of neck lesions in the pediatric population can be a diagnostic challenge, for which magnetic resonance (MR) imaging is extremely valuable. This article provides an overview of the value and utility of MR imaging in the evaluation of pediatric neck lesions, addressing what the referring clinician requires from the radiologist. Concise descriptions and illustrations of MR imaging findings of commonly encountered pathologic entities in the pediatric neck, including abnormalities of the branchial apparatus, thyroglossal duct anomalies, and neoplastic processes, are given. An approach to establishing a differential diagnosis is provided, and critical points of information are summarized. Copyright © 2012 Elsevier Inc. All rights reserved.
Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.
Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang
2017-03-01
Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T Development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.
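The computation the FPGA architectures accelerate can be written down directly from the definition above: a software reference in the straightforward O(n²) pairwise form, for a small image window (the integer-lag binning is an assumption):

```python
import numpy as np

def semivariogram(window, max_lag):
    """Returns gamma(h) for integer lags h = 1..max_lag, where
    gamma(h) = 0.5 * E[(z_i - z_j)^2] over pixel pairs at distance ~h."""
    h, w = window.shape
    coords = np.array([(y, x) for y in range(h) for x in range(w)])
    z = window.ravel().astype(float)
    sums = np.zeros(max_lag + 1)
    counts = np.zeros(max_lag + 1)
    for i in range(len(z)):
        # Every pair is examined once, the source of the O(n^2) cost.
        d = np.rint(np.hypot(*(coords[i] - coords[i + 1:]).T)).astype(int)
        sq = (z[i] - z[i + 1:]) ** 2
        for lag in range(1, max_lag + 1):
            m = d == lag
            sums[lag] += sq[m].sum()
            counts[lag] += m.sum()
    return 0.5 * sums[1:] / np.maximum(counts[1:], 1)
```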
Karnowski, Karol; Ajduk, Anna; Wieloch, Bartosz; Tamborski, Szymon; Krawiec, Krzysztof; Wojtkowski, Maciej; Szkulmowski, Maciej
2017-06-23
Imaging of living cells based on traditional fluorescence and confocal laser scanning microscopy has delivered an enormous amount of information critical for understanding biological processes in single cells. However, the requirement for a high numerical aperture and fluorescent markers still limits researchers' ability to visualize the cellular architecture without causing short- and long-term photodamage. Optical coherence microscopy (OCM) is a promising alternative that circumvents the technical limitations of fluorescence imaging techniques and provides unique access to fundamental aspects of early embryonic development, without the requirement for sample pre-processing or labeling. In the present paper, we utilized the internal motion of cytoplasm, as well as custom scanning and signal processing protocols, to effectively reduce the speckle noise typical for standard OCM and enable high-resolution intracellular time-lapse imaging. To test our imaging system we used mouse and pig oocytes and embryos and visualized them through fertilization and the first embryonic division, as well as at selected stages of oogenesis and preimplantation development. Because all morphological and morphokinetic properties recorded by OCM are believed to be biomarkers of oocyte/embryo quality, OCM may represent a new chapter in imaging-based preimplantation embryo diagnostics.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to only a single level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
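A schematic approximation of the single-scale idea (not the paper's exact derivation): normalize the weight maps per pixel, smooth them once at a single scale, and blend the inputs directly, replacing the per-level pyramid blend.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_fuse(images, weights, sigma=20):
    """images, weights: lists of equally sized 2-D arrays."""
    W = np.stack(weights).astype(float)
    W = W / np.maximum(W.sum(axis=0), 1e-12)   # normalize per pixel
    # One smoothing pass stands in for the whole pyramid of blends.
    W = np.stack([gaussian_filter(w, sigma) for w in W])
    return sum(w * img for w, img in zip(W, images))
```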
Dong, Biqin; Almassalha, Luay M.; Stypula-Cyrus, Yolanda; Urban, Ben E.; Chandler, John E.; Nguyen, The-Quyen; Sun, Cheng; Zhang, Hao F.; Backman, Vadim
2016-01-01
Visualizing the nanoscale intracellular structures formed by nucleic acids, such as chromatin, in nonperturbed, structurally and dynamically complex cellular systems, will help expand our understanding of biological processes and open the next frontier for biological discovery. Traditional superresolution techniques to visualize subdiffractional macromolecular structures formed by nucleic acids require exogenous labels that may perturb cell function and change the very molecular processes they intend to study, especially at the extremely high label densities required for superresolution. However, despite tremendous interest and demonstrated need, label-free optical superresolution imaging of nucleotide topology under native nonperturbing conditions has never been possible. Here we investigate a photoswitching process of native nucleotides and present a demonstration of subdiffraction-resolution imaging of cellular structures using intrinsic contrast from unmodified DNA, based on the principle of single-molecule photon localization microscopy (PLM). Using DNA-PLM, we achieved nanoscopic imaging of interphase nuclei and mitotic chromosomes, allowing a quantitative analysis of the DNA occupancy level and a subdiffractional analysis of the chromosomal organization. This study may pave a new way for label-free superresolution nanoscopic imaging of macromolecular structures with nucleotide topologies and could contribute to the development of new DNA-based contrast agents for superresolution imaging. PMID:27535934
Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph
NASA Astrophysics Data System (ADS)
Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri
2018-01-01
The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
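The principal component analysis step that VIP performs can be illustrated with plain NumPy (a sketch of the principle, not VIP's API): the stellar point spread function is modeled as a low-rank component of the frame stack and subtracted from each frame.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp=10):
    """cube: (N, H, W) stack of coronagraphic frames. Returns residual
    frames with a low-rank model of the stellar PSF removed."""
    n, h, w = cube.shape
    X = cube.reshape(n, h * w)
    X = X - X.mean(axis=0)                  # remove the mean frame
    # Leading principal components capture the quasi-static stellar PSF.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    model = (X @ Vt[:ncomp].T) @ Vt[:ncomp]
    # In angular differential imaging the residuals are then
    # derotated and stacked to reveal the planet signal.
    return (X - model).reshape(n, h, w)
```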
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
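The Bayesian estimation idea can be sketched as a database retrieval (a schematic sketch; the a priori database, channel error model, and posterior-mean form are assumptions consistent with the description above): the retrieved profile is a weighted average of candidate profiles, weighted by how well their simulated brightness temperatures match the observation.

```python
import numpy as np

def bayesian_retrieval(tb_obs, db_tb, db_profiles, sigma=2.0):
    """tb_obs: observed brightness temperatures, shape (channels,);
    db_tb: (M, channels) simulated Tb for M database profiles;
    db_profiles: (M, P) corresponding geophysical profiles."""
    chi2 = ((db_tb - tb_obs) ** 2).sum(axis=1) / sigma ** 2
    w = np.exp(-0.5 * chi2)        # Gaussian observation-error likelihood
    w /= w.sum()
    # Posterior-mean estimate: expected profile given the observation.
    return np.tensordot(w, db_profiles, axes=1)
```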
Shinbane, Jerold S; Saxon, Leslie A
Advances in imaging technology have led to a paradigm shift from planning of cardiovascular procedures and surgeries requiring the actual patient in a "brick and mortar" hospital to utilization of the digitalized patient in the virtual hospital. The digitalized 3-D representation of individual patient anatomy and physiology from cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) serves as an avatar allowing for virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology previously only accessible during the actual procedure could potentially limit the intrinsic risks related to time in the operating room, cardiac procedural laboratory, and overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of technologies. The virtual patient avatar can also be used for procedural planning, computational modeling of anatomy, simulation of predicted therapeutic result, printing of 3-D models, and augmentation of real-time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical, and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system. Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality, cost effectiveness, and overall value to medical care. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm
NASA Astrophysics Data System (ADS)
Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan
2017-12-01
Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on-board satellites.
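The standard hybrid pipeline that the paper modifies (the paper replaces thresholding/quantization with zero-padding, which is not shown here) can be sketched with PyWavelets and SciPy standing in for the authors' MATLAB implementation; the wavelet choice and the coefficient-keep ratio are assumptions:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_compress(img, keep=0.1):
    # One DWT level separates the image into subbands...
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), "bior4.4")
    # ...then the DCT compacts the energy of the approximation band.
    C = dctn(LL, norm="ortho")
    thresh = np.quantile(np.abs(C), 1 - keep)
    C[np.abs(C) < thresh] = 0.0      # coefficients to be entropy-coded
    LL_rec = idctn(C, norm="ortho")
    return pywt.idwt2((LL_rec, (LH, HL, HH)), "bior4.4")
```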
Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.
1991-01-01
Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
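The object recognition and quantitative analysis phases described here map directly onto connected-component labeling; a minimal sketch with SciPy used for illustration:

```python
import numpy as np
from scipy import ndimage

def analyze_binary(binary):
    """binary: 2-D bool array. Counts objects and computes simple
    geometric properties per object."""
    labels, count = ndimage.label(binary)           # object recognition
    idx = range(1, count + 1)
    areas = ndimage.sum(binary, labels, idx)        # pixels per object
    centroids = ndimage.center_of_mass(binary, labels, idx)
    return count, areas, centroids
```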
Kawooya, Michael G.; Pariyo, George; Malwadde, Elsie Kiguli; Byanyima, Rosemary; Kisembo, Harrient
2012-01-01
Objectives: Uganda has limited health resources, and improving the performance of personnel involved in imaging is necessary for efficiency. The objectives of the study were to develop and pilot imaging user performance indices, document non-tangible aspects of performance, and propose ways of improving performance. Materials and Methods: This was a cross-sectional survey employing triangulation methodology, conducted in Mulago National Referral Hospital over a period of 3 years from 2005 to 2008. The qualitative study used in-depth interviews, focus group discussions, and self-administered questionnaires to explore clinicians' and radiologists' performance-related views. Results: The study came up with the following indices: appropriate service utilization (ASU), appropriateness of clinician's nonimaging decisions (ANID), and clinical utilization of imaging results (CUI). The ASU, ANID, and CUI were 94%, 80%, and 97%, respectively. The clinicians' requisitioning validity was high (positive likelihood ratio of 10.6), contrasting with a poor validity for detecting those patients not needing imaging (negative likelihood ratio of 0.16). Some requisitions were inappropriate, and some requisitions and reports lacked detail, clarity, and precision. Conclusion: Clinicians perform well at imaging requisition decisions, but there are issues in imaging requisitioning and reporting that need to be addressed to improve performance. PMID:23230543
NASA Technical Reports Server (NTRS)
Christenson, J. W.; Lachowski, H. M.
1977-01-01
LANDSAT digital multispectral scanner data, in conjunction with supporting ground truth, were investigated to determine their utility in delineation of urban-rural boundaries. The digital data for the metropolitan areas of Washington, D.C.; Austin, Texas; and Seattle, Washington were processed using an interactive image processing system. Processing focused on identification of major land cover types typical of the zone of transition from urban to rural landscape, and definition of their spectral signatures. Census tract boundaries were input into the interactive image processing system along with the LANDSAT single and overlaid multiple date MSS data. Results of this investigation indicate that satellite collected information has a practical application to the problem of urban area delineation and to change detection.
Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget
NASA Astrophysics Data System (ADS)
Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong
To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive method for adjusting the processing quality of multiple image stream tasks whose execution times vary widely. The adjustment method completes the worst-case executions of the tasks within a given energy budget, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for tasks executable with arbitrary processing quality, and a near-maximum value for tasks executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves larger reward values, by up to 57%, than the previous method.
Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio
2017-11-06
Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture, in cooperation with image processing technologies, for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.
Digital Longitudinal Tomosynthesis
NASA Astrophysics Data System (ADS)
Rimkus, Daniel Steven
1985-12-01
The purpose of this dissertation was to investigate the clinical utility of digital longitudinal tomosynthesis in radiology. By acquiring a finite group of digital images during a longitudinal tomographic exposure, and processing these images, tomographic planes other than the fulcrum plane can be reconstructed. This process is now termed "tomosynthesis". A prototype system utilizing this technique was developed. Both phantom and patient studies were done with this system. The phantom studies were evaluated by subjective visual criteria and by quantitative analysis of edge sharpness and noise in the reconstructions. Two groups of patients and one volunteer were studied. The first patient group consisted of 8 patients undergoing intravenous urography (IVU). These patients had digital tomography and film tomography of the abdomen. The second patient group consisted of 4 patients with lung cancer admitted to the hospital for laser resection of endobronchial tumor. These patients had mediastinal digital tomograms to evaluate the trachea and mainstem bronchi. The knee of one volunteer was imaged by film tomography and digital tomography. The results of the phantom studies showed that the digital reconstructions accurately produced images of the desired planes. The edge sharpness of the reconstructions approached that of the acquired images. Adequate reconstructions were achieved with as few as 5 images acquired during the exposure, with the quality of the reconstructions improving as the number of images acquired increased. The IVU patients' digital studies had less contrast and spatial resolution than the film tomograms. The single renal lesion visible on the film tomograms was also visible in the digital images. The digital mediastinal studies were felt by several radiologists to be superior to a standard chest x-ray in evaluating the airways. The digital images of the volunteer's knee showed many of the same anatomic features as the film tomogram, but the digital images had less spatial and contrast resolution. With the equipment improvements discussed in the thesis, digital tomography may have an important role in radiology.
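The reconstruction of planes other than the fulcrum plane is classically done by shift-and-add, which the following sketch illustrates (schematic; the geometry factor converting tube offset and plane height to a pixel shift is an assumption): projections are shifted in proportion to the height of the desired plane and averaged, so structures in that plane reinforce while others blur out.

```python
import numpy as np

def reconstruct_plane(projections, tube_positions, shift_per_mm, z_mm):
    """projections: list of 2-D images from one tomographic sweep;
    tube_positions: relative tube offset for each exposure;
    shift_per_mm: assumed geometry factor; z_mm: plane height."""
    acc = np.zeros_like(projections[0], dtype=float)
    for img, pos in zip(projections, tube_positions):
        shift = int(round(pos * shift_per_mm * z_mm))
        acc += np.roll(img, shift, axis=0)  # shift along the tube's travel
    return acc / len(projections)
```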
Multispectral image enhancement processing for microsat-borne imager
NASA Astrophysics Data System (ADS)
Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin
2017-10-01
With the rapid development of remote sensing imaging technology, micro satellites, a class of tiny spacecraft, have appeared during the past few years. Many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, micro satellites weigh less than 100 kilograms, sometimes less than 50 kilograms, roughly the size of a common miniature refrigerator. However, the optical system design can hardly be perfect because of the limits on satellite volume and weight. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: the higher the spatial resolution of the images we obtain, the wider the fields in which we can apply them. Consequently, how to utilize super resolution (SR) and image fusion to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super resolution techniques to address the limited spatial resolution of microsatellites. We test the framework on remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
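As a hedged illustration of the pan-sharpening half of such a framework, the following sketch applies a simple Brovey-style fusion; the actual CX6-02 processing chain is not documented here:

```python
# Sketch: Brovey-style pan-sharpening, one common way to combine a
# high-resolution panchromatic band with upsampled multispectral bands.
import numpy as np

def brovey(pan: np.ndarray, ms: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """pan: (H, W) panchromatic image; ms: (H, W, B) multispectral bands already
    upsampled to the pan grid. Each band is rescaled by pan / intensity."""
    intensity = ms.mean(axis=2) + eps
    return ms * (pan / intensity)[..., None]
```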
Novel image processing method study for a label-free optical biosensor
NASA Astrophysics Data System (ADS)
Yang, Chenhao; Wei, Li'an; Yang, Rusong; Feng, Ying
2015-10-01
Optical biosensors are generally divided into labeled and label-free types; the former mainly comprises fluorescence-labeled and radioactive-labeled methods, of which the fluorescence-labeled method is the more mature in application. The main image processing methods for fluorescence-labeled biosensors include smoothing filters, manual gridding, and constant thresholding. Since some fluorescent molecules may influence the biological reaction, label-free methods have become the main direction of development for optical biosensors. The use of a wider field of view and a larger angle of incidence in the light path, which can effectively improve the sensitivity of a label-free biosensor, also brings more difficulties in image processing compared with fluorescence-labeled biosensors. Otsu's method, widely applied in machine vision, chooses the threshold that minimizes the intraclass variance of the thresholded black and white pixels; as a global threshold segmentation, however, it is limited when the intensity distribution of the image is asymmetrical. In order to handle the irregularity of light intensity on the transducer, we improved the algorithm. In this paper, we present a new image processing algorithm based on a reflectance modulation biosensor platform, which mainly comprises a sliding normalization algorithm for image rectification and an improved Otsu's method for image segmentation, in order to implement automatic recognition of target areas. Finally, we used an adaptive gridding method to extract the target parameters for analysis. These methods improve the efficiency of image processing, reduce human intervention, enhance the reliability of experiments, and lay the foundation for high-throughput label-free optical biosensors.
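A minimal sketch of the sliding-normalization-plus-Otsu idea, assuming a uniform-filter background estimate and a window size chosen for illustration (the authors' exact normalization is not reproduced):

```python
# Sketch: local background normalization followed by Otsu thresholding, in the
# spirit of the paper's sliding-normalization + improved-Otsu pipeline.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

def segment(image: np.ndarray, win: int = 64) -> np.ndarray:
    img = image.astype(np.float64)
    background = uniform_filter(img, size=win)   # smooth illumination estimate
    normalized = img / (background + 1e-6)       # flatten uneven light field
    return normalized > threshold_otsu(normalized)
```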
Plenoptic Ophthalmoscopy: A Novel Imaging Technique.
Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason
2016-11-01
This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.
Real-time field programmable gate array architecture for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2001-01-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped on an FPGA, but it can be implemented on dedicated very-large-scale integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
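For reference, the generic mask convolution that the architecture computes can be written in software as below; the FPGA version pipelines this loop and keeps image rows in registers, which this sequential sketch does not model:

```python
# Sketch: generic mask-image convolution (valid region only), the operation the
# FPGA architecture accelerates with parallel, pipelined modules.
import numpy as np

def convolve2d(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    kh, kw = mask.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * mask)
    return out
```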
Finding-specific display presets for computed radiography soft-copy reading.
Andriole, K P; Gould, R G; Webb, W R
1999-05-01
Much work has been done to optimize the display of cross-sectional modality imaging examinations for soft-copy reading (i.e., window/level tissue presets, and format presentations such as tile and stack modes, four-on-one, nine-on-one, etc). Less attention has been paid to the display of digital forms of the conventional projection x-ray. The purpose of this study is to assess the utility of providing presets for computed radiography (CR) soft-copy display, based not on the window/level settings, but on processing applied to the image optimized for visualization of specific findings, pathologies, etc (i.e., pneumothorax, tumor, tube location). It is felt that digital display of CR images based on finding-specific processing presets has the potential to: speed reading of digital projection x-ray examinations on soft copy; improve diagnostic efficacy; standardize display across examination type, clinical scenario, important key findings, and significant negatives; facilitate image comparison; and improve confidence in and acceptance of soft-copy reading. Clinical chest images are acquired using an Agfa-Gevaert (Mortsel, Belgium) ADC 70 CR scanner and Fuji (Stamford, CT) 9000 and AC2 CR scanners. Those demonstrating pertinent findings are transferred over the clinical picture archiving and communications system (PACS) network to a research image processing station (Agfa PS5000), where the optimal image-processing settings per finding, pathologic category, etc, are developed in conjunction with a thoracic radiologist, by manipulating the multiscale image contrast amplification (Agfa MUSICA) algorithm parameters. Soft-copy display of images processed with finding-specific settings are compared with the standard default image presentation for 50 cases of each category. Comparison is scored using a 5-point scale with the positive scale denoting the standard presentation is preferred over the finding-specific processing, the negative scale denoting the finding-specific processing is preferred over the standard presentation, and zero denoting no difference. Processing settings have been developed for several findings including pneumothorax and lung nodules, and clinical cases are currently being collected in preparation for formal clinical trials. Preliminary results indicate a preference for the optimized-processing presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.
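The mechanism of a finding-specific preset can be sketched as a lookup from preset name to display mapping; the preset values below are invented placeholders, since the MUSICA parameters themselves are not given:

```python
# Sketch: applying a named display preset as a windowing operation. The values
# in PRESETS are hypothetical; real finding-specific presets would carry the
# tuned multiscale-contrast parameters developed with the radiologist.
import numpy as np

PRESETS = {"pneumothorax": (0.30, 0.70), "nodule": (0.40, 0.55)}  # hypothetical

def apply_preset(image: np.ndarray, name: str) -> np.ndarray:
    lo, hi = PRESETS[name]                        # normalized window bounds
    img = image.astype(np.float64) / image.max()
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```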
Partial differential equation transform — Variational formulation and Fourier analysis
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Nonlinear partial differential equation (PDE) models are established approaches for image/signal processing, data analysis and surface construction. Most previous geometric PDEs are utilized as low-pass filters which give rise to image trend information. In an earlier work, we introduced mode decomposition evolution equations (MoDEEs), which behave like high-pass filters and are able to systematically provide intrinsic mode functions (IMFs) of signals and images. Due to their tunable time-frequency localization and perfect reconstruction, the operation of MoDEEs is called a PDE transform. By appropriate selection of PDE transform parameters, we can tune IMFs into trends, edges, textures, noise etc., which can be further utilized in the secondary processing for various purposes. This work introduces the variational formulation, performs the Fourier analysis, and conducts biomedical and biological applications of the proposed PDE transform. The variational formulation offers an algorithm to incorporate two image functions and two sets of low-pass PDE operators in the total energy functional. Two low-pass PDE operators have different signs, leading to energy disparity, while a coupling term, acting as a relative fidelity of two image functions, is introduced to reduce the disparity of two energy components. We construct variational PDE transforms by using Euler-Lagrange equation and artificial time propagation. Fourier analysis of a simplified PDE transform is presented to shed light on the filter properties of high order PDE transforms. Such an analysis also offers insight on the parameter selection of the PDE transform. The proposed PDE transform algorithm is validated by numerous benchmark tests. In one selected challenging example, we illustrate the ability of PDE transform to separate two adjacent frequencies of sin(x) and sin(1.1x). Such an ability is due to PDE transform’s controllable frequency localization obtained by adjusting the order of PDEs. The frequency selection is achieved either by diffusion coefficients or by propagation time. Finally, we explore a large number of practical applications to further demonstrate the utility of proposed PDE transform. PMID:22207904
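The frequency-selection behavior described above can be sketched with a Fourier-domain propagator for the high-order low-pass PDE u_t = -(-Δ)^m u; the order and propagation time below are illustrative choices, not the paper's parameters:

```python
# Sketch: separating sin(x) from sin(1.1x) with a high-order PDE low-pass.
# Evolving u_t = -(-Lap)^m u for time t multiplies each Fourier mode by
# exp(-t * |k|^(2m)); large m sharpens the cutoff between nearby frequencies.
import numpy as np

x = np.linspace(0, 200 * np.pi, 4096, endpoint=False)
u = np.sin(x) + np.sin(1.1 * x)                        # two nearby frequencies
k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
m, t = 20, 0.1                                         # PDE order, propagation time
kernel = np.exp(-t * np.abs(k) ** (2 * m))             # spectral propagator
lowpass = np.real(np.fft.ifft(np.fft.fft(u) * kernel)) # retains ~sin(x)
highpass = u - lowpass                                 # residual ~sin(1.1x)
```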
NASA Technical Reports Server (NTRS)
Rickard, D. A.; Bodenheimer, R. E.
1976-01-01
Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.
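As a software stand-in for the skeletonizing algorithm these processors implement, the sketch below uses a standard morphological thinning routine rather than Golay's hexagonal-neighborhood formulation:

```python
# Sketch: binary skeletonization of an image region, the operation the Golay
# transform processors perform in hardware on binary data arrays.
import numpy as np
from skimage.morphology import skeletonize

blob = np.zeros((64, 64), dtype=bool)
blob[16:48, 24:40] = True            # simple binary region
skeleton = skeletonize(blob)         # iteratively strips boundary pixels
```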
Reconstruction of biofilm images: combining local and global structural parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk
2014-10-20
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.
An Algorithm to Detect the Retinal Region of Interest
NASA Astrophysics Data System (ADS)
Şehirli, E.; Turan, M. K.; Demiral, E.
2017-11-01
The retina is one of the important layers of the eye; it contains nerve fibers and cells sensitive to colour and light. The retina can be imaged with medical devices such as the fundus camera and the ophthalmoscope, and lesions such as microaneurysms, haemorrhages, and exudates associated with many diseases of the eye can then be detected in the captured images. In computer vision and biomedical areas, studies on automatically detecting lesions of the eyes have been conducted for a long time. In order to automate detection, the concept of ROI may be utilized. ROI, which stands for region of interest, generally serves the purpose of focusing on particular targets. The main concern of this paper is an algorithm, implemented as a software application, that automatically detects the retinal region of interest in different retinal images. The algorithm consists of three stages: pre-processing, detection of the ROI on the processed images, and overlaying the obtained ROI on the input image.
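A minimal sketch of such a pipeline, with threshold and component-selection choices that are assumptions rather than the paper's values:

```python
# Sketch: detect the retinal ROI as the largest bright connected region of a
# fundus image, then overlay it on the input. Threshold and blur are assumed.
import cv2
import numpy as np

img = cv2.imread("fundus.png")
gray = cv2.GaussianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), (9, 9), 0)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)   # drop dark border
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])        # skip background
roi_mask = (labels == largest).astype(np.uint8)
overlay = cv2.bitwise_and(img, img, mask=roi_mask)          # ROI on input image
```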
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain, and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain, and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body, wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Advances in biologically inspired on/near sensor processing
NASA Astrophysics Data System (ADS)
McCarley, Paul L.
1999-07-01
As electro-optic sensors increase in size and frame rate, the data transfer and digital processing resource requirements also increase. In many missions, the spatial area of interest is but a small fraction of the available field of view. Choosing the right region of interest, however, is a challenge and still requires an enormous amount of downstream digital processing resources. In order to filter this ever-increasing amount of data, we look at how nature solves the problem. The Advanced Guidance Division of the Munitions Directorate, Air Force Research Laboratory at Eglin AFB, Florida, has been pursuing research in the area of advanced sensor and image processing concepts based on biologically inspired sensory information processing. A summary of two 'neuromorphic' processing efforts will be presented along with a seeker system concept utilizing this innovative technology. The Neuroseek program is developing a 256 X 256 2-color dual-band IRFPA coupled to an optimized silicon CMOS read-out and processing integrated circuit that provides simultaneous full-frame imaging in the MWIR/LWIR wavebands along with built-in biologically inspired sensor image processing functions. Concepts and requirements for future such efforts will also be discussed.
Identification of nodes and internodes of chopped biomass stems by Image analysis
USDA-ARS?s Scientific Manuscript database
Separating the morphological components of biomass leads to better handling, more efficient processing as well as value added product generation, as these components vary in their chemical composition and can be preferentially utilized. Nodes and internodes of biomass stems have distinct chemical co...
LAND COVER ASSESSMENT OF INDIGENOUS COMMUNITIES IN THE BOSAWAS REGION OF NICARAGUA
Data derived from remotely sensed images were utilized to conduct land cover assessments of three indigenous communities in northern Nicaragua. Historical land use, present land cover and land cover change processes were all identified through the use of a geographic informat...
Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David
2013-08-01
A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.
Image-based automatic recognition of larvae
NASA Astrophysics Data System (ADS)
Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai
2010-08-01
To date, adult insects (imagoes) have been the main objects of research in quarantine pest recognition. However, pests in their larval stage are latent, and larvae spread abroad much more easily with the circulation of agricultural and forest products. This paper presents larvae as new research objects, recognized by means of machine vision, image processing, and pattern recognition. More visual information is retained and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective, and brightness invariance, the scale invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.
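The SIFT feature stage can be sketched with OpenCV (requires opencv-python 4.4 or later, where SIFT is in the main module); the downstream neural network classifier is omitted:

```python
# Sketch: SIFT keypoint extraction on a segmented larva image, the feature
# stage described in the paper; the classifier that follows is left out.
import cv2

img = cv2.imread("larva.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)   # 128-D descriptors
print(len(keypoints), "keypoints")
```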
Observation of FeGe skyrmions by electron phase microscopy with hole-free phase plate
NASA Astrophysics Data System (ADS)
Kotani, Atsuhiro; Harada, Ken; Malac, Marek; Salomons, Mark; Hayashida, Misa; Mori, Shigeo
2018-05-01
We report the application of a hole-free phase plate (HFPP) to the imaging of magnetic skyrmion lattices. Using HFPP imaging, we observed skyrmions in FeGe and succeeded in obtaining phase contrast images that reflect the sample magnetization distribution. According to the Aharonov-Bohm effect, the electron phase is shifted by the magnetic flux due to sample magnetization. Differential processing of the intensity in a HFPP image allows us to reconstruct the magnetization map of the skyrmion lattice. Furthermore, the calculated phase shift due to the magnetization of the thin film was consistent with that measured by an electron holography experiment, which demonstrates that HFPP imaging can be utilized for analysis of magnetic fields and electrostatic potential distributions at the nanoscale.
A unified framework of image latent feature learning on Sina microblog
NASA Astrophysics Data System (ADS)
Wei, Jinjin; Jin, Zhigang; Zhou, Yuan; Zhang, Rui
2015-10-01
Large-scale user-contributed images with texts are rapidly increasing on social media websites such as Sina microblog. However, noise and incomplete correspondence between the images and the texts make precise image retrieval and ranking difficult. In this paper, a hypergraph-based learning framework is proposed for image ranking, which simultaneously utilizes visual features, textual content, and social link information to estimate the relevance between images. Representing each image as a vertex in the hypergraph, complex relationships between images can be reflected exactly. By updating the weights of hyperedges throughout the hypergraph learning process, the effect of different edges can be adaptively modulated in the constructed hypergraph. Furthermore, the popularity degree of the image is employed to re-rank the retrieval results. Comparative experiments on a large-scale Sina microblog dataset demonstrate the effectiveness of the proposed approach.
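As a simplified, ordinary-graph stand-in for the hypergraph learning step (the hyperedge reweighting and multi-cue fusion are not reproduced), relevance can be propagated iteratively over an image affinity matrix:

```python
# Sketch: manifold-style relevance propagation on an image affinity graph.
# This is a simplified stand-in for the paper's hypergraph learning.
import numpy as np

def rank(W: np.ndarray, query: np.ndarray, alpha: float = 0.9, iters: int = 50):
    """W: symmetric affinity matrix; query: 1 at query images, 0 elsewhere."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)      # symmetric normalization
    f = query.astype(np.float64)
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * query  # propagate relevance scores
    return f
```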
Applications of independent component analysis in SAR images
NASA Astrophysics Data System (ADS)
Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping
2009-07-01
The detection of faint, small, and hidden targets in synthetic aperture radar (SAR) images is still an open issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which helps in detecting and recognizing faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible, and that it can improve the SCR of SAR images and increase the detection rate for faint small targets.
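A hedged sketch of the ICA stage using scikit-learn's FastICA on flattened image patches; the enhancement and detection steps that follow in the paper are omitted, and the patch data here is a placeholder:

```python
# Sketch: FastICA on SAR image patches, treating flattened patches as mixed
# observations. scikit-learn's FastICA stands in for the paper's FICA code.
import numpy as np
from sklearn.decomposition import FastICA

patches = np.random.rand(500, 256)        # placeholder: 500 flattened 16x16 patches
ica = FastICA(n_components=16, random_state=0)
sources = ica.fit_transform(patches)      # independent components per patch
```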
Upadhyay, Jaymin; Geber, Christian; Hargreaves, Richard; Birklein, Frank; Borsook, David
2018-01-01
Assessing clinical pain and metrics related to function or quality of life predominantly relies on patient reported subjective measures. These outcome measures are generally not applicable to the preclinical setting where early signs pointing to analgesic value of a therapy are sought, thus introducing difficulties in animal to human translation in pain research. Evaluating brain function in patients and respective animal model(s) has the potential to characterize mechanisms associated with pain or pain-related phenotypes and thereby provide a means of laboratory to clinic translation. This review summarizes the progress made towards understanding of brain function in clinical and preclinical pain states elucidated using an imaging approach as well as the current level of validity of translational pain imaging. We hypothesize that neuroimaging can describe the central representation of pain or pain phenotypes and yields a basis for the development and selection of clinically relevant animal assays. This approach may increase the probability of finding meaningful new analgesics that can help satisfy the significant unmet medical needs of patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shaping field for deep tissue microscopy
NASA Astrophysics Data System (ADS)
Colon, J.; Lim, H.
2015-05-01
Information capacity of a lossless image-forming system is a conserved property determined by two imaging parameters: the resolution and the field of view (FOV). Adaptive optics improves the former by manipulating the phase, or wavefront, in the pupil plane. Here we describe a homologous approach, namely adaptive field microscopy, which aims to enhance the FOV by controlling the phase, or defocus, in the focal plane. In deep tissue imaging, the useful FOV can be severely limited if the region of interest is buried in a thick sample and not perpendicular to the optic axis. One must acquire many z-scans and reconstruct by post-processing, which exposes tissue to excessive radiation and is also time consuming. We demonstrate that the effective FOV can be substantially enhanced by dynamic control of the image plane. Specifically, the tilt of the image plane is continuously adjusted in situ to match the oblique orientation of the sample plane within tissue. The utility of adaptive field microscopy is tested for imaging tissue with non-planar morphology. Ocular tissue of small animals was imaged by two-photon excited fluorescence. Our results show that adaptive field microscopy can utilize the full FOV. The freedom to adjust the image plane to account for geometrical variations of the sample could be extremely useful for 3D biological imaging. Furthermore, it could facilitate rapid surveillance of cellular features within deep tissue while avoiding photodamage, making it suitable for in vivo imaging.
Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C
1997-01-01
The authors present experimental results of the computerized quantification of tissue structures involved in the reparative process of colonic anastomoses performed by manual suture and by biofragmentable ring. The quantified variables in this study were: oedema fluid, myofiber tissue, blood vessels, and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was utilized to quantify the pathognomonic alterations of the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared with those obtained through traditional diagnosis by two pathologists, serving as a countercheck. The criteria for these diagnoses were defined in levels (absent, light, moderate, and intense), which were compared with the analysis performed by the computer. There was a statistically significant difference between the two techniques: the biofragmentable ring technique exhibited less oedema fluid, more organized myofiber tissue, and a higher number of elongated cellular nuclei than the manual suture technique. The analysis of histometric variables through computational image processing was considered efficient and powerful for quantifying the main inflammatory and reparative tissue changes.
All-passive pixel super-resolution of time-stretch imaging
Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.
2017-01-01
Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2-5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s). The technique is thus effective for high-throughput label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
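The core pixel-SR idea, interleaving frames with known subpixel offsets onto a finer grid, can be sketched in one dimension; real use estimates the offsets from the data, whereas they are given here:

```python
# Sketch: pixel super-resolution by interleaving subpixel-shifted samples onto
# a finer grid. Offsets are assumed known; 1-D arrays stand in for line scans.
import numpy as np

def interleave(frames, offsets, factor):
    """frames: list of 1-D arrays of length N sampled with subpixel `offsets`
    (in units of the coarse pixel); returns a length N*factor composite.
    Fine positions not covered by any offset remain zero in this sketch."""
    n = len(frames[0])
    fine = np.zeros(n * factor)
    for line, off in zip(frames, offsets):
        idx = (np.arange(n) * factor + int(round(off * factor))) % (n * factor)
        fine[idx] = line               # place samples at their fine positions
    return fine
```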
Development of inorganic resists for electron beam lithography: Novel materials and simulations
NASA Astrophysics Data System (ADS)
Jeyakumar, Augustin
Electron beam lithography is gaining widespread utilization as the semiconductor industry progresses towards both advanced optical and non-optical lithographic technologies for high resolution patterning. Current resist technologies are based on organic systems that are imaged most commonly through chain scission, networking, or a chemically amplified polarity change in the material. Alternative resists based on inorganic systems were developed and characterized in this research for high resolution electron beam lithography, and their interactions with incident electrons were investigated using Monte Carlo simulations. A novel inorganic resist imaging scheme was developed using metal-organic precursors which decompose to form metal oxides upon electron beam irradiation; these can serve as inorganic hard masks for hybrid bilayer inorganic-organic imaging systems and also as directly patternable high resolution metal oxide structures. The electron beam imaging properties of these metal-organic materials were correlated to the precursor structure by studying effects such as interactions between high atomic number species and the incident electrons. Optimal single and multicomponent precursors were designed for utilization as viable inorganic resist materials for sub-50 nm patterning in electron beam lithography. The electron beam imaging characteristics of the most widely used inorganic resist material, hydrogen silsesquioxane (HSQ), were also enhanced using a dual-processing imaging approach with thermal curing as well as a sensitizer-catalyzed imaging approach. The interaction between incident electrons and the high atomic number species contained in these inorganic resists was also studied using Monte Carlo simulations. The resolution attainable using inorganic systems as compared to organic systems can be greater for accelerating voltages greater than 50 keV due to minimized lateral scattering in the high density inorganic systems. The effects of loading nanoparticles in an electron beam resist were also investigated using a newly developed hybrid Monte Carlo approach that accounts for multiple components in a solid film. The resolution of the nanocomposite resist process was found to degrade with increasing nanoparticle loading. Finally, the electron beam patterning of self-assembled monolayers, which were found to primarily utilize backscattered electrons from the high atomic number substrate materials to form images, was also investigated and characterized. It was found that backscattered electrons limit the resolution attainable at low incident electron energies.
NASA Astrophysics Data System (ADS)
Calta, Nicholas P.; Wang, Jenny; Kiss, Andrew M.; Martin, Aiden A.; Depond, Philip J.; Guss, Gabriel M.; Thampy, Vivek; Fong, Anthony Y.; Weker, Johanna Nelson; Stone, Kevin H.; Tassone, Christopher J.; Kramer, Matthew J.; Toney, Michael F.; Van Buuren, Anthony; Matthews, Manyalibo J.
2018-05-01
In situ X-ray-based measurements of the laser powder bed fusion (LPBF) additive manufacturing process produce unique data for model validation and improved process understanding. Synchrotron X-ray imaging and diffraction provide high resolution, bulk sensitive information with sufficient sampling rates to probe melt pool dynamics as well as phase and microstructure evolution. Here, we describe a laboratory-scale LPBF test bed designed to accommodate diffraction and imaging experiments at a synchrotron X-ray source during LPBF operation. We also present experimental results using Ti-6Al-4V, a widely used aerospace alloy, as a model system. Both imaging and diffraction experiments were carried out at the Stanford Synchrotron Radiation Lightsource. Melt pool dynamics were imaged at frame rates up to 4 kHz with a ~1.1 μm effective pixel size and revealed the formation of keyhole pores along the melt track due to vapor recoil forces. Diffraction experiments at sampling rates of 1 kHz captured phase evolution and lattice contraction during the rapid cooling present in LPBF within a ~50 × 100 μm area. We also discuss the utility of these measurements for model validation and process improvement.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system that includes a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that encodes grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore generates 7.0 GB/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
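The first reconstruction step, slicing the digitized serial waveform into one line per laser sweep, can be sketched as a reshape; the line length and frame height below are assumptions:

```python
# Sketch: STEAM frame reconstruction, step one. The digitized serial signal is
# cut into lines of one repetition period and stacked into a 2-D frame.
import numpy as np

def to_frame(signal: np.ndarray, samples_per_line: int, lines: int) -> np.ndarray:
    needed = samples_per_line * lines
    return signal[:needed].reshape(lines, samples_per_line)  # one row per sweep
```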
Specialized CCDs for high-frame-rate visible imaging and UV imaging applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Taylor, Gordon C.; Shallcross, Frank V.; Tower, John R.; Lawler, William B.; Harrison, Lorna J.; Socker, Dennis G.; Marchywka, Mike
1993-11-01
This paper reports recent progress by the authors in two distinct charge coupled device (CCD) technology areas. The first technology area is high frame rate, multi-port, frame transfer imagers. A 16-port, 512 X 512, split frame transfer imager and a 32-port, 1024 X 1024, split frame transfer imager are described. The thinned, backside illuminated devices feature on-chip correlated double sampling, buried blooming drains, and a room temperature dark current of less than 50 pA/cm2, without surface accumulation. The second technology area is vacuum ultraviolet (UV) frame transfer imagers. A developmental 1024 X 640 frame transfer imager with 20% quantum efficiency at 140 nm is described. The device is fabricated in a p-channel CCD process, thinned for backside illumination, and utilizes special packaging to achieve stable UV response.
Danforth, Robert A; Peck, Jerry; Hall, Paul
2003-11-01
Complex impacted third molars present potential treatment complications and possible patient morbidity. Objectives of diagnostic imaging are to facilitate diagnosis, decision making, and enhance treatment outcomes. As cases become more complex, advanced multiplane imaging methods allowing for a 3-D view are more likely to meet these objectives than traditional 2-D radiography. Until recently, advanced imaging options were somewhat limited to standard film tomography or medical CT, but development of cone beam volume tomography (CBVT) multiplane 3-D imaging systems specifically for dental use now provides an alternative imaging option. Two cases were utilized to compare the role of CBVT to these other imaging options and to illustrate how multiplane visualization can assist the pretreatment evaluation and decision-making process for complex impacted mandibular third molar cases.
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRE™ system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test utilizing a custom-designed neural network.
Tradeoff between picture element dimensions and noncoherent averaging in side-looking airborne radar
NASA Technical Reports Server (NTRS)
Moore, R. K.
1979-01-01
An experiment was performed in which three synthetic-aperture images and one real-aperture image were successively degraded in spatial resolution, both retaining the same number of independent samples per pixel and using the spatial degradation to allow averaging of different numbers of independent samples within each pixel. The original and degraded images were provided to three interpreters familiar with both aerial photographs and radar images. The interpreters were asked to grade each image in terms of their ability to interpret various specified features on the image. The numerical interpretability grades were then used as a quantitative measure of the utility of the different kinds of image processing and different resolutions. The experiment demonstrated empirically that the interpretability is related exponentially to the SGL volume which is the product of azimuth, range, and gray-level resolution.
NASA Astrophysics Data System (ADS)
Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan
2017-03-01
Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist human operators in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image, so acquiring accurate 3D data from 2D images is an open issue. FAST, SURF, and SIFT are well-known spatial domain techniques for feature extraction and subsequent image registration to find correspondences between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed, based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in hierarchical fashion using a quad-tree structure. The registration process works from the global to the local level, yielding robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection, and the same can be used in driver assistance systems. Preliminary tests on the Middlebury dataset produced satisfactory results; the execution time for a 413 x 370 stereo pair is approximately 500 ms on a low-cost DSP.
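Classical normalized phase correlation, the building block the proposed method modifies, can be sketched as follows; the hierarchical quad-tree, non-rigid extension is not reproduced:

```python
# Sketch: normalized phase correlation for translational registration. The
# peak of the inverse FFT of the normalized cross-power spectrum gives the
# integer shift between two images.
import numpy as np

def phase_correlate(a: np.ndarray, b: np.ndarray):
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))  # normalized spectrum
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    if dy > a.shape[0] // 2: dy -= a.shape[0]             # unwrap negative shifts
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx                                         # (row, col) displacement
```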
Visualization and recommendation of large image collections toward effective sensemaking
NASA Astrophysics Data System (ADS)
Gu, Yi; Wang, Chaoli; Nemiroff, Robert; Kao, David; Parra, Denis
2016-03-01
In our daily lives, images are among the most commonly found data which we need to handle. We present iGraph, a graph-based approach for visual analytics of large image collections and their associated text information. Given such a collection, we compute the similarity between images, the distance between texts, and the connection between image and text to construct iGraph, a compound graph representation which encodes the underlying relationships among these images and texts. To enable effective visual navigation and comprehension of iGraph with tens of thousands of nodes and hundreds of millions of edges, we present a progressive solution that offers collection overview, node comparison, and visual recommendation. Our solution not only allows users to explore the entire collection with representative images and keywords but also supports detailed comparison for understanding and intuitive guidance for navigation. The visual exploration of iGraph is further enhanced with the implementation of bubble sets to highlight group memberships of nodes, suggestion of abnormal keywords or time periods based on text outlier detection, and comparison of four different recommendation solutions. For performance speedup, multiple graphics processing units and central processing units are utilized for processing and visualization in parallel. We experiment with two image collections and leverage a cluster driving a display wall of nearly 50 million pixels. We show the effectiveness of our approach by demonstrating experimental results and conducting a user study.
Image based automatic water meter reader
NASA Astrophysics Data System (ADS)
Jawas, N.; Indrianto
2018-01-01
A water meter is used as a tool to calculate water consumption. The tool works by utilizing water flow and shows the calculation result on a mechanical digit counter. In everyday practice, an operator manually checks the digit counter periodically and logs the number shown by the water meter to track water consumption. This manual operation is time consuming and prone to human error. Therefore, in this paper we propose an automatic water meter digit reader that works from a digital image. The digit sequence is detected by utilizing contour information from the water meter front panel, and an OCR method is then used to recognize each digit character. Digit sequence detection is an important part of the overall process and determines the success of the whole system. The results are promising, especially in sequence detection.
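A minimal sketch of the contour-based digit localization, with size filters chosen for illustration; the OCR stage itself is left out:

```python
# Sketch: locate digit-shaped contours on the meter face and crop per-digit
# regions in left-to-right order for a downstream OCR classifier.
import cv2

img = cv2.imread("meter.jpg", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
digits = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if 0.3 < w / h < 0.8 and h > 20:               # digit-shaped boxes only
        digits.append((x, img[y:y + h, x:x + w]))
digits = [roi for _, roi in sorted(digits, key=lambda t: t[0])]  # reading order
```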
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focused plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
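The blur-difference de-cluttering step can be sketched as follows: regions that change little under additional blur are treated as already defocused background and suppressed. Kernel size and threshold are illustrative, not the authors' tuned values:

```python
# Sketch: blur-difference de-cluttering of a confocal (in-focus plane) image.
# Pixels that barely change under extra blur are out of focus and masked out.
import numpy as np
from scipy.ndimage import gaussian_filter

def declutter(confocal: np.ndarray, sigma: float = 3.0, thresh: float = 0.02):
    img = confocal.astype(np.float64) / 255.0
    blur_diff = np.abs(img - gaussian_filter(img, sigma))  # small where defocused
    mask = gaussian_filter(blur_diff, sigma) > thresh
    return img * mask                                      # keep in-focus content
```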
A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images
NASA Astrophysics Data System (ADS)
Möller, Manuel; Tuot, Christopher; Sintek, Michael
In the research project THESEUS MEDICO we aim at a system combining medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. Joint efforts therefore have to be made in more than one dimension: object detection processes have to be specified in which abstraction proceeds from low-level image features, through landmark detection utilizing abstract domain knowledge, up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that assists the collaboration of medical experts and computer scientists during development and parameter learning.
Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae
2016-01-01
We demonstrated GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800, and more than 149 times faster than a single core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection. PMID:28018724
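The AMD estimate itself reduces to an intensity-weighted mean arrival time, as sketched below; the per-pixel GPU batching that gives the reported speedup is omitted:

```python
# Sketch: analog mean-delay (AMD) lifetime estimation. The lifetime is the
# intensity-weighted mean arrival time of the fluorescence waveform minus that
# of the instrument response function (IRF).
import numpy as np

def amd_lifetime(waveform: np.ndarray, irf: np.ndarray, t: np.ndarray) -> float:
    mean_delay = np.sum(t * waveform) / np.sum(waveform)
    irf_delay = np.sum(t * irf) / np.sum(irf)
    return mean_delay - irf_delay      # mean-delay difference approximates tau
```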
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Dehoff, Ryan R; Lloyd, Peter D
2013-01-01
Oak Ridge National Laboratory (ORNL) has been utilizing the ARCAM electron beam melting technology to additively manufacture complex geometric structures directly from powder. Although the technology has demonstrated the ability to decrease costs, decrease manufacturing lead-time, and fabricate complex structures that are impossible to fabricate through conventional processing techniques, certification of component quality can be challenging. Because the process involves the continuous deposition of successive layers of material, each layer can be examined without destructively testing the component. However, in-situ process monitoring is difficult due to metallization on inside surfaces caused by evaporation and condensation of metal from the melt pool. This work describes a solution to one of the challenges of continuously imaging the inside of the chamber during the EBM process: the utilization of a continuously moving Mylar film canister. Results are presented related to in-situ process monitoring and how this technique leads to improved mechanical properties and reliability of the process.
Remote sensing. [land use mapping
NASA Technical Reports Server (NTRS)
Jinich, A.
1979-01-01
Various imaging techniques are outlined for use in mapping, land use, and land management in Mexico. Among the techniques discussed are pattern recognition and photographic processing. The utilization of information from remote sensing devices on satellites is studied. Multispectral band scanners are examined, and software, hardware, and other program requirements are surveyed.
There is a need for more efficient and cost-effective methods for identifying, characterizing and prioritizing chemicals which may result in developmental neurotoxicity. One approach is to utilize in vitro test systems which recapitulate the critical processes of nervous system d...
NASA Astrophysics Data System (ADS)
Manoharan, Kodeeswari; Daniel, Philemon
2017-11-01
This paper presents a robust lane detection technique for roads on hilly terrain. The goal is to use image processing strategies to recognize lane lines on structured mountain roads with the help of an improved Hough transform. A vision-based approach is used because it performs well in a wide variety of circumstances, abstracting valuable information compared with other sensors. The proposed strategy processes the live video stream, a succession of frames, and locates the lane markings after passing the frames through various filters and appropriate thresholding. The algorithm is tuned for Indian mountainous curved and paved roads. A computational technique is utilized to discard distracting lines other than the credible lane lines and to display only the dominant lane lines. The technique finds the two lane lines nearest to the vehicle in an image as early as possible. Various video sequences on hilly terrain were tested to verify the effectiveness of our method, and it has shown good performance with a detection accuracy of 91.89%.
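A baseline edge-plus-probabilistic-Hough pipeline of the kind the paper improves on can be sketched with OpenCV; the thresholds and the slope filter that rejects spurious lines are loose assumptions, not the tuned values:

```python
# Sketch: Canny edges + probabilistic Hough transform for lane lines, with a
# simple slope filter to discard near-horizontal clutter.
import cv2
import numpy as np

frame = cv2.imread("road.png")
edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1 + 1e-6)
        if abs(slope) > 0.3:                      # keep plausible lane slopes
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```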
Use of the electro-separation method for improvement of the utility value of winter rapeseeds
NASA Astrophysics Data System (ADS)
Kovalyshyn, S. J.; Shvets, O. P.; Grundas, S.; Tys, J.
2013-12-01
The paper presents the results of a study of the use of electro-separation methods for improvement of the utility value of 5 winter rapeseed cultivars. The process of electro-separation of rapeseed was conducted on a prototype apparatus built at the Laboratory of Application of Electro-technologies in Agriculture, Lviv National Agriculture University. The process facilitated separation of damaged, low quality seeds from the sowing material. The initial mean level of mechanically damaged seeds in the winter rapeseed cultivars studied varied within the range of 15.8-20.1%. Verification of the amount of seeds with mechanical damage was performed on X-ray images of seeds acquired by means of a digital X-ray apparatus. In the course of analysis of the X-ray images, it was noted that the mean level of mechanical damage to the seeds after the electro-separation was in the range of 2.1-3.8%. The application of the method of separation of rapeseeds in the corona discharge field yielded a significant reduction of the level of seeds with mechanical damage. The application of the method in practice may effectively contribute to improvement of the utility value of sowing material or seed material for production of edible oil.
Refining enamel thickness measurements from B-mode ultrasound images.
Hua, Jeremy; Chen, Ssu-Kuang; Kim, Yongmin
2009-01-01
Dental erosion has been growing increasingly prevalent with the rise in consumption of heavy starches, sugars, coffee, and acidic beverages. In addition, various disorders, such as gastroesophageal reflux disease (GERD), produce rapid rates of tooth erosion. The measurement of enamel thickness is important for dentists to assess the progression of enamel loss from all forms of erosion, attrition, and abrasion. Characterizing enamel loss is currently done with various subjective indexes that can be interpreted in different ways by different dentists. Ultrasound has been utilized since the 1960s to determine internal tooth structure, but with mixed results. Via image processing and enhancement, we were able to refine B-mode dental ultrasound images for more accurate enamel thickness measurements. The mean difference between the measured thickness of the occlusal enamel from ultrasound images and corresponding gold-standard CT images improved from 0.55 mm to 0.32 mm with image processing (p = 0.033). The difference also improved from 0.62 to 0.53 mm at the buccal/lingual enamel surfaces, but not significantly (p = 0.38).
Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael
1999-01-01
Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229
Software for visualization, analysis, and manipulation of laser scan images
NASA Astrophysics Data System (ADS)
Burnsides, Dennis B.
1997-03-01
The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis, and manipulation of laser scan images. Specific examples presented are from ongoing efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.
Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation
Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.
2014-01-01
Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets, which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as deviating by no more than 10% from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets, tailoring imaging to less expansive flow distributions to enable even faster imaging. PMID:25071956
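The looping LOO idea can be sketched as a greedy backward elimination over the exposure set; `fit_flow` below is a hypothetical stand-in for the MESI inverse-model fit, and the 10% tolerance mirrors the criterion quoted above.

```python
# Hedged sketch of leave-one-out pruning of camera exposure durations.
import numpy as np

def prune_exposures(exposures, contrast, fit_flow, tol=0.10):
    """exposures: (N,) array; contrast: (N, M) speckle contrast for M regions.
    fit_flow(exposures, contrast) -> (M,) flows; hypothetical MESI fit."""
    reference = fit_flow(exposures, contrast)          # flows from the full set
    subset = list(range(len(exposures)))
    while len(subset) > 1:
        best = None
        for i in subset:
            trial = [j for j in subset if j != i]
            dev = np.median(np.abs(fit_flow(exposures[trial], contrast[trial])
                                   - reference) / reference)
            if dev < tol and (best is None or dev < best[1]):
                best = (i, dev)                        # cheapest exposure to drop
        if best is None:
            break                                      # nothing more fits the tolerance
        subset.remove(best[0])
    return subset                                      # indices of retained exposures
```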
Measuring and imaging diffusion with multiple scan speed image correlation spectroscopy.
Gröner, Nadine; Capoulade, Jérémie; Cremer, Christoph; Wachsmuth, Malte
2010-09-27
The intracellular mobility of biomolecules is determined by transport and diffusion as well as molecular interactions, and is crucial for many processes in living cells. Methods of fluorescence microscopy like confocal laser scanning microscopy (CLSM) can be used to characterize the intracellular distribution of fluorescently labeled biomolecules. Fluorescence correlation spectroscopy (FCS) is used to describe diffusion, transport, and photo-physical processes quantitatively. As an alternative to FCS, spatially resolved measurements of mobilities can be implemented on a CLSM by utilizing the spatio-temporal information inscribed into the image by the scan process, referred to as raster image correlation spectroscopy (RICS). Here we present and discuss an extended approach, multiple scan speed image correlation spectroscopy (msICS), which benefits from the advantages of RICS, i.e., the use of widely available instrumentation and the extraction of spatially resolved mobility information, without the need for a priori knowledge of diffusion properties. In addition, msICS covers a broad dynamic range, generates correlation data comparable to FCS measurements, and allows two-dimensional maps of diffusion coefficients to be derived. We show the applicability of msICS to fluorophores in solution and to free EGFP in living cells.
Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan
2007-02-01
The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.
Image processing analysis of geospatial uav orthophotos for palm oil plantation monitoring
NASA Astrophysics Data System (ADS)
Fahmi, F.; Trianda, D.; Andayani, U.; Siregar, B.
2018-03-01
Unmanned Aerial Vehicles (UAVs) are one of the tools that can be used to monitor palm oil plantations remotely. With geospatial orthophotos, it is possible to identify which parts of the plantation are fertile, where planted crops grow well; which parts are less fertile, where growth is imperfect; and which parts are not growing at all. This information can be obtained quickly from UAV photos. In this study, we utilized an image processing algorithm to process the orthophotos for more accurate and faster analysis. The orthophoto images were processed in Matlab, including classification of fertile, infertile, and dead palm oil plants, using the Gray Level Co-occurrence Matrix (GLCM) method. The GLCM was computed for four direction parameters, at 0°, 45°, 90°, and 135°. From experiments conducted with 30 image samples, the system achieved good accuracy using the features extracted from the matrix as parameters: Contrast, Correlation, Energy, and Homogeneity.
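As a sketch of the feature-extraction step, the four-direction GLCM and the four properties named above can be computed with scikit-image (the functions were spelled greycomatrix/greycoprops in older releases); the distance and quantization level here are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of four-direction GLCM texture features for one image patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """patch: 2-D uint8 image region (e.g., one plant crown)."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # 0, 45, 90, 135 degrees
    glcm = graycomatrix(patch, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()        # average over directions
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
```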
Phenopix: a R package to process digital images of a vegetation cover
NASA Astrophysics Data System (ADS)
Filippa, Gianluca; Cremonese, Edoardo; Migliavacca, Mirco; Galvagno, Marta; Morra di Cella, Umberto; Richardson, Andrew
2015-04-01
Plant phenology is a globally recognized indicator of the effects of climate change on the terrestrial biosphere. Accordingly, new tools to automatically track the seasonal development of a vegetation cover are becoming available and increasingly deployed. Among them, near-continuous digital images are being collected in several networks in the US, Europe, Asia, and Australia in a range of different ecosystems, including agricultural lands, deciduous and evergreen forests, and grasslands. The growing scientific interest in vegetation image analysis highlights the need for easy-to-use, flexible, and standardized processing techniques. In this contribution we illustrate a new open-source package called "phenopix", written in the R language, for processing images of a vegetation cover. The main features include: (i) definition of one or more areas of interest on an image and processing of the pixel information within them; (ii) computation of vegetation indexes based on the red, green, and blue channels; (iii) fitting a curve to the seasonal trajectory of vegetation indexes and extracting relevant dates (aka thresholds) along the seasonal trajectory; and (iv) analysis of image pixels separately to extract spatially explicit phenological information. The utilities of the package are illustrated in detail for two subalpine sites, a grassland and a larch stand at about 2000 m in the Italian Western Alps. The phenopix package is a cost-free and easy-to-use tool for processing digital images of a vegetation cover in a standardized, flexible, and reproducible way. The software is available for download at the R-Forge web site (r-forge.r-project.org/projects/phenopix/).
MIRIADS: miniature infrared imaging applications development system description and operation
NASA Astrophysics Data System (ADS)
Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.
2001-10-01
A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Modular Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.
All-CMOS night vision viewer with integrated microdisplay
NASA Astrophysics Data System (ADS)
Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter
2014-02-01
The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near infrared image capturing with a 128x96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post processing. The display features a 25 μm pixel pitch and a 3.2 mm x 2.4 mm active area, which through magnification presents the virtual image to the user equivalent of a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, a familiar tool for quality control agents.
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved RL algorithm and the gain-controlled residual deconvolution technique. The input image pair consists of a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
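A simplified sketch of the residual-deconvolution flow is given below, with scikit-image's Richardson-Lucy routine standing in for the improved RL step; kernel estimation from the image pair is omitted (the PSF is assumed given), and a crude gradient-based map stands in for the saliency gain map.

```python
# Hedged sketch of deblurring plus gain-controlled residual deconvolution.
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import sobel
from skimage.restoration import richardson_lucy   # num_iter was `iterations` pre-0.19

def deblur_with_residual(blurred, psf, iters=30, gain=0.8):
    """blurred: 2-D float image in [0, 1]; psf: 2-D blur kernel (sums to 1)."""
    base = richardson_lucy(blurred, psf, num_iter=iters)        # preliminary deblur
    residual = blurred - fftconvolve(base, psf, mode="same")    # detail RL left behind
    offset = 1.0                                                # RL needs nonnegative input
    res_rec = richardson_lucy(residual + offset, psf,
                              num_iter=iters, clip=False) - offset
    grad = sobel(base)
    gain_map = gain * (1.0 - grad / (grad.max() + 1e-9))        # damp residual near edges
    return base + gain_map * res_rec                            # final deblurring result
```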
Albrecht, Jessica; Kopietz, Rainer; Frasnelli, Johannes; Wiesmann, Martin; Hummel, Thomas; Lundström, Johan N.
2009-01-01
Almost every odor we encounter in daily life has the capacity to produce a trigeminal sensation. Surprisingly, few functional imaging studies exploring human neuronal correlates of intranasal trigeminal function exist, and results are to some degree inconsistent. We utilized activation likelihood estimation (ALE), a quantitative voxel-based meta-analysis tool, to analyze functional imaging data (fMRI/PET) following intranasal trigeminal stimulation with carbon dioxide (CO2), a stimulus known to exclusively activate the trigeminal system. Meta-analysis tools are able to identify activations common across studies, thereby enabling activation mapping with higher certainty. Activation foci of nine studies utilizing trigeminal stimulation were included in the meta-analysis. We found significant ALE scores, thus indicating consistent activation across studies, in the brainstem, ventrolateral posterior thalamic nucleus, anterior cingulate cortex, insula, precentral gyrus, as well as in primary and secondary somatosensory cortices – a network known for the processing of intranasal nociceptive stimuli. Significant ALE values were also observed in the piriform cortex, insula, and the orbitofrontal cortex, areas known to process chemosensory stimuli, and in association cortices. Additionally, the trigeminal ALE statistics were directly compared with ALE statistics originating from olfactory stimulation, demonstrating considerable overlap in activation. In conclusion, the results of this meta-analysis map the human neuronal correlates of intranasal trigeminal stimulation with high statistical certainty and demonstrate that the cortical areas recruited during the processing of intranasal CO2 stimuli include those outside traditional trigeminal areas. Moreover, through illustrations of the considerable overlap between brain areas that process trigeminal and olfactory information; these results demonstrate the interconnectivity of flavor processing. PMID:19913573
Automatic specular reflections removal for endoscopic images
NASA Astrophysics Data System (ADS)
Tan, Ke; Wang, Bin; Gao, Yuan
2017-07-01
Endoscopic imaging provides a realistic view of the surfaces of organs inside the human body. Owing to the damp internal environment, these surfaces usually have a glossy appearance showing specular reflections. For many computer vision algorithms, the highlights created by specular reflections can become a significant source of error. In this paper, we present a novel method for restoring the specular reflection regions of a single image. The specular restoration process starts by generating a substitute specular-free image with the RPCA method. The specular-removed image is then obtained by using the binary weighting template of the highlight regions as the weight for merging the original specular image and the substitute image. A modified template is further discussed for concealing artificial effects at the edges of specular regions. Experimental results on endoscopic images with specular reflections demonstrate the efficiency of the proposed method compared to existing methods.
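The merging step can be sketched as below; note that OpenCV inpainting is used here as a stand-in for the RPCA-generated specular-free substitute, and the intensity threshold is an illustrative assumption.

```python
# Hedged sketch of highlight detection and template-weighted merging.
import cv2
import numpy as np

def remove_specular(bgr, thresh=230):
    """bgr: uint8 color frame; returns a frame with highlights suppressed."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mask = (gray >= thresh).astype(np.uint8)                   # binary highlight template
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))         # cover the halo around highlights
    substitute = cv2.inpaint(bgr, mask, 5, cv2.INPAINT_TELEA)  # specular-free stand-in
    w = cv2.GaussianBlur(mask.astype(np.float32), (9, 9), 0)[..., None]
    return (w * substitute + (1 - w) * bgr).astype(np.uint8)   # feathered merge
```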
Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A
2015-07-01
The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques, however trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. © 2014 John Wiley & Sons Ltd.
Efficient testing methodologies for microcameras in a gigapixel imaging system
NASA Astrophysics Data System (ADS)
Youn, Seo Ho; Marks, Daniel L.; McLaughlin, Paul O.; Brady, David J.; Kim, Jungsang
2013-04-01
Multiscale parallel imaging--based on a monocentric optical design--promises revolutionary advances in diverse imaging applications by enabling high-resolution, real-time image capture over a wide field-of-view (FOV), including sports broadcasting, wide-field microscopy, astronomy, and security surveillance. The recently demonstrated AWARE-2 is a gigapixel camera consisting of an objective lens and 98 microcameras spherically arranged to capture an image over a FOV of 120° by 50°, using computational image processing to form a composite image of 0.96 gigapixels. Since microcameras are capable of individually adjusting exposure, gain, and focus, true parallel imaging is achieved with a high dynamic range. From the integration perspective, manufacturing and verifying consistent quality of microcameras is key to the successful realization of AWARE cameras. We have developed an efficient testing methodology that utilizes a precisely fabricated dot grid chart as a calibration target to extract critical optical properties such as optical distortion, veiling glare index, and modulation transfer function to validate the imaging performance of microcameras. This approach utilizes an AWARE objective lens simulator, which mimics the actual objective lens but operates with a short object distance suitable for a laboratory environment. Here we describe the principles of the methodologies developed for AWARE microcameras and discuss experimental results with our prototype microcameras. Reference: Brady, D. J., Gehm, M. E., Stack, R. A., Marks, D. L., Kittle, D. S., Golish, D. R., Vera, E. M., and Feller, S. D., "Multiscale gigapixel photography," Nature 486, 386-389 (2012).
Hardware Implementation of a Bilateral Subtraction Filter
NASA Technical Reports Server (NTRS)
Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven
2009-01-01
A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way, even on computers containing the fastest processors, are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value of each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) an image pixel pipeline with a 9×9-pixel window generator; b) an array of processing elements; c) an adder tree; d) a smoothing-and-delaying unit; and e) a subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements. Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique to its position in the window, containing the weight coefficients of the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value/weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3×3 subwindow centered in the 9×9 window. After smoothing, the image is delayed by the additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3×3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
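In software, the data path reduces to a bilateral smooth over the 9×9 window, a 3×3 box smooth, and their difference; a minimal sketch using OpenCV's built-in bilateral filter (in place of the lookup-table hardware) follows, with illustrative sigma values.

```python
# Hedged software sketch of the FPGA bilateral subtraction data path.
import cv2
import numpy as np

def bilateral_subtraction(img, sigma_color=25.0, sigma_space=3.0):
    """img: single-channel image; returns the bilateral-subtracted output."""
    img = img.astype(np.float32)
    bilateral = cv2.bilateralFilter(img, d=9, sigmaColor=sigma_color,
                                    sigmaSpace=sigma_space)   # 9x9 window
    box = cv2.blur(img, (3, 3))                               # 3x3 smoothing
    return box - bilateral                                    # final output
```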
Sensor-based architecture for medical imaging workflow analysis.
Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis
2014-08-01
The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in the clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory, through a network of intelligent sensors. The proposed integration framework follows a SOA hybrid architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository meta-data, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that will then be semantically indexed in the framework engine. The platform was deployed in the Cardiology department of a central hospital, allowing identification of processes' characteristics and users' behaviours that were unknown before the utilization of this solution.
NASA Astrophysics Data System (ADS)
Verma, Sneha K.; Liu, Brent J.; Gridley, Daila S.; Mao, Xiao W.; Kotha, Nikhil
2015-03-01
In previous years we demonstrated an imaging informatics system designed to support multi-institutional research focused on the utilization of proton radiation for treating spinal cord injury (SCI)-related pain. This year we demonstrate an update to the system, with new modules added to perform image processing on evaluation data using immunohistochemistry methods to observe the effects of proton therapy. The overarching goal of the research is to determine the effectiveness of using the proton beam for treating SCI-related neuropathic pain as an alternative to invasive surgical lesioning. The research is a joint collaboration between three major institutes: the University of Southern California (data collection/integration and image analysis), the Spinal Cord Institute, VA Healthcare System, Long Beach (patient subject recruitment), and Loma Linda University and Medical Center (human and preclinical animal studies). The system we are presenting is one of a kind in its capability to integrate a large range of data types, including text data, imaging data, DICOM objects from proton therapy treatment, and pathological data. For multi-institutional studies, keeping data secure and integrated is crucial. Different kinds of data within the study workflow are generated at different stages and by different groups of people, who process and analyze them in order to see hidden patterns within healthcare data from a broader perspective. The uniqueness of our system lies in the fact that it is platform independent and web-based, which makes it very useful in such a large-scale study.
Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F
2016-03-01
Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model improved the performance of the fat/water decomposition, allowing rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ghanta, Sindhu; Shahini Shamsabadi, Salar; Dy, Jennifer; Wang, Ming; Birken, Ralf
2015-04-01
Around 3 trillion vehicle miles are traveled annually on the US transportation system alone. In addition to road traffic safety, maintaining the road infrastructure in sound condition promotes a more productive and competitive economy. Due to the significant financial and human resources required to detect surface cracks by visual inspection, detection of these surface defects is often delayed, resulting in deferred maintenance operations. This paper introduces an automatic system for acquisition, detection, classification, and evaluation of pavement surface cracks by unsupervised analysis of images collected from a camera mounted on the rear of a moving vehicle. A Hessian-based multi-scale filter is utilized to detect ridges in these images at various scales. Post-processing on the extracted features produces statistics of the length, width, and area covered by cracks, which are crucial for roadway agencies to assess pavement quality. This process was applied to three sets of roads with different pavement conditions in the city of Brockton, MA. A manually labeled ground truth dataset is made available to evaluate the algorithm, and the results showed more than 90% segmentation accuracy, demonstrating the feasibility of employing this approach at a larger scale.
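A minimal sketch of the ridge-detection and measurement stages follows, assuming scikit-image's Frangi filter as the Hessian-based multi-scale operator; the paper's exact filter, scales, and thresholds are not reproduced.

```python
# Hedged sketch of Hessian-based multi-scale crack detection and statistics.
import numpy as np
from skimage.filters import frangi
from skimage.measure import label, regionprops

def crack_statistics(gray):
    """gray: 2-D float pavement image, cracks darker than background."""
    ridges = frangi(gray, sigmas=range(1, 6), black_ridges=True)  # multi-scale ridge response
    mask = ridges > 0.05 * ridges.max()                           # illustrative binarization
    stats = []
    for region in regionprops(label(mask)):
        stats.append({"length": region.major_axis_length,         # crack length proxy
                      "width": region.minor_axis_length,          # crack width proxy
                      "area": region.area})                       # covered area in pixels
    return stats
```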
New Directions in 3D Medical Modeling: 3D-Printing Anatomy and Functions in Neurosurgical Planning
Árnadóttir, Íris; Gíslason, Magnús; Ólafsson, Ingvar
2017-01-01
This paper illustrates the feasibility and utility of combining cranial anatomy and brain function on the same 3D-printed model, as evidenced by a neurosurgical planning case study of a 29-year-old female patient with a low-grade frontal-lobe glioma. We herein report the rapid prototyping methodology utilized in conjunction with surgical navigation to prepare and plan a complex neurosurgery. The method introduced here combines CT and MRI images with DTI tractography, while using various image segmentation protocols to 3D model the skull base, tumor, and five eloquent fiber tracts. This 3D model is rapid-prototyped and coregistered with patient images and a reported surgical navigation system, establishing a clear link between the printed model and surgical navigation. This methodology highlights the potential for advanced neurosurgical preparation, which can begin before the patient enters the operation theatre. Moreover, the work presented here demonstrates the workflow developed at the National University Hospital of Iceland, Landspitali, focusing on the processes of anatomy segmentation, fiber tract extrapolation, MRI/CT registration, and 3D printing. Furthermore, we present a qualitative and quantitative assessment for fiber tract generation in a case study where these processes are applied in the preparation of brain tumor resection surgery. PMID:29065569
Breast cancer survivorship program: testing for cross-cultural relevance.
Chung, Lynna K; Cimprich, Bernadine; Janz, Nancy K; Mills-Wisneski, Sharon M
2009-01-01
Taking CHARGE, a theory-based self-management program, was developed to assist women with survivorship concerns that arise after breast cancer treatment. Few such programs have been evaluated for cultural relevance with diverse groups. This study determined the utility and cultural relevance of the program for African American (AA) breast cancer survivors. Two focus groups were held with AA women (n = 13), aged 41 to 72 years, who had completed primary treatment. Focus group participants assessed the program content, format, materials, and the self-regulation process. Content analysis of audiotapes was conducted using an open, focused coding process to identify emergent themes regarding program relevance and topics requiring enhancement and/or further emphasis. Although findings indicated that the program's content was relevant to participants' experiences, AA women identified need for cultural enhancements in spirituality, self-preservation, and positive valuations of body image. Content areas requiring more emphasis included persistent fatigue, competing demands, disclosure, anticipatory guidance, and age-specific concerns about body image/sexuality. Suggested improvements to program materials included portable observation logs, additional resources, more photographs of younger AA women, vivid colors, and images depicting strength. These findings provide the basis for program enhancements to increase the utility and cultural relevance of Taking CHARGE for AA survivors and underscore the importance of evaluating interventions for racially/ethnically diverse groups.
An improved method for pancreas segmentation using SLIC and interactive region merging
NASA Astrophysics Data System (ADS)
Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang
2017-03-01
Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, the Mahalanobis distance is first utilized in the SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by a re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissues to verify its feasibility and effectiveness. The experimental results show that the proposed method increases segmentation accuracy to 92% on average. This study will boost the application of pancreas segmentation in computer-aided diagnosis systems.
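The superpixel stage and a simple merging pass can be sketched as follows; scikit-image's SLIC (Euclidean rather than Mahalanobis) and a mean-color region adjacency graph stand in for the paper's distance and texture-based criteria. The `graph` module lived under `skimage.future` in older releases.

```python
# Hedged sketch of SLIC superpixels followed by threshold-based region merging.
from skimage.segmentation import slic
from skimage import graph

def superpixel_merge(image, n_segments=400, thresh=0.08):
    """image: 2-D float CT slice; returns merged region labels."""
    labels = slic(image, n_segments=n_segments, compactness=10,
                  channel_axis=None)                    # grayscale input
    rag = graph.rag_mean_color(image, labels)           # adjacency graph of superpixels
    return graph.cut_threshold(labels, rag, thresh)     # merge similar neighbors
```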
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zoberi, J.
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning objectives: review prostate HDR techniques based on the imaging modality; discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; and review the QA process and the development of clinical workflows for these imaging options at different institutions.
Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales
Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.
2014-01-01
Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949
Firmware Development Improves System Efficiency
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Most manufacturing processes require physical pointwise positioning of components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate procession to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this pointwise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as trigger signals to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased system throughput, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
Graphics Processing Unit Assisted Thermographic Compositing
NASA Technical Reports Server (NTRS)
Ragasa, Scott; Russell, Samuel S.
2012-01-01
Objective: Develop a software application utilizing high-performance computing techniques, including general-purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general-purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.
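A toy example of the data parallelism described above follows: the same per-pixel statistic computed across a stack of frames, with CuPy (if installed) providing a GPU drop-in for the NumPy API. This is illustrative, not the application's code.

```python
# Hedged illustration of a data-parallel per-pixel computation.
import numpy as np

try:
    import cupy as xp          # GPU path, assuming CuPy is installed
except ImportError:
    xp = np                    # CPU fallback with the identical API

frames = xp.random.random((64, 512, 512)).astype(xp.float32)
# Per-pixel temporal contrast: every output pixel depends only on its own
# time series, so the work spreads perfectly across thousands of GPU threads.
contrast = frames.std(axis=0) / (frames.mean(axis=0) + 1e-6)
```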
Hawkins, H; Langer, J; Padua, E; Reaves, J
2001-06-01
Activity-based costing (ABC) is a process that enables the estimation of the cost of producing a product or service. More accurate than traditional charge-based approaches, it emphasizes analysis of processes, and more specific identification of both direct and indirect costs. This accuracy is essential in today's healthcare environment, in which managed care organizations necessitate responsible and accountable costing. However, to be successfully utilized, it requires time, effort, expertise, and support. Data collection can be tedious and expensive. By integrating ABC with information management (IM) and systems (IS), organizations can take advantage of the process orientation of both, extend and improve ABC, and decrease resource utilization for ABC projects. In our case study, we have examined the process of a multidisciplinary breast center. We have mapped the constituent activities and established cost drivers. This information has been structured and included in our information system database for subsequent analysis.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both the low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail information, suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
NASA Astrophysics Data System (ADS)
Leydsman-McGinty, E. I.; Ramsey, R. D.; McGinty, C.
2013-12-01
The Remote Sensing/GIS Laboratory at Utah State University, in cooperation with the United States Environmental Protection Agency, is quantifying impervious surfaces for three watershed sub-basins in Utah. The primary objective of developing watershed-scale quantifications of impervious surfaces is to provide an indicator of potential impacts to wetlands that occur within the Wasatch Front and along the Great Salt Lake. A geospatial layer of impervious surfaces can assist state agencies involved with Utah's Wetlands Program Plan (WPP) in understanding the impacts of impervious surfaces on wetlands, as well as support them in carrying out goals and actions identified in the WPP. The three watershed sub-basins, Lower Bear-Malad, Lower Weber, and Jordan, span the highly urbanized Wasatch Front and are consistent with focal areas in need of wetland monitoring and assessment as identified in Utah's WPP. Geospatial layers of impervious surface currently exist in the form of national and regional land cover datasets; however, these datasets are too coarse to be utilized in fine-scale analyses. In addition, the pixel-based image processing techniques used to develop these coarse datasets have proven insufficient in smaller scale or detailed studies, particularly when applied to high-resolution satellite imagery or aerial photography. Therefore, object-based image analysis techniques are being implemented to develop the geospatial layer of impervious surfaces. Object-based image analysis techniques employ a combination of both geospatial and image processing methods to extract meaningful information from high-resolution imagery. Spectral, spatial, textural, and contextual information is used to group pixels into image objects and then subsequently used to develop rule sets for image classification. eCognition, an object-based image analysis software program, is being utilized in conjunction with one-meter resolution National Agriculture Imagery Program (NAIP) aerial photography from 2011.
A new programming metaphor for image processing procedures
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if) it attempts to utilize the capabilities of a multitasking or multiprocessor environment; it is barely adequate for real-time data acquisition and processing; it has a fairly steep learning curve; and its user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. The focus here is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system's and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
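A toy Python rendering of the factory metaphor, assuming thread-based workers and queues as pipes (the original PcIPS implementation is not reproduced; all names are illustrative):

```python
# Hedged sketch: concurrent "factory" stages connected by pipes (queues).
from threading import Thread
from queue import Queue

def stage(func, inq, outq):
    """A factory block: apply `func` to every image flowing through the pipe."""
    for img in iter(inq.get, None):        # None marks end-of-stream
        outq.put(func(img))
    outq.put(None)                         # propagate the marker downstream

def denoise(img): return img               # placeholder operations
def stretch(img): return img

raw, clean, done = Queue(), Queue(), Queue()
Thread(target=stage, args=(denoise, raw, clean)).start()
Thread(target=stage, args=(stretch, clean, done)).start()
for frame in ("img1", "img2", "img3"):     # images enter the factory as they arrive
    raw.put(frame)
raw.put(None)
print(list(iter(done.get, None)))          # results exit the final pipe
```

Each stage runs concurrently and processes data as it arrives, which is exactly what a linear command script cannot express.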
Extracting paleo-climate signals from sediment laminae: A new, automated image processing method
NASA Astrophysics Data System (ADS)
Gan, S. Q.; Scholz, C. A.
2010-12-01
Lake sediment laminations commonly represent depositional seasonality in lacustrine environments. Their occurrence and quantitative attributes contain various signals of the depositional environment, limnological conditions, and climate. However, the identification and measurement of laminae remains a mainly manual process that is not only tedious and labor intensive, but also subjective and error prone. We present a batch method to identify laminae and extract lamina properties automatically and accurately from sediment core images. Our algorithm focuses on image enhancement that improves the signal-to-noise ratio and maximizes and normalizes image contrast. The unique feature of these algorithms is that they are all direction-sensitive, i.e., they treat images in the horizontal and vertical directions differently and independently. The core process of lamina identification uses a one-dimensional (1-D) lamina identification algorithm to produce a lamina map, and uses image blob analyses and lamina connectivity analyses to aggregate and split two-dimensional (2-D) lamina data for the best representation of fine-scale stratigraphy in the sediment profile. The primary output datasets of the system are definitions of laminae and primary color values for each pixel and each lamina in the depth direction; other derived datasets can be retrieved at the user's discretion. Sediment core images from Lake Hitchcock, USA, and Lake Bosumtwi, Ghana, were used for algorithm development and testing. As a demonstration of the utility of the software, we processed sediment core images from the top 50 meters of drill core (representing the past ~100 ky) from Lake Bosumtwi, Ghana.
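As an illustration of the 1-D identification step, a depth profile can be scanned for brightness minima that mark lamina boundaries; the sketch below uses SciPy peak finding with illustrative parameters, not the authors' algorithm.

```python
# Hedged sketch of 1-D lamina boundary identification on a core image.
import numpy as np
from scipy.signal import find_peaks

def laminae_from_profile(core_img, min_thickness_px=3):
    """core_img: 2-D array, rows = depth, columns = across-core direction."""
    profile = core_img.mean(axis=1)                    # collapse across the core
    profile = (profile - profile.min()) / np.ptp(profile)
    troughs, _ = find_peaks(-profile, distance=min_thickness_px)
    return troughs                                     # pixel depths of lamina boundaries
```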
An evaluation of the directed flow graph methodology
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Rajala, S. A.
1984-01-01
The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special-purpose image and signal processing hardware was evaluated. A special-purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real-time region labeling. The system is described in terms of DGM primitives. As currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special-purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general-purpose processors.
On the Implementation of a Land Cover Classification System for SAR Images Using Khoros
NASA Technical Reports Server (NTRS)
Medina Revera, Edwin J.; Espinosa, Ramon Vasquez
1997-01-01
The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR-acquired images have very good resolution, which necessitates the development of a classification system that processes the SAR images to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a data-flow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the processes of recognition and classification of different regions, such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms, such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) training/classifier. Different texture analysis approaches, such as Invariant Moments, Fractal Dimension, and second-order statistics, were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.
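As a sketch of the unsupervised path, per-pixel K-means clustering of a SAR amplitude image can be written with scikit-learn standing in for the Khoros routine; the intensity-only feature vector here is an illustrative simplification.

```python
# Hedged sketch of unsupervised K-means land cover classification.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_classify(sar_img, n_classes=4):
    """sar_img: 2-D amplitude image; returns a per-pixel class map."""
    features = sar_img.reshape(-1, 1).astype(np.float64)   # per-pixel feature vector
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(features)
    return labels.reshape(sar_img.shape)                   # class map, same shape as input
```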
Prostate segmentation in MRI using fused T2-weighted and elastography images
NASA Astrophysics Data System (ADS)
Nir, Guy; Sahebjavaher, Ramin S.; Baghani, Ali; Sinkus, Ralph; Salcudean, Septimiu E.
2014-03-01
Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8 mm, with a maximum distance of 5.6 mm. The relative area error is 7.6%, and the duration of the segmentation process is 2 s per slice.
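A rough sketch of an edge-driven contour evolution of this general kind follows, using scikit-image's morphological geodesic active contour as a stand-in for the paper's variational model; the elastogram region-statistics term is omitted, and the seed placement is an assumption.

```python
# Hedged sketch of edge-map-driven active contour segmentation.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

def segment_prostate(t2w, center, radius, iters=200):
    """t2w: 2-D float T2-weighted slice; center/radius: initial seed disk."""
    gimage = inverse_gaussian_gradient(t2w)            # edge map from the T2w image
    init = disk_level_set(t2w.shape, center=center, radius=radius)
    return morphological_geodesic_active_contour(      # num_iter was `iterations` pre-0.19
        gimage, num_iter=iters, init_level_set=init, smoothing=2, balloon=1)
```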
Towards automated segmentation of cells and cell nuclei in nonlinear optical microscopy.
Medyukhina, Anna; Meyer, Tobias; Schmitt, Michael; Romeike, Bernd F M; Dietzek, Benjamin; Popp, Jürgen
2012-11-01
Nonlinear optical (NLO) imaging techniques based, e.g., on coherent anti-Stokes Raman scattering (CARS) or two-photon excited fluorescence (TPEF) show great potential for biomedical imaging. In order to facilitate the diagnostic process based on NLO imaging, there is a need for an automated calculation of quantitative values such as cell density, nucleus-to-cytoplasm ratio, and average nuclear size. Extraction of these parameters is helpful for histological assessment in general and specifically, e.g., for the determination of tumor grades. This requires an accurate image segmentation and detection of the locations and boundaries of cells and nuclei. Here we present an image processing approach for the detection of nuclei and cells in co-registered TPEF and CARS images. The algorithm developed utilizes the gray-scale information for the detection of nuclei locations and the gradient information for the delineation of nuclear and cellular boundaries. The approach reported is capable of automated segmentation of cells and nuclei in multimodal TPEF-CARS images of human brain tumor samples. The results are important for the development of NLO microscopy into a clinically relevant diagnostic tool. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
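The abstract names the two ingredients (gray-scale information to locate nuclei, gradient information to delineate boundaries) but not the exact pipeline. One standard realization of this combination is a seeded watershed on the gradient image, sketched below with scikit-image; the thresholding rule and the bright-nucleus assumption are stand-ins, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation

def segment_nuclei(tpef):
    """Sketch: gray-scale thresholding yields nucleus seeds, and a watershed
    on the intensity gradient delineates the boundaries."""
    smooth = ndi.gaussian_filter(tpef.astype(float), sigma=2)
    seeds_mask = smooth > filters.threshold_otsu(smooth)  # bright-nucleus assumption
    seeds = measure.label(seeds_mask)                     # one marker per nucleus
    gradient = filters.sobel(smooth)                      # boundary evidence
    return segmentation.watershed(gradient, markers=seeds,
                                  mask=smooth > smooth.mean())
```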
Advanced Imaging Utilization Trends in Privately Insured Patients From 2007 to 2013.
Horný, Michal; Burgess, James F; Cohen, Alan B
2015-12-01
The aim of the study was to investigate whether the increase in utilization of advanced diagnostic imaging for privately insured patients in 2011 was the beginning of a new trend in imaging utilization growth, or an isolated deviation from the declining trend that began in 2008. We extracted outpatient and inpatient CT, diagnostic ultrasound, MRI, and PET procedures from databases for the years 2007 to 2013. This study extended previous work, covering 2012 to 2013, using the same methodology. For every year of the study period, we calculated the following: number of procedures per person-year covered by private health insurance; proportion of office and emergency visits that resulted in an imaging session; average payments per procedure; and total payments per person-year covered by private health insurance. Outpatient utilization of CT and PET decreased in both 2012 and 2013; outpatient utilization of MRI mildly increased in 2012, but then decreased in 2013. Outpatient utilization of diagnostic ultrasound showed a very different pattern, increasing throughout the study period. Inpatient utilization of all imaging modalities except PET decreased in both 2012 and 2013. Adjusted payments for all imaging modalities increased in 2012 and then dropped substantially in 2013, except for diagnostic ultrasound, whose adjusted payments increased again in 2013. The trend of increasing utilization of advanced diagnostic imaging seems to be over for some, but not all, imaging modalities. A combination of policy (e.g., breast density notification laws), technologic advancement, and wider access seems to be responsible for at least part of the increasing utilization of diagnostic ultrasound. Copyright © 2015 American College of Radiology. All rights reserved.
ERIC Educational Resources Information Center
Lewis, Elise C.
2011-01-01
This study was designed to explore the relationships between users and interactive images. Three factors were identified and provided different perspectives on how users interact with images: image utility, information-need, and images with varying levels of interactivity. The study used a mixed methodology to gain a more comprehensive…
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Ma, Qian; Khademhosseinieh, Bahar; Huang, Eric; Qian, Haoliang; Bakowski, Malina A; Troemel, Emily R; Liu, Zhaowei
2016-08-16
The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece, and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light-sheet fluorescence microscopy, have to utilize mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes. This is implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light-sheet fluorescence microscopy, with the advantage of simultaneous imaging of multiple axial planes and a reduced number of scans required to image the whole sample volume.
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
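The paper's exact row-key format is not reproduced here, but the principle is that a fixed-order hierarchical key makes lexicographic sorting (which HBase uses to assign rows to regions) collocate related data. A hypothetical sketch:

```python
def make_row_key(project, subject, session, scan, slice_idx):
    """Hypothetical hierarchical row key: because HBase sorts rows
    lexicographically, all slices of a scan (and all scans of a session,
    and so on up the hierarchy) share a key prefix and land in
    contiguous regions."""
    return "{}|{}|{}|{}|{:06d}".format(project, subject, session, scan, slice_idx)

# e.g., all slices of one scan share the prefix "proj01|subj03|sess1|scan2|"
key = make_row_key("proj01", "subj03", "sess1", "scan2", 42)
```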
A Versatile Mounting Method for Long Term Imaging of Zebrafish Development.
Hirsinger, Estelle; Steventon, Ben
2017-01-26
Zebrafish embryos offer an ideal experimental system for studying complex morphogenetic processes due to their ease of accessibility and optical transparency. In particular, posterior body elongation is an essential process in embryonic development by which multiple tissue deformations act together to direct the formation of a large part of the body axis. In order to observe this process by long-term time-lapse imaging, it is necessary to utilize a mounting technique that provides sufficient support to maintain samples in the correct orientation during transfer to the microscope and acquisition. In addition, the mounting must also provide sufficient freedom of movement for the outgrowth of the posterior body region without affecting its normal development. Finally, there must be a certain degree of versatility in the mounting method to allow imaging on diverse imaging set-ups. Here, we present a mounting technique for imaging the development of posterior body elongation in the zebrafish D. rerio. The technique involves mounting embryos such that the head and yolk sac regions are almost entirely embedded in agarose, while leaving the posterior body region free to elongate and develop normally. We show how this can be adapted for upright, inverted, and vertical light-sheet microscopy set-ups. While this protocol focuses on mounting embryos for imaging of the posterior body, it could easily be adapted for live imaging of multiple aspects of zebrafish development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendonsa, D; Nekoogar, F; Martz, H
This document describes the functionality of every component in the DHS/IDD archival and storage hardware system shown in Fig. 1. It describes the step-by-step process by which image data are received at LLNL, processed, and made available to authorized personnel and collaborators. Throughout this document, references are made to one of two figures: Fig. 1, describing the elements of the architecture, and Fig. 2, describing the workflow and how the project utilizes the available hardware.
Super Resolution Imaging of the Bottomside Ionosphere with the LWA
NASA Astrophysics Data System (ADS)
Obenberger, K.; Parris, R. T.; Taylor, G. B.; Dowell, J.; Malins, J. B.; Pedersen, T.
2017-12-01
Standard ionospheric sounding instruments typically utilize only a handful of HF antennas to receive their transmitted signal, and are therefore limited in their ability to image reflections from the bottomside ionosphere. This limitation is primarily due to the low signal-to-noise ratio of only a few receiving elements. However, recent advancements in digital processing have ushered in a new era of many-element radio telescopes, capable of sub-degree all-sky imaging in the HF band. The Long Wavelength Array station at Sevilleta National Wildlife Refuge, New Mexico (LWA-SV), which was specifically designed with improved HF performance for imaging bottomside propagation, began observations this year. I will discuss the new capabilities and imaging techniques of LWA-SV and show some preliminary measurements of small-scale ionospheric structure.
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation, and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
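Of the compared methods, speckle variance has the simplest core computation: the per-pixel variance across N repeated B-scans of the same location, where flowing blood decorrelates the speckle. A minimal sketch (a generic illustration, not the authors' processing chain):

```python
import numpy as np

def speckle_variance(bscans):
    """Speckle-variance angiography: per-pixel variance across N repeated
    B-scans of the same location; flowing blood decorrelates the speckle,
    so vessels appear as high-variance pixels. bscans: (N, Z, X) intensity."""
    return bscans.var(axis=0)

# volume-averaged variant (in the spirit of the post-processing described above):
# sv = np.mean([speckle_variance(v) for v in repeated_volumes], axis=0)
```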
Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.
Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas
2017-10-01
We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide-field fluorescence microscopy data.
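The splitting produces many small decoupled subproblems, and for the univariate (1D) Potts model each subproblem admits a classic exact dynamic program. A minimal O(n^2) sketch of that solver is shown below as an illustration of the subproblem; the authors' GPU implementation and their handling of blur and non-negativity are not reproduced.

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact dynamic program for the 1D Potts model:
    minimize gamma * (#jumps) + sum_i (x_i - y_i)^2 over piecewise-constant x."""
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])            # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(np.square(y))])

    def dev(l, r):  # squared deviation of y[l..r] (1-based) from its mean
        m = r - l + 1
        return s2[r] - s2[l - 1] - (s1[r] - s1[l - 1]) ** 2 / m

    B = np.empty(n + 1)
    B[0] = -gamma                                         # first segment pays no jump
    left = np.zeros(n + 1, dtype=int)                     # best segment start per prefix
    for r in range(1, n + 1):
        costs = [B[l - 1] + gamma + dev(l, r) for l in range(1, r + 1)]
        l_best = int(np.argmin(costs)) + 1
        B[r], left[r] = costs[l_best - 1], l_best
    x = np.empty(n)                                       # backtrack segment means
    r = n
    while r > 0:
        l = left[r]
        x[l - 1:r] = (s1[r] - s1[l - 1]) / (r - l + 1)
        r = l - 1
    return x
```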
PIZZARO: Forensic analysis and restoration of image and video data.
Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan
2016-07-01
This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important tasks whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfill the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Schuck, Miller Harry
Automotive head-up displays require compact, bright, and inexpensive imaging systems. In this thesis, a compact head-up display (HUD) utilizing liquid-crystal-on-silicon microdisplay technology is presented from concept to implementation. The thesis comprises three primary areas of HUD research: the specification, design, and implementation of a compact HUD optical system; the development of a wafer planarization process to enhance reflective device brightness and light immunity; and the design, fabrication, and testing of an inexpensive 640 x 512 pixel active matrix backplane intended to meet the HUD requirements. The thesis addresses the HUD problem at three levels: the systems level, the device level, and the materials level. At the systems level, the optical design of an automotive HUD must meet several competing requirements, including high image brightness, compact packaging, video-rate performance, and low cost. An optical system design which meets these competing requirements has been developed utilizing a fully-reconfigurable reflective microdisplay. The design consists of two optical stages: the first a projector stage which magnifies the display, and the second a stage which forms the virtual image eventually seen by the driver. A key component of the optical system is a diffraction grating/field lens which forms a large viewing eyebox while reducing the optical system complexity. Image quality, biocular disparity, and luminous efficacy were analyzed, and results of the optical implementation are presented. At the device level, the automotive HUD requires a reconfigurable, video-rate, high resolution image source for applications such as navigation and night vision. The design of a 640 x 512 pixel active matrix backplane which meets the requirements of the HUD is described. The backplane was designed to produce digital field-sequential color images at video rates utilizing fast-switching liquid crystal as the modulation layer. The design methodology is discussed, and the example of a clock generator is described from design to implementation. Electrical and optical test results of the fabricated backplane are presented. At the materials level, a planarization method was developed to meet the stringent brightness requirements of automotive HUDs. The research efforts described here have resulted in a simple, low cost post-processing method for planarizing microdisplay substrates based on a spin-cast polymeric resin, benzocyclobutene (BCB). Six-fold reductions in substrate step height were accomplished with a single coating. Via masking and dry etching methods were developed. High reflectivity metal was deposited and patterned over the planarized substrate to produce high aperture pixel mirrors. The process is simple, rapid, and results in microdisplays better able to meet the stringent requirements of high brightness display systems. Methods and results of the post-processing are described.
Solid models for CT/MR image display: accuracy and utility in surgical planning
NASA Astrophysics Data System (ADS)
Mankovich, Nicholas J.; Yue, Alvin; Ammirati, Mario; Kioumehr, Farhad; Turner, Scott
1991-05-01
Medical imaging can now take wider advantage of computer-aided manufacturing through rapid prototyping technologies (RPT) such as stereolithography, laser sintering, and laminated object manufacturing to directly produce solid models of patient anatomy from processed CT and MR images. While conventional surgical planning relies on consultation with the radiologist combined with direct reading and measurement of CT and MR studies, 3-D surface and volumetric display workstations are providing a more easily interpretable view of patient anatomy. RPT can provide the surgeon with a life-size model of patient anatomy constructed layer by layer with full internal detail. Although this life-size anatomic model is more easily understandable by the surgeon, its accuracy and true surgical utility remain untested. We have developed a prototype image processing and model fabrication system based on stereolithography, which provides the neurosurgeon with models of the skull base. Parallel comparison of the model with the original thresholded CT data and with a CRT-displayed surface rendering showed that both have an accuracy of 99.6 percent. Because of the ease of exact voxel localization on the model, its precision was high, with a standard deviation of measurement of 0.71 percent. Measurements on the surface-rendered display proved more difficult to locate exactly and yielded a standard deviation of 2.37 percent. This paper presents our accuracy study and discusses ways of assessing the quality of neurosurgical plans when 3-D models are made available as planning tools.
McCarthy, Jason R.; Weissleder, Ralph
2007-01-01
Background Probes that allow site-specific protein labeling have become critical tools for visualizing biological processes. Methods Here we used phage display to identify a novel peptide sequence with nanomolar affinity for near infrared (NIR) (benz)indolium fluorochromes. The developed peptide sequence (“IQ-tag”) allows detection of NIR dyes in a wide range of assays including ELISA, flow cytometry, high throughput screens, microscopy, and optical in vivo imaging. Significance The described method is expected to have broad utility in numerous applications, namely site-specific protein imaging, target identification, cell tracking, and drug development. PMID:17653285
Apodized RFI filtering of synthetic aperture radar images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2014-02-01
Fine resolution Synthetic Aperture Radar (SAR) systems necessarily require wide bandwidths that often overlap spectrum utilized by other wireless services. These other emitters pose a source of Radio Frequency Interference (RFI) to the SAR echo signals that degrades SAR image quality. Filtering, or excising, the offending spectral contaminants will mitigate the interference, but at a cost of often degrading the SAR image in other ways, notably by raising offensive sidelobe levels. This report proposes borrowing an idea from nonlinear sidelobe apodization techniques to suppress interference without the attendant increase in sidelobe levels. The simple post-processing technique is termed Apodized RFI Filtering (ARF).
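The ARF technique itself is detailed in the report; for context, the conventional baseline it improves upon is simple spectral excision, sketched below. Zeroing contaminated bins this way is exactly what raises the sidelobe levels that apodization then suppresses (the bin indices are assumed to come from a separate RFI detector).

```python
import numpy as np

def excise_rfi(echo, rfi_bins):
    """Conventional spectral excision (the baseline ARF improves upon):
    zero the FFT bins known to be contaminated by interferers, then
    return to the time domain. The result is complex baseband data;
    notching this way raises image sidelobe levels."""
    spectrum = np.fft.fft(echo)
    spectrum[rfi_bins] = 0.0
    return np.fft.ifft(spectrum)
```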
Analysis of Orientations of Collagen Fibers by Novel Fiber-Tracking Software
NASA Astrophysics Data System (ADS)
Wu, Jun; Rajwa, Bartlomiej; Filmer, David L.; Hoffmann, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennie; Robinson, J. Paul
2003-12-01
Recent evidence supports the notion that the biological functions of extracellular matrix (ECM) are highly correlated not only with its composition but also with its structure. This article integrates confocal microscopy imaging and image-processing techniques to analyze the microstructural properties of ECM. This report describes a two- and three-dimensional fiber middle-line tracing algorithm that may be used to quantify collagen fibril organization. We utilized computer simulation and statistical analysis to validate the developed algorithm. These algorithms were applied to confocal images of collagen gels made with reconstituted bovine collagen type I to demonstrate the computation of orientations of individual fibers.
The NAIMS cooperative pilot project: Design, implementation and future directions.
Oh, Jiwon; Bakshi, Rohit; Calabresi, Peter A; Crainiceanu, Ciprian; Henry, Roland G; Nair, Govind; Papinutto, Nico; Constable, R Todd; Reich, Daniel S; Pelletier, Daniel; Rooney, William; Schwartz, Daniel; Tagge, Ian; Shinohara, Russell T; Simon, Jack H; Sicotte, Nancy L
2017-10-01
The North American Imaging in Multiple Sclerosis (NAIMS) Cooperative represents a network of 27 academic centers focused on accelerating the pace of magnetic resonance imaging (MRI) research in multiple sclerosis (MS) through idea exchange and collaboration. Recently, NAIMS completed its first project evaluating the feasibility of implementation and reproducibility of quantitative MRI measures derived from scanning a single MS patient using a high-resolution 3T protocol at seven sites. The results showed the feasibility of utilizing advanced quantitative MRI measures in multicenter studies and demonstrated the importance of careful standardization of scanning protocols, central image processing, and strategies to account for inter-site variability.
Filler, Aaron
2009-10-01
Methods were invented that made it possible to image peripheral nerves in the body and to image neural tracts in the brain. The history, physical basis, and dyadic tensor concept underlying the methods are reviewed. Over a 15-year period, these techniques, magnetic resonance neurography (MRN) and diffusion tensor imaging, were deployed in the clinical and research community in more than 2500 published research reports and applied to approximately 50,000 patients. Within this group, approximately 5000 patients having MRN were carefully tracked on a prospective basis. A uniform Neurography imaging methodology was applied in the study group, and all images were reviewed and registered by referral source, clinical indication, efficacy of imaging, and quality. Various classes of image findings were identified and subjected to a variety of small targeted prospective outcome studies. Those findings demonstrated to be clinically significant were then tracked in the larger clinical volume data set. MRN demonstrates mechanical distortion of nerves, hyperintensity consistent with nerve irritation, nerve swelling, discontinuity, relations of nerves to masses, and image features revealing distortion of nerves at entrapment points. These findings are often clinically relevant and warrant full consideration in the diagnostic process. They result in specific pathological diagnoses that are comparable to electrodiagnostic testing in clinical efficacy. A review of clinical outcome studies with diffusion tensor imaging also shows convincing utility. MRN and diffusion tensor neural tract imaging have been validated as indispensable clinical diagnostic methods that provide reliable anatomic and pathological information. There is no alternative diagnostic method in many situations. With the elapsing of 15 years, tens of thousands of imaging studies, and thousands of publications, these methods should no longer be considered experimental.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full-color Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double-buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
The Portable Dynamic Fundus Instrument: Uses in telemedicine and research
NASA Technical Reports Server (NTRS)
Hunter, Norwood; Caputo, Michael; Billica, Roger; Taylor, Gerald; Gibson, C. Robert; Manuel, F. Keith; Mader, Thomas; Meehan, Richard
1994-01-01
For years, ophthalmic photographs have been used to track the progression of many ocular diseases, such as macular degeneration and glaucoma, as well as the ocular manifestations of diabetes, hypertension, and hypoxia. In 1987 a project was initiated at the Johnson Space Center (JSC) to develop a means of monitoring retinal vascular caliber and intracranial pressure during space flight. To conduct telemedicine during space flight operations, retinal images would require real-time transmission from space. Film-based images would not be useful during in-flight operations. Video technology is beneficial in flight because the images may be acquired, recorded, and transmitted to the ground for rapid computer digital image processing and analysis. The computer analysis techniques developed for this project detected vessel caliber changes as small as 3 percent. In the field of telemedicine, the Portable Dynamic Fundus Instrument demonstrates the concept and utility of a small, self-contained video funduscope. It was used to record retinal images during the Gulf War and to transmit retinal images from the Space Shuttle Columbia during STS-50. There are plans to utilize this device to provide a mobile ophthalmic screening service in rural Texas. In the fall of 1993, a medical team in Boulder, Colorado, will transmit real-time images of the retina during remote consultation and diagnosis. The research applications of this device include the capability of operating in remote locations or small, confined test areas. There has been interest in utilizing retinal imaging during high-G centrifuge tests, high-altitude chamber tests, and aircraft flight tests. A new design plan has been developed to incorporate the video instrumentation into face-mounted goggles. This design would eliminate head-restraint devices, thus allowing full maneuverability for the subjects. Further development of software programs will broaden the application of the Portable Dynamic Fundus Instrument in telemedicine and medical research.
USDA-ARS?s Scientific Manuscript database
Morphological components of biomass stems vary in their chemical composition and they can be better utilized when processed after segregation. Within the stem, nodes and internodes have significantly different compositions. The internodes have low ash content and are a better feedstock for bioenergy...
NASA Astrophysics Data System (ADS)
Pandey, Palak; Kunte, Pravin D.
2016-10-01
This study presents an easy, modular, user-friendly, and flexible software package for processing Landsat 7 ETM+ and Landsat 8 OLI-TIRS data to estimate suspended particulate matter concentrations in coastal waters. The package includes 1) an algorithm developed using the freely downloadable SCILAB package, 2) ERDAS models for iterative processing of Landsat images, and 3) an ArcMAP tool for plotting and map making. Utilizing the SCILAB package, a module is written for geometric corrections, radiometric corrections, and obtaining normalized water-leaving reflectance from Landsat 8 OLI-TIRS and Landsat 7 ETM+ data. Using ERDAS models, a sequence of modules is developed for iteratively processing Landsat images and estimating suspended particulate matter concentrations. The processed images are used to prepare suspended sediment concentration maps. The applicability of this software package is demonstrated by estimating and plotting seasonal suspended sediment concentration maps off the Bengal delta. The software is flexible enough to accommodate other remotely sensed data, such as Ocean Color Monitor (OCM) data, Indian Remote Sensing (IRS) data, and MODIS data, by replacing a few parameters in the algorithm.
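As one concrete example of what the radiometric-correction module must do for Landsat 8 OLI, the standard USGS rescaling from digital numbers to sun-angle-corrected top-of-atmosphere reflectance is rho = (M_rho * Q_cal + A_rho) / sin(theta_SE), with the coefficients taken from the scene's MTL metadata file. A minimal sketch of that one step (the package's SCILAB module presumably performs more):

```python
import numpy as np

def toa_reflectance(dn, mult, add, sun_elev_deg):
    """Landsat 8 OLI digital numbers -> top-of-atmosphere reflectance,
    corrected for the solar elevation angle. mult (REFLECTANCE_MULT_BAND_x)
    and add (REFLECTANCE_ADD_BAND_x) come from the scene's MTL file."""
    rho = mult * dn.astype(float) + add
    return rho / np.sin(np.deg2rad(sun_elev_deg))
```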
Automatic glaucoma diagnosis through medical imaging informatics.
Liu, Jiang; Zhang, Zhuo; Wong, Damon Wing Kee; Xu, Yanwu; Yin, Fengshou; Cheng, Jun; Tan, Ngan Meng; Kwoh, Chee Keong; Xu, Dong; Tham, Yih Chung; Aung, Tin; Wong, Tien Yin
2013-01-01
Computer-aided diagnosis for screening utilizes computer-based analytical methodologies to process patient information. Glaucoma is the leading irreversible cause of blindness. Due to the lack of an effective and standard screening practice, more than 50% of the cases are undiagnosed, which prevents the early treatment of the disease. The objective was to design an automatic glaucoma diagnosis architecture, automatic glaucoma diagnosis through medical imaging informatics (AGLAIA-MII), that combines patient personal data, medical retinal fundus images, and patient genome information for screening. 2258 cases from a population study were used to evaluate the screening software. These cases were attributed with patient personal data, retinal images, and quality-controlled genome data. Utilizing a multiple kernel learning-based classifier, AGLAIA-MII combined patient personal data, major image features, and important genome single nucleotide polymorphism (SNP) features. Receiver operating characteristic curves were plotted to compare AGLAIA-MII's performance with classifiers using patient personal data, images, and genome SNPs separately. AGLAIA-MII was able to achieve an area under curve value of 0.866, better than 0.551, 0.722, and 0.810 by the individual personal data, image, and genome information components, respectively. AGLAIA-MII also demonstrated a substantial improvement over the current glaucoma screening approach based on intraocular pressure. AGLAIA-MII demonstrates for the first time the capability of integrating patients' personal data, medical retinal images, and genome information for automatic glaucoma diagnosis and screening in a large dataset from a population study. It paves the way for a holistic approach for automatic objective glaucoma diagnosis and screening.
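AGLAIA-MII's multiple kernel learning jointly learns per-modality kernel weights along with the classifier; those learned weights are not reproduced here. The sketch below shows only the underlying idea, combining one kernel per modality into a single precomputed kernel for an SVM, with fixed illustrative weights and synthetic stand-in data:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_personal = rng.normal(size=(100, 5))   # personal-data features (illustrative)
X_image = rng.normal(size=(100, 50))     # retinal image features (illustrative)
X_genome = rng.normal(size=(100, 30))    # genome SNP features (illustrative)
y = rng.integers(0, 2, size=100)         # glaucoma label (synthetic)

# One kernel per modality; MKL would learn the weights w jointly with the
# classifier, whereas here they are fixed illustrative values.
w = [0.2, 0.4, 0.4]
K = (w[0] * rbf_kernel(X_personal)
     + w[1] * rbf_kernel(X_image)
     + w[2] * rbf_kernel(X_genome))

clf = SVC(kernel="precomputed").fit(K, y)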
MO-B-BRC-00: Prostate HDR Treatment Planning - Considering Different Imaging Modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: Review prostate HDR techniques based on the imaging modality; discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
A manual for microcomputer image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, P.M.; Ranken, D.M.; George, J.S.
1989-12-01
This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has been a longstanding problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
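The IHS fusion step can be illustrated with a simple intensity-substitution scheme: rescale the upsampled low-resolution color bands so that their intensity matches a high-resolution intensity image, preserving the band ratios (hue and saturation). A minimal sketch of that common approximation (not the paper's full directionally-adaptive pipeline):

```python
import numpy as np

def ihs_fusion(rgb_lr_up, intensity_hr, eps=1e-6):
    """Intensity-substitution (IHS-style) fusion: scale the upsampled
    low-resolution RGB bands so their intensity matches a high-resolution
    intensity image while hue/saturation ratios are preserved.
    rgb_lr_up: (H, W, 3) in [0, 1]; intensity_hr: (H, W) in [0, 1]."""
    i_lr = rgb_lr_up.mean(axis=2)              # intensity of the upsampled LR color
    gain = intensity_hr / (i_lr + eps)
    return np.clip(rgb_lr_up * gain[..., None], 0.0, 1.0)
```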
NASA Technical Reports Server (NTRS)
Traub, W. A.
1984-01-01
The first physical demonstration of the principle of image reconstruction using a set of images from a diffraction-blurred elongated aperture is reported. This is an optical validation of previous theoretical and numerical simulations of the COSMIC telescope array (coherent optical system of modular imaging collectors). The present experiment utilizes 17 diffraction-blurred exposures of a laboratory light source, as imaged by a lens covered by a narrow-slit aperture; the aperture is rotated 10 degrees between exposures. The images are recorded in digitized form by a CCD camera, Fourier transformed, numerically filtered, and added; the sum is then filtered and inverse Fourier transformed to form the final image. The image reconstruction process is found to be stable with respect to uncertainties in the values of all physical parameters, such as effective wavelength, rotation angle, pointing jitter, and aperture shape. Future experiments will explore the effects of low counting rates, autoguiding on the image, various aperture configurations, and separated optics.
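The described pipeline (transform each rotated-slit exposure, filter, accumulate, inverse transform) can be sketched as follows; the wedge-shaped Fourier masks below are an assumed stand-in for the paper's numerical filters, reflecting the fact that a slit aperture resolves spatial frequencies along its long axis.

```python
import numpy as np

def wedge_mask(shape, angle_deg, half_width_deg=6.0):
    """Fourier-plane wedge of orientations within half_width_deg of angle_deg
    (a stand-in for the frequencies a slit at that orientation resolves)."""
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    d = np.abs(theta - angle_deg % 180.0)
    return np.minimum(d, 180.0 - d) < half_width_deg

def reconstruct(exposures, angles_deg):
    """Accumulate the wedge-filtered FFTs of the rotated-slit exposures,
    normalize by the Fourier-plane coverage, and inverse transform."""
    acc = np.zeros(exposures[0].shape, dtype=complex)
    cover = np.zeros(exposures[0].shape)
    for img, ang in zip(exposures, angles_deg):
        m = wedge_mask(img.shape, ang)
        m[0, 0] = True                 # always keep the DC (mean brightness) term
        acc += np.fft.fft2(img) * m
        cover += m
    acc[cover > 0] /= cover[cover > 0]
    return np.fft.ifft2(acc).real

# e.g., 17 exposures rotated 10 degrees apart:
# image = reconstruct(exposures, angles_deg=[10 * k for k in range(17)])
```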
Development of Software to Model AXAF-I Image Quality
NASA Technical Reports Server (NTRS)
Ahmad, Anees; Hawkins, Lamar
1996-01-01
This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a format suitable for deformation ray-tracing to predict the image quality of a distorted mirror. The utility includes a provision to expand the data from finite element models assuming 180-degree symmetry. It has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS-format surface map files, manipulate and filter the metrology data, and produce a deformation file that GT can use for ray-tracing the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.
Pattern centric design based sensitive patterns and process monitor in manufacturing
NASA Astrophysics Data System (ADS)
Hsiang, Chingyun; Cheng, Guojie; Wu, Kechih
2017-03-01
As design rules migrate to smaller dimensions, process variation requirements become tighter than ever and challenge the limits of device yield. Masks, lithography, etching, and other processes have to meet very tight specifications in order to keep defects and CD within the margins of the process window. Conventionally, inspection and metrology equipment is utilized to monitor and control wafer quality in-line. In high-throughput optical inspection, nuisance filtering and review classification become a tedious, labor-intensive job in manufacturing. Certain high-resolution SEM images are taken to validate defects after optical inspection. These high-resolution SEM images capture not only the point highlighted by optical inspection but also its surrounding patterns. However, this pattern information is not well utilized in conventional quality control methods. This complementary design-based pattern monitor not only tracks and analyzes pattern-sensitivity variation but also reduces nuisance and highlights defective patterns or killer defects. After grouping, in either single or multiple layers, systematic defects can be identified quickly in this flow. In this paper, we applied design-based pattern monitoring in different layers to monitor the impact of process variation on all kinds of patterns. First, the contour of the high-resolution SEM image is extracted and aligned to the design with offset adjustment and fine alignment [1]. Second, specified pattern rules are applied to the design clip area (the same size as the SEM image) to form POI (pattern-of-interest) areas. Third, the discrepancy between contour and design is measured for the different pattern types in measurement blocks. Fourth, defective patterns are reported by discrepancy detection criteria and pattern grouping [4]; the reported pattern defects are ranked by count and by severity of discrepancy. In this step, process-sensitive, highly repeatable systematic defects can be identified quickly. Through this design-based process pattern monitoring method, most optical inspection nuisances can be filtered out by the contour-to-design discrepancy measurement. Daily analysis results are stored in a database as references to compare with incoming data. The defective-pattern library contains existing and known systematic defect patterns, which helps to catch and identify new pattern defects or process impacts. This defect pattern library also provides valuable additional information for mask, pattern, and defect verification; inspection care-area generation; further OPC fixes; and process enhancement and investigation.
NASA Astrophysics Data System (ADS)
Peña, Adrian F.; Devine, Jack; Doronin, Alexander; Meglinski, Igor
2014-03-01
We report the use of conventional Optical Coherence Tomography (OCT) for visualization of the propagation of low-frequency electric fields in soft biological tissues ex vivo. To increase the overall quality of the experimental images, an adaptive Wiener filtering technique was employed. Fourier-domain correlation was subsequently applied to enhance the spatial resolution of images of biological tissues influenced by the low-frequency electric field. Image processing was performed on Graphics Processing Units (GPUs) utilizing the Compute Unified Device Architecture (CUDA) framework in the frequency domain. The results show that variation in the voltage and frequency of the applied electric field relates exponentially to the magnitude of its influence on biological tissue. The magnitude of influence is about twice as large for fresh tissue samples as for non-fresh ones. The obtained results suggest that OCT can be used for observation and quantitative evaluation of electro-kinetic changes in biological tissues under different physiological conditions and functional electrical stimulation, and can potentially be used non-invasively for food quality control.
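A minimal CPU sketch of the two processing steps named above, adaptive Wiener filtering followed by Fourier-domain correlation between frames, using SciPy; the original processing ran on GPUs via CUDA, and phase correlation is assumed here as the correlation variant:

```python
import numpy as np
from scipy.signal import wiener

def denoise_and_correlate(frame_a, frame_b, window=5):
    """Adaptive Wiener filtering of two OCT frames, then Fourier-domain
    (phase) correlation; the correlation peak tracks the field-induced
    displacement between the frames."""
    a = wiener(frame_a.astype(float), mysize=window)
    b = wiener(frame_b.astype(float), mysize=window)
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)  # wrapped shift
    return a, b, (dy, dx)
```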
Achieving superresolution with illumination-enhanced sparsity.
Yu, Jiun-Yann; Becker, Stephen R; Folberth, James; Wallin, Bruce F; Chen, Simeng; Cogswell, Carol J
2018-04-16
Recent advances in superresolution fluorescence microscopy have been limited by a belief that surpassing two-fold resolution enhancement of the Rayleigh resolution limit requires stimulated emission or the fluorophore to undergo state transitions. Here we demonstrate a new superresolution method that requires only image acquisitions with a focused illumination spot and computational post-processing. The proposed method utilizes the focused illumination spot to effectively reduce the object size and enhance the object sparsity and consequently increases the resolution and accuracy through nonlinear image post-processing. This method clearly resolves 70 nm resolution test objects emitting ~530 nm light with a 1.4 numerical aperture (NA) objective, and, when imaging through a 0.5 NA objective, exhibits high spatial frequencies comparable to a 1.4 NA widefield image, both demonstrating a resolution enhancement above two-fold of the Rayleigh resolution limit. More importantly, we examine how the resolution increases with photon numbers, and show that the more-than-two-fold enhancement is achievable with realistic photon budgets.
Automated Ontology Generation Using Spatial Reasoning
NASA Astrophysics Data System (ADS)
Coalter, Alton; Leopold, Jennifer L.
Recently there has been much interest in using ontologies to facilitate knowledge representation, integration, and reasoning. Correspondingly, the extent of the information embodied by an ontology is increasing beyond the conventional is_a and part_of relationships. To address these requirements, a vast amount of digitally available information may need to be considered when building ontologies, prompting a desire for software tools to automate at least part of the process. The main efforts in this direction have involved textual information retrieval and extraction methods. For some domains, extension of the basic relationships could be enhanced further by the analysis of 2D and/or 3D images. For this type of media, image processing algorithms are more appropriate than textual analysis methods. Herein we present an algorithm that, given a collection of 3D image files, utilizes Qualitative Spatial Reasoning (QSR) to automate the creation of an ontology for the objects represented by the images, relating the objects in terms of is_a and part_of relationships and also through unambiguous Region Connection Calculus (RCC) relations.
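The paper's mapping rules from spatial relations to ontology edges are not reproduced here; the sketch below shows only the general shape of such a mapping, using standard RCC-8 relation names and hypothetical example objects:

```python
# Hypothetical mapping from RCC-8 spatial relations between segmented 3D
# objects to ontology edges (the paper's exact rules may differ).
RCC_TO_EDGE = {
    "TPP": "part_of",    # tangential proper part
    "NTPP": "part_of",   # non-tangential proper part
    "EQ": "is_a",        # spatially equal regions -> candidate class identity
}

def ontology_edges(rcc_facts):
    """rcc_facts: iterable of (object_a, relation, object_b) triples."""
    for a, rel, b in rcc_facts:
        if rel in RCC_TO_EDGE:
            yield (a, RCC_TO_EDGE[rel], b)

# e.g., ("left_ventricle", "NTPP", "heart") -> ("left_ventricle", "part_of", "heart")
```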
Factors That Will Determine Future Utilization Trends in Diagnostic Imaging.
Levin, David C; Rao, Vijay M
2016-08-01
Radiologists are facing uncertain times, and in this kind of environment, strategic planning is important but difficult. In particular, it is hard to know whether future imaging volume will increase, decrease, or stay approximately the same. In this article, the authors discuss a variety of factors that will influence imaging use in the coming years. Some factors will tend to increase imaging use, whereas others will tend to curtail it. Some of these factors will affect individual groups differently, depending on their locations and the circumstances of their practices. Radiologists would be well advised to become aware of and consider these factors as they go about their planning processes. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
PCIPS 2.0: Powerful multiprofile image processing implemented on PCs
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Over the years, the processing power of personal computers has steadily increased. Now, 386- and 486-based PCs are fast enough for many image processing applications, and inexpensive enough even for amateur astronomers. PCIPS is an image processing system based on these platforms that was designed to satisfy a broad range of data analysis needs, while requiring minimum hardware and providing maximum expandability. It will run (albeit at a slow pace) even on an 80286 with 640K of memory, but will take full advantage of larger memory and faster CPUs. Because the actual image processing is performed by external modules, the system can be easily upgraded by the user for all sorts of scientific data analysis. PCIPS supports large-format 1D and 2D images in any numeric type from 8-bit integer to 64-bit floating point. The images can be displayed, overlaid, and printed, and any part of the data examined via an intuitive graphical user interface that employs buttons, pop-up menus, and a mouse. PCIPS automatically converts images between different types and sizes to satisfy the requirements of various applications. PCIPS features an API that lets users develop custom applications in C or FORTRAN. While doing so, a programmer can concentrate on the actual data processing, because PCIPS assumes responsibility for accessing images and interacting with the user. This also ensures that all applications, even custom ones, have a consistent and user-friendly interface. The API is compatible with factory programming, a metaphor for constructing image processing procedures that will be implemented in future versions of the system. Several application packages were created under PCIPS. The basic package includes elementary arithmetic and statistics, geometric transformations, and import/export in various formats (FITS, binary, ASCII, and GIF). The CCD processing package and the spectral analysis package were successfully used to reduce spectra from the Nordic Telescope at La Palma. A photometry package is also available, and other packages are being developed. A multitasking version of PCIPS that utilizes the factory programming concept is currently under development. This version will remain compatible (at the source code level) with existing application packages and custom applications.
Enhanced tagging of light utilizing acoustic radiation force with speckle pattern analysis
NASA Astrophysics Data System (ADS)
Vakili, Ali; Hollmann, Joseph L.; Holt, R. Glynn; DiMarzio, Charles A.
2017-10-01
In optical imaging, depth and resolution are limited due to scattering. Unlike light, ultrasound (US) waves scatter negligibly in tissue. Hybrid imaging methods such as US-modulated optical tomography (UOT) use the advantages of both modalities. UOT tags light by inducing a phase change caused by modulating the local index of refraction of the medium. The challenge in UOT is detecting the small signal. The displacement induced by the acoustic radiation force (ARF) is another US effect that can be utilized to tag the light. It induces a greater phase change, resulting in a stronger signal. Moreover, the absorbed acoustic energy generates heat, resulting in a change in the index of refraction and a strong phase change. The speckle pattern is governed by the phase of the interfering scattered waves; hence, speckle pattern analysis can obtain information about displacement and temperature changes. We present a model to simulate the insonation processes. Simulation results based on fixed-particle Monte Carlo, together with experimental results, show that the signal acquired by utilizing ARF is stronger compared to UOT. The introduced mean irradiance change (MIC) signal reveals both the thermal and mechanical effects of the focused US beam at different timescales. Simulation results suggest that variation in the MIC signal can be used to generate a displacement image of the medium.
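One plausible computation of the MIC signal consistent with its name (the paper's exact definition may differ): the mean speckle irradiance of each camera frame relative to a pre-insonation baseline frame.

```python
import numpy as np

def mean_irradiance_change(frames):
    """Mean irradiance change per frame, relative to the first
    (pre-insonation) frame. frames: (T, H, W) stack of speckle images."""
    baseline = frames[0].astype(float).mean()
    return frames.reshape(len(frames), -1).mean(axis=1) - baseline
```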
Inpatient imaging utilization: trends of the past decade.
Shinagare, Atul B; Ip, Ivan K; Abbett, Sarah K; Hanson, Richard; Seltzer, Steven E; Khorasani, Ramin
2014-03-01
We have previously reported inpatient imaging utilization trends at our institution from fiscal year (FY) 1984 through FY 2002. In this study, we assessed the trends in imaging utilization for inpatients from FY 2003 through FY 2012. In this institutional review board-approved retrospective study performed at a 793-bed tertiary care academic institution, we reviewed imaging utilization in adult inpatients from October 1, 2002, through September 30, 2012 (FY 2003 through FY 2012), and recorded the gross number of imaging studies coded by modality (conventional [radiography and fluoroscopy], ultrasound, nuclear medicine, CT, and MRI) and associated relative value units (RVUs). We used linear regression to assess trends in number of imaging studies and RVUs per case-mix-adjusted admission (CMAA). The total number of imaging studies, as well as the number of CT, nuclear medicine, and conventional studies adjusted for case mix, decreased (p=0.02, p=0.0006, p=0.0008, and p=0.001, respectively); CT per CMAA increased until FY 2009 and then decreased through FY 2012. Utilization of ultrasound and MRI did not change significantly (p=0.15 and p=0.22, respectively). Unadjusted global RVUs increased until FY 2009 and then showed a slight decrease through FY 2012 (p=0.04), whereas RVUs per CMAA did not change significantly (p=0.18). After decades of continued rise, imaging utilization for inpatients significantly decreased by most measures between FY 2009 and FY 2012. Future studies to evaluate the contribution of various factors to this decline, including efforts to reduce inappropriate use of imaging and concerns about potential harms of radiation exposure, may be helpful in optimizing imaging utilization and resource planning.
A Flexible Annular-Array Imaging Platform for Micro-Ultrasound
Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei
2013-01-01
Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable, real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented with novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests, including hardware, algorithm, wire phantom, and tissue-mimicking phantom measurements, were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 over the 3.8 to 8.7 mm imaging depth range. The platform supported more than 25 images per second for real-time image acquisition. The depth of field showed about a 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
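The platform's beamforming runs in FPGA hardware; as a software illustration of multi-channel dynamic-receive beamforming for an annular array, here is a minimal delay-and-sum sketch for an on-axis scan line (geometry and sampling parameters are illustrative, not the platform's):

```python
import numpy as np

def das_annular(rf, ring_radii_m, c=1540.0, fs=100e6):
    """Dynamic-receive delay-and-sum for an annular array, on-axis focus.
    rf: (n_rings, n_samples) received RF data; ring_radii_m: ring radii.
    For each depth z, ring i is delayed by its extra path sqrt(z^2 + r^2) - z."""
    n_rings, n_samples = rf.shape
    t = np.arange(n_samples) / fs
    z = c * t / 2.0                                   # two-way depth per sample
    out = np.zeros(n_samples)
    for i, r in enumerate(ring_radii_m):
        extra = (np.sqrt(z**2 + r**2) - z) / c        # extra receive delay (s)
        idx = np.round((t + extra) * fs).astype(int)
        valid = idx < n_samples
        out[valid] += rf[i, idx[valid]]
    return out / n_rings
```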
Computed Tomography Window Blending: Feasibility in Thoracic Trauma.
Mandell, Jacob C; Wortman, Jeremy R; Rocha, Tatiana C; Folio, Les R; Andriole, Katherine P; Khurana, Bharti
2018-02-07
This study aims to demonstrate the feasibility of processing computed tomography (CT) images with a custom window-blending algorithm that combines soft-tissue, bone, and lung window settings into a single image; to compare the time for interpretation of chest CT for thoracic trauma with window blending and conventional window settings; and to assess the diagnostic performance of both techniques. Adobe Photoshop was scripted to apply the window-blending algorithm to axial DICOM images from retrospective contrast-enhanced chest CTs performed for trauma. Two emergency radiologists independently interpreted the axial images from 103 chest CTs with both blended and conventional windows. Interpretation time and diagnostic performance were compared with the Wilcoxon signed-rank test and McNemar test, respectively. Agreement with NEXUS CT Chest injury severity was assessed with the weighted kappa statistic. A total of 13,295 images were processed without error. Interpretation was faster with window blending, resulting in a 20.3% time saving (P < .001), with no difference in diagnostic performance within the power of the study to detect a difference in sensitivity of 5%, as determined by post hoc power analysis. The sensitivity of the window-blended cases was 82.7%, compared with 81.6% for conventional windows. The specificity of the window-blended cases was 93.1%, compared with 90.5% for conventional windows. All injuries of major clinical significance (per NEXUS CT Chest criteria) were correctly identified in all reading sessions, and all negative cases were correctly classified. All readers demonstrated near-perfect agreement with injury severity classification under both window settings. In this pilot study utilizing retrospective data, window blending allowed faster preliminary interpretation of axial chest CT performed for trauma, with no significant difference in diagnostic performance compared with conventional window settings. Future studies would be required to assess the utility of window blending in clinical practice. Copyright © 2018 The Association of University Radiologists. All rights reserved.
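The exact blending recipe is not given in the abstract; as a plausible minimal sketch only (not the authors' algorithm), the code below maps Hounsfield units through three common window settings and averages the layers. The window values and the equal weighting are assumptions.

```python
import numpy as np

def apply_window(hu, center, width):
    """Map Hounsfield units to [0, 1] under a window center/width."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def blend_windows(hu, settings=((50, 400), (500, 2000), (-600, 1500))):
    """Blend soft-tissue, bone, and lung windows into one image.

    `settings` are common (center, width) pairs; the paper's actual
    blending may weight or combine the layers differently.
    """
    layers = [apply_window(hu, c, w) for c, w in settings]
    return np.mean(layers, axis=0)
```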
Vest, Joshua R; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B
2015-12-01
Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004-2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = -0.17; 95% confidence interval [CI] = [-0.25, -0.09]; P < .001). However, image sharing technology was associated with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low cost. Imaging simulation for a satellite-mounted TDI-CCD comprises four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation caused by the TDI-CCD electronics together with re-sampling, and 4) data integration. Processes 1) to 3) utilize diverse data-intensive algorithms such as FFT, convolution, and Lagrange interpolation, which require substantial CPU power. Even with an Intel Xeon X5550 processor, a conventional serial implementation takes more than 30 hours for a simulation whose result image is 1500 × 1462 pixels. A literature review found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF [1]; it uses a client/server (C/S) architecture and harnesses idle CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to the free computing capacity, achieving HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, the framework reduced simulation time by about 74%, and adding asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide nearly unbounded computation capacity, provided the network and the task-management server can keep pace, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.
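A minimal sketch of the strategy-pattern idea, in Python rather than the authors' WCF stack; the stage names and placeholder algorithms are illustrative only:

```python
from typing import Callable, Dict
import numpy as np

# Each degradation stage is a pluggable strategy: image in, image out.
Strategy = Callable[[np.ndarray], np.ndarray]

class SimulationPipeline:
    def __init__(self):
        self.stages: Dict[str, Strategy] = {}

    def register(self, name: str, strategy: Strategy) -> None:
        self.stages[name] = strategy  # swap algorithms without code changes

    def run(self, image: np.ndarray) -> np.ndarray:
        # The distributed version would push each stage to an idle LAN node.
        for name in ("atmosphere", "optics", "tdi_ccd"):
            image = self.stages[name](image)
        return image

pipeline = SimulationPipeline()
pipeline.register("atmosphere", lambda im: im)          # placeholder MTF model
pipeline.register("optics", lambda im: im)              # e.g., FFT-based PSF blur
pipeline.register("tdi_ccd", lambda im: im[::2, ::2])   # toy re-sampling stage
result = pipeline.run(np.ones((1500, 1462)))
```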
Intraluminal laser atherectomy with ultrasound and electromagnetic guidance
NASA Astrophysics Data System (ADS)
Gregory, Kenton W.; Aretz, H. Thomas; Martinelli, Michael A.; LeDet, Earl G.; Hatch, G. F.; Gregg, Richard E.; Sedlacek, Tomas; Haase, Wayne C.
1991-05-01
The Magellan™ coronary laser atherectomy system is described. It uses high-resolution ultrasound imaging and electromagnetic sensing to provide real-time guidance and control of laser therapy in the coronary arteries. The system consists of a flexible catheter, an electromagnetic navigation antenna, a sensor signal processor and a computer for image processing and display. The small, flexible catheter combines an ultrasound transducer and laser delivery optics, aimed at the artery wall, and an electromagnetic receiving sensor. An extra-corporeal electromagnetic transmit antenna, in combination with catheter sensors, locates the position of the ultrasound and laser beams in the artery. Navigation and ultrasound data are processed electronically to produce real-time, transverse, and axial cross-section images of the artery wall at selected locations. By exploiting the ability of ultrasound to image beneath the surface of artery walls, it is possible to identify candidate treatment sites and perform safe radial laser debulking of atherosclerotic plaque with reduced danger of perforation. The utility of the system in plaque identification and ablation is demonstrated with imaging and experimental results.
Automatic crack detection and classification method for subway tunnel safety monitoring.
Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun
2014-10-16
Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Together with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final binary output images. The proposed approach was tested on safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
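The full descriptor and classifier are beyond an abstract-level sketch, but the first stage, segmenting locally dark crack candidates with morphology and thresholding, can be illustrated as below; the structuring-element size and threshold factor are assumptions, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

def dark_region_candidates(gray, selem_size=15, k=3.0):
    """Segment locally dark regions (crack candidates) in a gray image.

    The black-hat transform (closing minus image) highlights thin dark
    structures; thresholding at mean + k*std keeps the strongest ones.
    """
    structure = np.ones((selem_size, selem_size))
    closed = ndimage.grey_closing(gray.astype(float), footprint=structure)
    blackhat = closed - gray
    mask = blackhat > blackhat.mean() + k * blackhat.std()
    labels, n = ndimage.label(mask)   # connected candidate regions
    return labels, n
```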
Using a Smartphone Camera for Nanosatellite Attitude Determination
NASA Astrophysics Data System (ADS)
Shimmin, R.
2014-09-01
The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
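A minimal sketch of the threshold-and-centroid step described above; the pinhole-camera intrinsics and threshold value are illustrative assumptions:

```python
import numpy as np

def moon_vector(image, fx, fy, cx, cy, thresh=200):
    """Unit pointing vector toward the Moon from a grayscale frame.

    Bright pixels are thresholded, their centroid is taken, and the
    pixel coordinate is back-projected through a pinhole camera model
    (fx, fy: focal lengths in pixels; cx, cy: principal point).
    """
    ys, xs = np.nonzero(image >= thresh)
    if xs.size == 0:
        return None  # Moon not in frame
    u, v = xs.mean(), ys.mean()                      # centroid in pixels
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return ray / np.linalg.norm(ray)                 # camera-frame unit vector
```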
Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar
2018-01-15
Quality of saffron, a valuable food additive, can considerably affect consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin-layer chromatographic (TLC) patterns was introduced. This involves a series of image pre-processing steps on the TLC images, including compression, inversion, elimination of the general baseline (using asymmetric least squares (AsLS)), correction of spot shift and concavity (by correlation optimized warping (COW)), and finally conversion to RGB chromatograms. Subsequently, unsupervised multivariate data analysis, including principal component analysis (PCA) and k-means clustering, was utilized to investigate the effect of soil salinity, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography with diode array detection (HPLC-DAD). Accordingly, the saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
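A minimal sketch of the unsupervised stage (PCA followed by k-means) on rows of chromatogram features; the component and cluster counts are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_chromatograms(X, n_components=3, n_clusters=2):
    """Project chromatogram vectors with PCA, then cluster the scores."""
    scores = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
    return scores, labels

# X: (n_samples, n_features) matrix of concatenated R, G, B chromatograms
X = np.random.rand(20, 300)  # placeholder data for illustration
scores, labels = cluster_chromatograms(X)
```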
Software for Displaying Data from Planetary Rovers
NASA Technical Reports Server (NTRS)
Powell, Mark; Backes, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert
2003-01-01
Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.
NASA Astrophysics Data System (ADS)
Baca, Michael J.
1990-09-01
A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand and display a 3 deg sector of data. The time line for acquisition, processing and display has been analyzed and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods necessitating slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.
NASA Astrophysics Data System (ADS)
Lee, Jasper C.; Ma, Kevin C.; Liu, Brent J.
2008-03-01
A Data Grid for medical images has been developed at the Image Processing and Informatics Laboratory, USC, to provide distribution and fault-tolerant storage of medical imaging studies across Internet2 and the public domain. Although back-up policies and grid certificates guarantee privacy and authenticity of grid access points, there is still no method to guarantee that sensitive DICOM images have not been altered or corrupted during transmission across a public domain. This paper takes steps toward achieving full image transfer security within the Data Grid by utilizing DICOM image authentication and a HIPAA-compliant auditing system. The 3-D lossless digital signature embedding procedure involves a private 64-byte signature that is embedded into each original DICOM image volume; on the receiving end, the signature can be extracted and verified following the DICOM transmission. This digital signature method has also been developed at the IPILab. The HIPAA-Compliant Auditing System (H-CAS) is required to monitor embedding and verification events, and allows monitoring of other grid activity as well. The H-CAS system federates the logs of transmission and authentication events at each grid access point and stores them in a HIPAA-compliant database. The auditing toolkit is installed at the local grid access point and utilizes Syslog [1], a client-server standard for log messaging over an IP network, to send messages to the H-CAS centralized database. By integrating digital image signatures and centralized logging capabilities, DICOM image integrity within the Medical Imaging and Informatics Data Grid can be monitored and guaranteed without any loss of image quality.
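The 3-D lossless embedding procedure itself is not detailed in the abstract; purely to illustrate the sign-then-verify idea (not the authors' method), a 64-byte HMAC-SHA512 signature over an image volume could look like this, with a hypothetical shared key:

```python
import hashlib
import hmac

SECRET_KEY = b"grid-access-point-key"  # hypothetical shared secret

def sign_volume(pixel_bytes: bytes) -> bytes:
    """64-byte signature over the image volume (HMAC-SHA512)."""
    return hmac.new(SECRET_KEY, pixel_bytes, hashlib.sha512).digest()

def verify_volume(pixel_bytes: bytes, signature: bytes) -> bool:
    """Re-compute the signature after DICOM transfer and compare."""
    return hmac.compare_digest(sign_volume(pixel_bytes), signature)
```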
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications, including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of its imaging, image workflow, and post-processing, and the lack of algorithmic standards hindering comparability of results. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to community physicians' uncertainty about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires both clinical and technical understanding. Therefore, many institutions that perform fMRI maintain a team of basic researchers and physicians to operate fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and post-processing must be streamlined, standardized, and available at institutions that do not have these resources in-house. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (the Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.
The role of extra-foveal processing in 3D imaging
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2017-03-01
The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. The rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (the visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity, together with observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; and 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).
A DICOM-RT based ePR radiation therapy information system for managing brain tumor patients
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Law, Maria; Huang, H. K.; Zee, C. S.; Chan, Lawrence
2005-04-01
The need for comprehensive clinical image data and relevant information in image-guided radiation therapy (RT) is becoming steadily apparent. Multiple standalone systems utilizing the latest technological advancements in imaging, therapeutic radiation, and computerized treatment planning acquire key data during the RT treatment course of a patient. One example is patients treated for large, irregularly shaped brain tumors, whose treatment utilizes state-of-the-art RT technology to deliver pinpoint-accurate radiation doses. One such system, the Cyberknife, is a radiation treatment system that utilizes image-guided information to control a multi-jointed, six-degrees-of-freedom robotic arm to deliver precise, required radiation doses to the tumor site of a cancer patient. The image-guided system is capable of tracking lesion orientations with respect to the patient's position throughout the treatment process. This is done by correlating live radiographic images with pre-operative CT and MR imaging information to determine relative patient and tumor position repeatedly over the course of the treatment. The disparate and complex data generated by the Cyberknife system, along with related data, are scattered throughout the RT department, compromising an efficient clinical workflow, since data crucial for a clinical decision may be time-consuming to retrieve, temporarily missing, or even lost. To address these shortcomings, the ACR-NEMA Standards Committee extended its DICOM (Digital Imaging & Communications in Medicine) Standard from Radiology to RT by ratifying seven DICOM-RT objects starting in 1997. However, they are rarely used by the RT community in daily clinical operations. In the past, the research focus of RT departments has primarily been developing new protocols and devices to improve treatment processes and outcomes for cancer patients, with minimal effort dedicated to the integration of imaging and information systems. Our research, tightly coupling radiology and RT information systems, represents a new frontier for medical informatics research. By combining our past experience in medical imaging informatics, DICOM-RT expertise, and system integration, we propose to test the hypothesis, using a brain tumor case model, that a DICOM-RT electronic patient record (ePR) system can improve clinical workflow efficiency for the treatment and management of patients. This RT ePR system, integrated with clinical images and RT data, can impact the RT department in a similar fashion as PACS has already done for radiology. As a first step, the specific case of patients with brain tumors treated with the Cyberknife system will serve as the initial proof of concept for the research design, implementation, evaluation, and demonstration of clinical relevance.
Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.
2006-01-01
The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and creates challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits a connection between the USIV optimization problem and the protein structure prediction problem. Adopting the integer linear programming-based formulation of the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous approach, with a significant gain in the computation time of the algorithm. PMID:22291148
NASA Astrophysics Data System (ADS)
Ting, Samuel T.
The research presented in this work seeks to develop, validate, and deploy practical techniques for improving the diagnosis of cardiovascular disease. In the philosophy of biomedical engineering, we seek to identify an existing medical problem having significant societal and economic effects and address this problem using engineering approaches. Cardiovascular disease is the leading cause of mortality in the United States, accounting for more deaths than any other major cause of death in every year since 1900 with the exception of 1918. Cardiovascular disease is estimated to account for almost one-third of all deaths in the United States, with more than 2150 deaths each day, or roughly 1 death every 40 seconds. In the past several decades, a growing array of imaging modalities has proven useful in aiding the diagnosis and evaluation of cardiovascular disease, including computed tomography, single photon emission computed tomography, and echocardiography. In particular, cardiac magnetic resonance imaging is an excellent diagnostic tool that can provide within a single exam a high-quality evaluation of cardiac function, blood flow, perfusion, viability, and edema without the use of ionizing radiation. The scope of this work focuses on the application of engineering techniques for improving imaging using cardiac magnetic resonance, with the goal of improving the utility of this powerful imaging modality. Dynamic cine imaging, the capturing of movies of a single slice or volume within the heart or great-vessel region, is used in nearly every cardiac magnetic resonance imaging exam, and adequate evaluation of cardiac function and morphology for diagnosis and evaluation of cardiovascular disease depends heavily on both the spatial and temporal resolution and the image quality of the reconstructed cine images. This work focuses primarily on image reconstruction techniques utilized in cine imaging; however, the techniques discussed are also relevant to other dynamic and static imaging techniques based on cardiac magnetic resonance. Conventional segmented techniques for cardiac cine imaging require breath-holding as well as regular cardiac rhythm, and can be time-consuming to acquire. Inadequate breath-holding or irregular cardiac rhythm can result in completely non-diagnostic images, limiting the utility of these techniques in a significant patient population. Real-time single-shot cardiac cine imaging enables free-breathing acquisition with significantly shortened imaging time and promises to significantly improve the utility of cine imaging for diagnosis and evaluation of cardiovascular disease. However, the utility of real-time cine images depends heavily on the successful reconstruction of final cine images from undersampled data. Successful reconstruction from more highly undersampled data directly yields images with finer spatial and temporal resolution, provided that image quality is sufficient. This work focuses primarily on the development, validation, and deployment of practical techniques for enabling the reconstruction of real-time cardiac cine images at the spatial and temporal resolutions and image quality needed for diagnostic utility. Particular emphasis is placed on the development of reconstruction approaches with computation times short enough for use in the clinical environment.
Specifically, the use of compressed sensing signal recovery techniques is considered; such techniques show great promise in allowing successful reconstruction of highly undersampled data. The scope of this work concerns two primary topics related to signal recovery using compressed sensing: (1) the long reconstruction times of these techniques, and (2) improved sparsity models for signal recovery from more highly undersampled data. Both aspects are relevant to the practical application of compressed sensing in the context of improving the reconstruction of real-time cardiac cine images. First, algorithmic and implementation approaches are proposed for reducing the computation time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows practical and efficient deployment directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. It is shown that these techniques yield a holistic framework for achieving efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques from both a theoretical and a practical viewpoint is provided; both may be of interest to the reader in terms of future work.
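A minimal sketch of a FISTA iteration for the l1-regularized recovery described; the operators E/Eh (undersampled encoding and its adjoint, e.g., an undersampled FFT) and the Lipschitz constant L are assumed inputs, not the dissertation's actual implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||x||_1: shrink magnitudes, keep phase."""
    mag = np.abs(x)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * x, 0)

def fista(E, Eh, y, lam, L, n_iter=50):
    """Minimize 0.5*||E x - y||^2 + lam*||x||_1 with FISTA.

    E/Eh: forward and adjoint encoding operators; L: Lipschitz constant
    of the data-fidelity gradient (an upper bound on ||E||^2).
    """
    x = Eh(y)                 # zero-filled starting estimate
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = Eh(E(z) - y)                       # gradient step at z
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum update
        x, t = x_new, t_new
    return x
```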
Merged GLORIA sidescan and hydrosweep pseudo-sidescan: Processing and creation of digital mosaics
Bird, R.T.; Searle, R.C.; Paskevich, V.; Twichell, D.C.
1996-01-01
We have replaced the usual band of poor-quality data in the near-nadir region of our GLORIA long-range sidescan-sonar imagery with a shaded-relief image constructed from swath bathymetry data (collected simultaneously with GLORIA) which completely cover the nadir area. We have developed a technique to enhance these "pseudo-sidescan" images in order to mimic the neighbouring GLORIA backscatter intensities. As a result, the enhanced images greatly facilitate the geologic interpretation of the adjacent GLORIA data, and geologic features evident in the GLORIA data may be correlated with greater confidence across track. Features interpreted from the pseudo-sidescan may be extrapolated from the near-nadir region out into the GLORIA range where they may not have been recognized otherwise, and therefore the pseudo-sidescan can be used to ground-truth GLORIA interpretations. Creation of digital sidescan mosaics utilized an approach not previously used for GLORIA data. Pixels were correctly placed in cartographic space and the time required to complete a final mosaic was significantly reduced. Computer software for digital mapping and mosaic creation is incorporated into the newly-developed Woods Hole Image Processing System (WHIPS) which can process both low- and high-frequency sidescan, and can interchange data with the Mini Image Processing System (MIPS) most commonly used for GLORIA processing. These techniques are tested by creating digital mosaics of merged GLORIA sidescan and Hydrosweep pseudo-sidescan data from the vicinity of the Juan Fernandez microplate along the East Pacific Rise (EPR).
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data and information from multiple images of the same scene into a single image. Each of the source images may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that utilizes the discrete cosine transform (DCT) to combine the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images, with the best achievable quality and without distortion or loss of data. The DCT is considered efficient for image fusion. The proposed scheme is performed in five steps: (1) The RGB colour input image is split into its three channels, R, G, and B, for each source image. (2) The DCT is applied to each channel (R, G, and B). (3) Variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Each block of the R channel of one source image is compared with the corresponding blocks of the other source images based on the variance value, and the block with the maximum variance is selected as the block for the new image; this process is repeated for all channels of the source images. (5) The inverse DCT is applied to each fused channel to convert coefficient values back to pixel values, and the channels are then combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects, such as blurring or blocking artifacts, that reduce the quality of the fused image in the image fusion process. The proposed approach is evaluated using three measures: the average of Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of the proposed technique show good performance compared with older techniques. © 2016 Wiley Periodicals, Inc.
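A minimal sketch of steps (2)-(4) for a single channel, using SciPy's DCT; the border handling and dtype choices are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_channel(a, b, block=8):
    """Fuse two aligned single-channel images block by block.

    For each 8x8 block, keep the block whose DCT coefficients have the
    larger variance (taken as the more informative source).
    """
    out = a.astype(float).copy()   # uncovered border falls back to source a
    H, W = a.shape
    for i in range(0, H - H % block, block):
        for j in range(0, W - W % block, block):
            pa = a[i:i+block, j:j+block].astype(float)
            pb = b[i:i+block, j:j+block].astype(float)
            da, db = dctn(pa, norm="ortho"), dctn(pb, norm="ortho")
            chosen = da if da.var() >= db.var() else db
            out[i:i+block, j:j+block] = idctn(chosen, norm="ortho")
    return out
```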
NASA Astrophysics Data System (ADS)
Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long
2012-01-01
Underwater laser imaging detection is an effective method of detecting short-range underwater targets and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, underwater automatic target identification has received more and more attention and remains a research challenge in underwater optical imaging information processing. Today, underwater automatic target identification based on optical imaging is usually realized in software on digital circuits, whose algorithm realization and control are very flexible. However, optical imaging information consists of 2D or even 3D images, so the amount of information to process is large; purely digital electronic hardware therefore needs a long identification time and can hardly meet the demands of real-time identification. Parallel computer processing can improve identification speed, but it increases complexity, size, and power consumption. This paper applies optical correlation identification technology to underwater automatic target identification. Optical correlation identification utilizes the Fourier-transform property of a Fourier lens, which can accomplish the Fourier transform of image information on nanosecond timescales; optical free-space interconnection computation is parallel, high speed, high capacity, and high resolution, and, combined with the flexibility of computation and control offered by digital circuits, yields a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and present a MATLAB simulation program. Using single frames obtained by underwater range-gated laser imaging, and by identifying and locating targets at different positions, we improve the speed and orientation efficiency of target identification and preliminarily validate the feasibility of the method.
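Digitally, the correlation that a Fourier-lens correlator performs at light speed reduces to a product of spectra; a minimal sketch of that matched-filter simulation (in Python rather than the authors' MATLAB):

```python
import numpy as np

def correlate_fft(scene, template):
    """Cross-correlate a scene with a target template via the FFT.

    The correlation peak location estimates the target position; an
    optical correlator forms the same spectral product with a lens.
    """
    F_scene = np.fft.fft2(scene)
    F_templ = np.fft.fft2(template, s=scene.shape)  # zero-pad to scene size
    corr = np.fft.ifft2(F_scene * np.conj(F_templ)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak
```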
A PDA study management tool (SMT) utilizing wireless broadband and full DICOM viewing capability
NASA Astrophysics Data System (ADS)
Documet, Jorge; Liu, Brent; Zhou, Zheng; Huang, H. K.; Documet, Luis
2007-03-01
During the last 4 years, the IPI (Image Processing and Informatics) Laboratory has been developing a web-based Study Management Tool (SMT) application that allows radiologists, film librarians, and PACS-related (Picture Archiving and Communication System) users to dynamically and remotely perform query/retrieve operations in a PACS network. Using a regular PDA (Personal Digital Assistant), users can remotely query a PACS archive to distribute any study to an existing DICOM (Digital Imaging and Communications in Medicine) node. This application, which has proven convenient for managing study workflow [1, 2], has been extended to include DICOM viewing capability on the PDA. With this new feature, users can take a quick look at DICOM images, gaining mobility and convenience at the same time. In addition, we are extending this application to metropolitan-area wireless broadband networks. This feature requires smart phones that are capable of working as a PDA and have access to broadband wireless services. With the extension to wireless broadband technology and the preview of DICOM images, the Study Management Tool becomes an even more powerful tool for clinical workflow management.
Log-Gabor Weber descriptor for face recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Sang, Nong; Gao, Changxin
2015-09-01
The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or the phase information of the Log-Gabor transform is considered. However, the complementary effect obtained by combining magnitude and phase information simultaneously for an image-feature extraction problem has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) To fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level by utilizing a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.
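For reference, a minimal sketch of the radial Log-Gabor transfer function the descriptor builds on, using the common definition G(f) = exp(-(ln(f/f0))^2 / (2 (ln(sigma/f0))^2)); the parameter values are illustrative:

```python
import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.55):
    """Radial Log-Gabor transfer function on an FFT frequency grid.

    sigma_ratio = sigma/f0 sets the bandwidth. The filter has no DC
    component, which suits gradually varying data such as face images.
    """
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0  # avoid log(0); the DC term is zeroed below
    G = np.exp(-np.log(f / f0) ** 2 / (2.0 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0
    return G

# Filtering an image: multiply its FFT by G, then inverse-transform.
```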
Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao
2014-01-01
An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain reconstruction quality. A three-dimensional dictionary trains its atoms on blocks, which can utilize the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating, respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, making the procedure time-consuming. In this paper, we first utilize the NVIDIA Corporation's compute unified device architecture (CUDA) programming model to design parallel algorithms on a graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations target the dictionary learning algorithm and the image updating part, namely the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. We then develop another version of the CUDA code with further algorithmic optimization. Experimental results show that a speedup of more than 324 times is achieved compared with the CPU-only code when the number of MRI slices is 24.
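As a CPU-side reference for the sparse-coding step named above, a minimal sketch of orthogonal matching pursuit (OMP); the GPU version parallelizes the same projections and least-squares updates:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: sparse code of y over dictionary D.

    D: (n_features, n_atoms) with unit-norm columns. Returns x with at
    most n_nonzero entries such that D @ x approximates y.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(k)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```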
An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.
2015-01-01
The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through a joint effort of NASA and the USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure the debris fragments. By automating the entire process, the measurement results are made repeatable, and the human factor associated with calipers and 3D scanning is eliminated. Unlike calipers, the imaging system obtains non-contact measurements that avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of a debris fragment: it can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared with hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured, and a custom-developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
Deep learning with convolutional neural network in radiology.
Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu
2018-04-01
Deep learning with a convolutional neural network (CNN) has been gaining attention recently for its high performance in image recognition. With this technique, images themselves can be utilized in the learning process, and feature extraction in advance of learning is not required; important features can be learned automatically. Thanks to developments in hardware and software, in addition to techniques regarding deep learning, application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated. We also describe some advanced topics in deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
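As an illustration of the kind of network the article discusses, a minimal CNN classifier sketch in PyTorch; the architecture, input size, and class count are arbitrary assumptions for a toy lesion-classification setup, not the article's models:

```python
import torch
import torch.nn as nn

class TinyRadiologyCNN(nn.Module):
    """Two conv blocks plus a linear head; features are learned, not hand-crafted."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 input

    def forward(self, x):              # x: (batch, 1, 64, 64) grayscale patches
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TinyRadiologyCNN()
logits = model(torch.randn(4, 1, 64, 64))  # toy batch for a shape check
```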
Photocontrollable Fluorescent Proteins for Superresolution Imaging
Shcherbakova, Daria M.; Sengupta, Prabuddha; Lippincott-Schwartz, Jennifer; Verkhusha, Vladislav V.
2014-01-01
Superresolution fluorescence microscopy permits the study of biological processes at scales small enough to visualize fine subcellular structures that are unresolvable by traditional diffraction-limited light microscopy. Many superresolution techniques, including those applicable to live cell imaging, utilize genetically encoded photocontrollable fluorescent proteins. The fluorescence of these proteins can be controlled by light of specific wavelengths. In this review, we discuss the biochemical and photophysical properties of photocontrollable fluorescent proteins that are relevant to their use in superresolution microscopy. We then describe the recently developed photoactivatable, photoswitchable, and reversibly photoswitchable fluorescent proteins, and we detail their particular usefulness in single-molecule localization–based and nonlinear ensemble–based superresolution techniques. Finally, we discuss recent applications of photocontrollable proteins in superresolution imaging, as well as how these applications help to clarify properties of intracellular structures and processes that are relevant to cell and developmental biology, neuroscience, cancer biology and biomedicine. PMID:24895855
Near-infrared image formation and processing for the extraction of hand veins
NASA Astrophysics Data System (ADS)
Bouzida, Nabila; Hakim Bendada, Abdel; Maldague, Xavier P.
2010-10-01
The main objective of this work is to extract the hand vein network using a non-invasive technique in the near-infrared (NIR) region. The visualization of the veins is based on a relevant feature of blood in relation to certain wavelengths of the electromagnetic spectrum. In the present paper, we first introduce image formation in the NIR spectral band. The acquisition system is then presented, as well as the method used for image processing in order to extract the vein signature. Extraction of this pattern from the finger, the wrist, and the dorsal hand is achieved after exposing the hand to optical stimulation by reflection or transmission of light. We present meaningful results of the extracted vein patterns, demonstrating the utility of the method for clinical applications such as the diagnosis of vein disease and primitive varicose veins, as well as for vein biometrics.
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates iso-density points (isopoints) first. After building a coarse initial mesh approximating the ideal isosurface via a cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be of better quality than that of the MC algorithm. Experiments show the method to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361
Next Generation UAS Based Spectral Systems for Environmental Monitoring
NASA Technical Reports Server (NTRS)
Campbell, P.; Townsend, P.; Mandl, D.; Kingdon, C.; Ly, V.; Sohlberg, R.; Corp, L.; Cappelaere, P.; Frye, S.; Handy, M.;
2015-01-01
This presentation provides information on the development of a small Unmanned Aerial System (UAS) with a low-power, high-performance Intelligent Payload Module (IPM) and a hyperspectral imager to enable intelligent gathering of science-grade vegetation data over agricultural fields at about 150 ft. The IPM performs real-time processing of the image data and then directs the navigation system to move the UAS to locations where measurements are optimal for science. This is important because a small UAS typically has about 30 minutes of battery power, so over large agricultural fields, efficient resource utilization matters. The key innovation is the shrinking of the IPM and its cross-communication with the navigation software, allowing the data processing to interact with desired waypoints while using field-programmable gate arrays to achieve high performance on the large data volumes produced by the hyperspectral imager.
Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries
Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.
2012-01-01
Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in patients within the first 24 hours after injury. It is very time-consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of the segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433
Optical to optical interface device
NASA Technical Reports Server (NTRS)
Oliver, D. S.; Vohl, P.; Nisenson, P.
1972-01-01
The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as an input a scene illuminated by a noncoherent radiation source and providing as an output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently readout optically by utilizing the electrostatic storage pattern to control an electro-optic light modulating property of the PROM. The readout process is parallel as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel J.; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Sarunic, Marinko V.; Verhaegen, Michel; Jian, Yifan
2017-02-01
Optical Coherence Tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system suited to the clinical environment. A limitation of the performance and utilization of OCT systems has been the lateral resolution. Through the combination of wavefront sensorless adaptive optics (WSAO) with dual variable optical elements, we present a compact lens-based OCT system that is capable of imaging the photoreceptor mosaic. We utilized a commercially available variable focal length lens to correct for the wide range of defocus commonly found in patient eyes, and a multi-actuator adaptive lens, after linearization of the hysteresis in its piezoelectric actuators, for aberration correction to obtain near diffraction-limited imaging of the retina. A parallel-processing computational platform permitted real-time image acquisition and display. The Data-based Online Nonlinear Extremum seeker (DONE) algorithm was used for real-time optimization of the wavefront sensorless adaptive optics OCT, and its performance was compared with a coordinate search algorithm. Cross-sectional images of the retinal layers and en face images of the cone photoreceptor mosaic acquired in vivo from research volunteers before and after WSAO optimization are presented. Applying the DONE algorithm in vivo for wavefront sensorless AO-OCT demonstrates that it succeeds in drastically improving the signal while achieving a computational time of 1 ms per iteration, making it applicable for high-speed real-time applications.
Radiation Protection of the Child from Diagnostic Imaging.
Leung, Rebecca S
2015-01-01
In recent years due to the technological advances in imaging techniques, which have undoubtedly improved diagnostic accuracy and resulted in improved patient care, the utilization of ionizing radiation in diagnostic imaging has significantly increased. Computed tomography is the major contributor to the radiation burden, but fluoroscopy continues to be a mainstay in paediatric radiology. The rise in the use of ionizing radiation is of particular concern with regard to the paediatric population, as they are up to 10 times more sensitive to the effects of radiation than adults, due to their increased tissue radiosensitivity, increased cumulative lifetime radiation dose and longer lifetime in which to manifest the effects. This article will review the estimated radiation risk to the child from diagnostic imaging and summarise the various methods through which both the paediatrician and radiologist can practice the ALARA (As Low As Reasonably Achievable) principle, which underpins the safe practice of radiology. Emphasis is on the justification for an examination, i.e. weighing of benefits versus radiation risk, on the appropriate utilization of other, non-ionizing imaging modalities such as ultrasound and magnetic resonance imaging, and on optimisation of a clinically indicated examination. It is essential that the paediatrician and radiologist work together in this decision making process for the mutual benefit of the patient. The appropriate practical application of ALARA in the workplace is crucial to the radiation safety of our paediatric patients.
Laser scanning endoscope for diagnostic medicine
NASA Astrophysics Data System (ADS)
Ouimette, Donald R.; Nudelman, Sol; Spackman, Thomas; Zaccheo, Scott
1990-07-01
A new type of endoscope is being developed which utilizes an optical raster scanning system for imaging through an endoscope. The optical raster scanner utilizes a high-speed, multifaceted, rotating polygon mirror system for horizontal deflection, and a slower galvanometer-driven mirror as the vertical deflection system. Used in combination, the optical raster scanner traces out a raster similar to the electron beam raster used in television systems. This flying spot of light can then be detected by various types of photosensitive detectors to generate a video image of the surface or scene being illuminated by the scanning beam. The optical raster scanner has been coupled to an endoscope. The raster is projected down the endoscope, thereby illuminating the object to be imaged at the distal end of the endoscope. Elemental photodetectors are placed at the distal or proximal end of the endoscope to detect the reflected illumination from the flying spot of light. This time-sequenced signal is captured by an image processor for display and processing. This technique offers the possibility of very small diameter endoscopes, since illumination channel requirements are eliminated. Using various lasers, very specific spectral selectivity can be achieved to optimize contrast of specific lesions of interest. Using several laser lines, or a white light source, with detectors of specific spectral response, multiple spectrally selected images can be acquired simultaneously. Co-linear therapy delivery while imaging is also possible.
Compound image segmentation of published biomedical figures.
Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit
2018-04-01
Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and bio-databases communities to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing the images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available at https://www.eecis.udel.edu/~compbio/FigSplit; the code is available upon request (shatkay@udel.edu). Supplementary data are available at Bioinformatics online.
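Connected Component Analysis, the core technique FigSplit is built on, can be illustrated in a few lines of Python. The sketch below is a deliberately simplified stand-in (it has none of FigSplit's quality assessment or re-segmentation steps): it labels non-white regions of a grayscale compound figure and returns the bounding box of each sufficiently large region; the threshold and minimum area are assumed values.

```python
import numpy as np
from scipy import ndimage

def split_panels(img, white_thresh=240, min_area=5000):
    """Simplified connected-component panel splitting: treat near-white pixels as
    background, label the remaining connected regions, and return the bounding
    boxes of the large ones as (row, col, height, width)."""
    mask = img < white_thresh                      # non-white content
    labels, n = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes

# Toy compound figure: two dark panels on a white canvas.
canvas = np.full((200, 300), 255, dtype=np.uint8)
canvas[10:90, 10:140] = 50
canvas[110:190, 160:290] = 80
print(split_panels(canvas))  # -> two bounding boxes
```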
Fast semivariogram computation using FPGA architectures
NASA Astrophysics Data System (ADS)
Lagadapati, Yamuna; Shirvaikar, Mukul; Dong, Xuanliang
2015-02-01
The semivariogram is a statistical measure of the spatial distribution of data and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariogram is a plot of semivariances for different lag distances between pixels. A semivariance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations separated by a lag distance h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs operate at relatively modest clock rates measured in a few hundreds of megahertz, but they can perform tens of thousands of calculations per clock cycle while operating at low power. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. The design consists of several modules dedicated to the constituent computational tasks. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. Anisotropic semivariogram implementation is anticipated to be an extension of the current architecture, ostensibly based on refinements to the current modules. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from MRI scans are utilized for the experiments. Computational speedup is measured with respect to a Matlab implementation on a personal computer with an Intel i7 multi-core processor. Preliminary simulation results indicate that a significant advantage in speed can be attained by the architectures, making the algorithm viable for implementation in medical devices.
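The definition gamma(h) = 0.5 * E[(Z(x) - Z(x+h))^2] quoted above can be made concrete with a short CPU reference implementation. The sketch below computes semivariances along the horizontal axis only, which sidesteps the all-pairs O(n²) cost the FPGA design addresses; it is a didactic baseline on synthetic data, not the hardware algorithm.

```python
import numpy as np

def semivariogram(img, max_lag):
    """Semivariance along the horizontal axis for lags 1..max_lag:
    gamma(h) = 0.5 * mean((Z(x) - Z(x+h))^2) over all horizontal pixel pairs."""
    img = img.astype(np.float64)
    gammas = []
    for h in range(1, max_lag + 1):
        diff = img[:, h:] - img[:, :-h]            # all horizontal pairs at lag h
        gammas.append(0.5 * np.mean(diff ** 2))
    return np.array(gammas)

# Toy image window standing in for an MRI sub-image.
window = np.random.default_rng(1).normal(100, 10, (64, 64))
print(semivariogram(window, max_lag=8))
```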
Automated x-ray/light field congruence using the LINAC EPID panel.
Polak, Wojciech; O'Doherty, Jim; Jones, Matt
2013-03-01
X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test, thereby reducing overall cost, processing, and analysis time, removing operator dependency, and removing the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom in-house designed jig and automatic image processing software, allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted with semiautomatic processing by four independent operators, due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm, compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary by as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower-resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time-efficient, cost-effective, and at least as accurate as the gold standard radiographic film. Additionally, congruence testing can be easily performed for all four cardinal gantry angles, which can be difficult when using radiographic film. Therefore, the authors propose it can be used as an alternative to the radiographic film method, allowing decommissioning of the film processor.
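The basic measurement underlying such congruence tests, locating the radiation field edges at 50% of the maximum profile intensity, is easy to sketch. The code below is a hypothetical illustration of that convention on a synthetic EPID profile; the authors' software additionally detects the jig markers and the light-field edges, which are omitted here.

```python
import numpy as np

def field_width_mm(profile, pixel_mm):
    """Locate the 50%-of-maximum crossings of a 1-D EPID profile and return
    the field width in mm (the usual field-size convention)."""
    half = 0.5 * profile.max()
    idx = np.flatnonzero(profile >= half)
    return (idx[-1] - idx[0]) * pixel_mm

# Toy profile: a flat 10 cm field sampled at 0.4 mm/pixel with soft penumbra.
x = np.arange(512)
profile = 1.0 / (1.0 + np.exp(-(x - 131) / 3)) * (1.0 / (1.0 + np.exp((x - 381) / 3)))
print(round(field_width_mm(profile, pixel_mm=0.4), 1), "mm")  # approximately 100 mm
```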
Gaddis, L.R.; Kirk, R.L.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Barrett, J.; Becker, K.; Decker, T.; Blue, J.; Cook, D.; Eliason, E.; Hare, T.; Howington-Kraus, E.; Isbell, C.; Lee, E.M.; Redding, B.; Sucharski, R.; Sucharski, T.; Smith, P.H.; Britt, D.T.
1999-01-01
The Imager for Mars Pathfinder (IMP) acquired more than 16,000 images and provided panoramic views of the surface of Mars at the Mars Pathfinder landing site in Ares Vallis. This paper describes the stereoscopic, multispectral IMP imaging sequences and focuses on their use for digital mapping of the landing site and for deriving cartographic products to support science applications of these data. Two-dimensional cartographic processing of IMP data, as performed via techniques and specialized software developed for ISIS (the U.S. Geological Survey image processing software package), is emphasized. Cartographic processing of IMP data includes ingestion, radiometric correction, establishment of geometric control, coregistration of multiple bands, reprojection, and mosaicking. Photogrammetric processing, an integral part of this cartographic work which utilizes the three-dimensional character of the IMP data, supplements standard processing with geometric control and topographic information [Kirk et al., this issue]. Both cartographic and photogrammetric processing are required for producing seamless image mosaics and for coregistering the multispectral IMP data. Final, controlled IMP cartographic products include spectral cubes, panoramic (360° azimuthal coverage) and planimetric (top view) maps, and topographic data, to be archived on four CD-ROM volumes. Uncontrolled and semicontrolled versions of these products were used to support geologic characterization of the landing site during the nominal and extended missions. Controlled products have allowed determination of the topography of the landing site and environs out to ~60 m, and these data have been used to unravel the history of large- and small-scale geologic processes which shaped the observed landing site. We conclude by summarizing several lessons learned from cartographic processing of IMP data. Copyright 1999 by the American Geophysical Union.
Development of image processing techniques for applications in flow visualization and analysis
NASA Technical Reports Server (NTRS)
Disimile, Peter J.; Shoe, Bridget; Toy, Norman; Savory, Eric; Tahouri, Bahman
1991-01-01
A comparison between two flow visualization studies of an axisymmetric circular jet issuing into still fluid, using two different experimental techniques, is described. In the first case laser-induced fluorescence is used to visualize the flow structure, whilst smoke is utilized in the second. Quantitative information was obtained from these visualized flow regimes using two different digital imaging systems. Results are presented for the rate at which the jet expands in the downstream direction, and these compare favorably with more established data.
An infrared modular panoramic imaging objective
NASA Astrophysics Data System (ADS)
Palmer, Troy A.; Alexay, Christopher C.
2004-08-01
We describe the optical and mechanical design of an athermal infrared objective lens with an afocal anamorphic adapter. The lens presented consists of two modules: an athermal 25 mm F/2.3 mid-wave IR objective lens and an optional panoramic adapter. The adapter utilizes anamorphic lenses to provide unique image control, enabling an independent wide horizontal field of view while preserving the original narrow vertical field. We have designed, fabricated, and tested two such lenses. A summary of the assembly and testing process is also presented.
NASA Technical Reports Server (NTRS)
Olson, W. S.; Yeh, C. L.; Weinman, J. A.; Chin, R. T.
1985-01-01
A restoration of the 37, 21, 18, 10.7, and 6.6 GHz satellite imagery from the scanning multichannel microwave radiometer (SMMR) aboard Nimbus-7 to 22.2 km resolution is attempted using a deconvolution method based upon nonlinear programming. The images are deconvolved with and without the aid of prescribed constraints, which force the processed image to abide by partial a priori knowledge of the high-resolution result. The restored microwave imagery may be utilized to examine the distribution of precipitating liquid water in marine rain systems.
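A constrained deconvolution in the spirit of this abstract can be sketched in one dimension. The example below imposes a non-negativity constraint as a stand-in for the paper's a priori knowledge, solving the restoration as a non-negative least-squares problem; the kernel, signal, and constraint choice are all illustrative assumptions, and the actual SMMR restoration is two-dimensional.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_nonneg(y, kernel):
    """Constrained deconvolution of a 1-D signal: solve min ||K x - y||^2 with x >= 0,
    where K is the convolution matrix built from the (odd-length) blurring kernel."""
    n, k = len(y), len(kernel)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - k // 2), min(n, i + k // 2 + 1)):
            K[i, j] = kernel[j - i + k // 2]
    x, _ = nnls(K, y)
    return x

rng = np.random.default_rng(2)
truth = np.zeros(40); truth[[10, 25]] = [3.0, 1.5]          # two point sources
psf = np.array([0.25, 0.5, 1.0, 0.5, 0.25]); psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same") + rng.normal(0, 0.01, 40)
print(np.round(deconvolve_nonneg(blurred, psf), 2))          # peaks recovered near 10 and 25
```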
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
Video streaming technologies using ActiveX and LabVIEW
NASA Astrophysics Data System (ADS)
Panoiu, M.; Rat, C. L.; Panoiu, C.
2015-06-01
The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1][2], most ActiveX controls can only display the data and are incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few if any capabilities for video streaming, and the methods it does offer are usually not high performance; however, it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming alongside LabVIEW, capturing the data it provides for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.
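The attractor selection principle this system relies on can be conveyed with a toy two-attractor model: deterministic dynamics are gated by an "activity" signal, so a poorly performing agent is dominated by noise until it stumbles into a better attractor, at which point the dynamics lock it in. Everything below (the double-well drift, the activity rule, the constants) is an assumed minimal model for illustration, not the paper's VLSI design.

```python
import numpy as np

def attractor_selection(steps=3000, dt=0.05, noise=0.3, seed=3):
    """Minimal two-state attractor selection: drift pulls the state toward one of
    two attractors; when the currently selected attractor does not match the
    environment, activity drops and the noise term drives a random search."""
    rng = np.random.default_rng(seed)
    x = np.array([0.1, -0.1])
    target = 0                       # environment initially rewards attractor 0
    for t in range(steps):
        if t == steps // 2:
            target = 1               # environmental change: reward switches
        activity = 1.0 if int(np.argmax(x)) == target else 0.05
        drift = -(x ** 3 - x) - 0.5 * x[::-1]      # double well plus competition
        x = x + dt * activity * drift \
              + np.sqrt(dt) * noise * (1 - activity) * rng.normal(size=2)
    return int(np.argmax(x))

print("selected attractor:", attractor_selection())  # ideally 1 after the switch
```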
Freud, Erez; Avidan, Galia; Ganel, Tzvi
2015-02-01
Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information embedded in LSF, whereas HSF information may underlie the visual system's susceptibility to distortions in objects' spatial layouts.
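The LSF/HSF manipulation central to this study is, computationally, a spatial-frequency split. The sketch below performs a generic Gaussian low-pass/high-pass decomposition of an image; the cutoff (here a sigma of 4 pixels) is an arbitrary assumption, since the study's exact filter parameters are not reproduced in the abstract.

```python
import numpy as np
from scipy import ndimage

def split_spatial_frequencies(img, sigma=4.0):
    """Split an image into low- and high-spatial-frequency content: a Gaussian
    blur keeps the coarse shape (LSF), and the residual carries edges and fine
    detail (HSF)."""
    img = img.astype(np.float64)
    lsf = ndimage.gaussian_filter(img, sigma)
    hsf = img - lsf
    return lsf, hsf

img = np.random.default_rng(4).uniform(0, 255, (128, 128))   # stand-in stimulus
lsf, hsf = split_spatial_frequencies(img)
print(lsf.mean(), hsf.mean())      # the HSF residual averages near zero
```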
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
Fast ray-tracing of human eye optics on Graphics Processing Units.
Wei, Qi; Patkar, Saket; Pai, Dinesh K
2014-05-01
We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optical apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are difficult to represent precisely by analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
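The per-ray operation at the heart of such a simulator is refraction at an interface. The sketch below implements the vector form of Snell's law, the step that would be applied at each corneal and lenticular surface and parallelized across millions of rays on the GPU; the indices and geometry in the usage example are illustrative assumptions.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector Snell's law: refract unit direction d at a surface with unit normal n,
    going from refractive index n1 to n2. Returns None on total internal reflection."""
    d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
    r = n1 / n2
    cos_i = -np.dot(n, d)                       # normal assumed to face the ray
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                             # total internal reflection
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Air-to-cornea refraction of a ray 10 degrees off axis (n_cornea ~ 1.376).
d = np.array([np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))])
print(refract(d, np.array([0.0, 0.0, -1.0]), 1.0, 1.376))
```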
Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick
2017-04-01
Google Earth Engine (GEE) is a parallel geospatial processing platform which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In recent years, WebGL has become a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of the Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions; it adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale due to the significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify the development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
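As a CPU-side reference for the GPU-shader implementation described here, the snippet below runs the same SLIC superpixel decomposition using scikit-image and then computes per-superpixel mean colors, a common precursor to water/land classification. The test image and parameter values are placeholders, not the authors' configuration.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.data import astronaut

# SLIC superpixel decomposition (scikit-image's CPU implementation standing in
# for the WebGL version); a sample RGB image stands in for a satellite tile.
img = astronaut()
labels = slic(img, n_segments=400, compactness=10, start_label=0)
print(labels.shape, labels.max() + 1, "superpixels")

# Per-superpixel mean color, the first step toward classifying each region.
means = np.array([img[labels == i].mean(axis=0) for i in range(labels.max() + 1)])
print(means[:3])
```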
Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A
2016-07-01
Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
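The core measurement reported here, percent leaf area covered by lesions, reduces to masked thresholding. The sketch below is a stripped-down Python analogue of the published ImageJ macro (which additionally sizes, counts, and gray-values pycnidia); the threshold and the synthetic leaf are assumptions for illustration.

```python
import numpy as np

def percent_lesion_area(leaf_gray, leaf_mask, lesion_thresh=120):
    """Percent leaf area covered by lesions via gray-level thresholding:
    pixels inside the leaf mask darker than the threshold count as necrotic."""
    lesions = (leaf_gray < lesion_thresh) & leaf_mask
    return 100.0 * lesions.sum() / leaf_mask.sum()

rng = np.random.default_rng(5)
leaf = rng.normal(180, 10, (100, 300))                   # healthy tissue
leaf[40:60, 50:150] = rng.normal(90, 10, (20, 100))      # one dark lesion
mask = np.ones(leaf.shape, dtype=bool)                   # whole-image leaf mask
print(round(percent_lesion_area(leaf, mask), 1), "%")    # ~6.7% of the leaf
```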
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nabavizadeh, Nima, E-mail: nabaviza@ohsu.edu; Elliott, David A.; Chen, Yiyi
Purpose: To survey image guided radiation therapy (IGRT) practice patterns, as well as IGRT's impact on clinical workflow and planning treatment volumes (PTVs). Methods and Materials: A sample of 5979 treatment site-specific surveys was e-mailed to the membership of the American Society for Radiation Oncology (ASTRO), with questions pertaining to IGRT modality/frequency, PTV expansions, method of image verification, and perceived utility/value of IGRT. On-line image verification was defined as images obtained and reviewed by the physician before treatment. Off-line image verification was defined as images obtained before treatment and then reviewed by the physician before the next treatment. Results: Of 601 evaluable responses, 95% reported IGRT capabilities other than portal imaging. The majority (92%) used volumetric imaging (cone-beam CT [CBCT] or megavoltage CT), with volumetric imaging being the most commonly used modality for all sites except breast. The majority of respondents obtained daily CBCTs for head and neck intensity modulated radiation therapy (IMRT), lung 3-dimensional conformal radiation therapy or IMRT, anus or pelvis IMRT, prostate IMRT, and prostatic fossa IMRT. For all sites, on-line image verification was most frequently performed during the first few fractions only. No association was seen between IGRT frequency or CBCT utilization and clinical treatment volume to PTV expansions. Of the 208 academic radiation oncologists who reported working with residents, only 41% reported trainee involvement in IGRT verification processes. Conclusion: Consensus guidelines, further evidence-based approaches for PTV margin selection, and greater resident involvement are needed for standardized use of IGRT practices.
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding while maintaining high image quality, and is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficiency direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and a naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness required for printing applications.
Brain Imaging in Alzheimer Disease
Johnson, Keith A.; Fox, Nick C.; Sperling, Reisa A.; Klunk, William E.
2012-01-01
Imaging has played a variety of roles in the study of Alzheimer disease (AD) over the past four decades. Initially, computed tomography (CT) and then magnetic resonance imaging (MRI) were used diagnostically to rule out other causes of dementia. More recently, a variety of imaging modalities, including structural and functional MRI and positron emission tomography (PET) studies of cerebral metabolism with fluoro-deoxy-d-glucose (FDG) and amyloid tracers such as Pittsburgh Compound-B (PiB), have shown characteristic changes in the brains of patients with AD, and in prodromal and even presymptomatic states, that can help rule in the AD pathophysiological process. No one imaging modality can serve all purposes, as each has unique strengths and weaknesses. These modalities and their particular utilities are discussed in this article. The challenge for the future will be to combine imaging biomarkers to most efficiently facilitate diagnosis, disease staging, and, most importantly, development of effective disease-modifying therapies. PMID:22474610
Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder
NASA Astrophysics Data System (ADS)
August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian
2016-03-01
Spectroscopic imaging has proven to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, the demand for high spectral and spatial resolution makes it extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide-band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required using conventional systems.
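The recovery step in any such compressive imager is a sparse reconstruction from few measurements. The sketch below uses plain ISTA to solve the standard l1-regularized least-squares problem; the random sensing matrix stands in for the LC retarder's measured spectral modulation curves, and all dimensions are toy values rather than the MUSI system's.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1: gradient step on the data
    term followed by soft thresholding, with step size set by the Lipschitz
    constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

rng = np.random.default_rng(6)
n, m, k = 120, 40, 5                              # spectral bands, shots, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)          # stand-in modulation matrix
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)
x_hat = ista(A, A @ x_true, lam=0.02)
print("support recovered:", set(np.flatnonzero(np.abs(x_hat) > 0.5))
      == set(np.flatnonzero(x_true)))
```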
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end users.
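The thresholding-plus-watershed pipeline that AutoCellSeg builds on can be sketched with scikit-image. The example below is a bare-bones Python analogue (AutoCellSeg itself is MATLAB-based and adds multi-thresholding, plausibility feedback, and a GUI); the thresholds, distances, and synthetic plate image are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def count_colonies(img):
    """Otsu threshold for foreground, distance transform for markers, then a
    marker-controlled watershed to split touching colonies."""
    mask = img > threshold_otsu(img)
    dist = ndimage.distance_transform_edt(mask)
    peaks = peak_local_max(dist, min_distance=5, threshold_abs=1.0)
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=mask)
    return labels.max()

rng = np.random.default_rng(7)
img = rng.normal(20, 3, (120, 120))                        # noisy plate background
yy, xx = np.mgrid[:120, :120]
for cy, cx in [(30, 30), (30, 42), (80, 90)]:              # two close + one lone colony
    img += 100 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 30)
print(count_colonies(img), "colonies")
```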
Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System
Hosseini, Monireh Sheikh; Zekri, Maryam
2012-01-01
Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area of image classification and is expected to develop further in the future; automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification over the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, and the main advantages and drawbacks of this classifier, are investigated. PMID:23493054
[Usefulness of volume rendering stereo-movie in neurosurgical craniotomies].
Fukunaga, Tateya; Mokudai, Toshihiko; Fukuoka, Masaaki; Maeda, Tomonori; Yamamoto, Kouji; Yamanaka, Kozue; Minakuchi, Kiyomi; Miyake, Hirohisa; Moriki, Akihito; Uchida, Yasufumi
2007-12-20
In recent years, advances in MR technology combined with the development of multi-channel coils have substantially shortened inspection times. In addition, rapid improvement in workstation performance has simplified the image-making process. Consequently, graphical images of intra-cranial lesions can be easily created. For example, three-dimensional spoiled gradient echo (3D-SPGR) volume rendering (VR) after injection of a contrast medium is applied clinically as a preoperative reference image. Recently, improvements in 3D-SPGR VR resolution have enabled accurate surface images of the brain to be obtained. We used stereo imaging created by weighted maximum intensity projection (weighted MIP) to determine the skin incision line. Furthermore, the stereo imaging technique utilizing 3D-SPGR VR was used in the cases presented here. The techniques we report seem to be very useful for pre-operative simulation of neurosurgical craniotomies.
NASA Astrophysics Data System (ADS)
Awumah, A.; Mahanti, P.; Robinson, M. S.
2017-12-01
Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process: the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied; the fraction of spatial detail from the Pan that is used is determined by a control parameter whose value may be varied from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red=415nm, green=321/415nm, blue=321/360nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (1 to 10, after which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact from color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
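The controlled IHS injection can be written compactly: in Choi's tradeoff formulation, cited above as reference (2), the detail added to each band is (1 - 1/t)(Pan - I), so t = 1 adds nothing and t approaching infinity recovers plain IHS fusion. The sketch below assumes the MS image is already resampled to the Pan grid and uses synthetic data in place of LROC imagery.

```python
import numpy as np

def controlled_ihs(ms, pan, t=2.0):
    """Controlled IHS fusion with tradeoff parameter t: inject only a fraction
    of the Pan detail into each band. ms is (H, W, 3) on the Pan grid; pan is (H, W)."""
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=2)
    delta = (pan - intensity) * (1.0 - 1.0 / t)   # t -> infinity gives plain IHS
    return np.clip(ms + delta[..., None], 0, 255)

rng = np.random.default_rng(8)
ms = rng.uniform(40, 200, (64, 64, 3))
pan = ms.mean(axis=2) + rng.normal(0, 15, (64, 64))   # sharper stand-in Pan band
for t in (1.0, 2.0, 10.0):
    fused = controlled_ihs(ms, pan, t)
    print(f"t={t}: mean spectral shift {np.abs(fused - ms).mean():.2f}")
```

The printed spectral shift grows with t, mirroring the sharpness-versus-distortion trade-off the abstract describes.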
Web-based document image processing
NASA Astrophysics Data System (ADS)
Walker, Frank L.; Thoma, George R.
1999-12-01
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions implemented on it.
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and the information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal and access performance is curtailed. This paper provides details on a solution utilizing new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression method that can be used with MRF and provides very good lossless and controlled lossy compression.
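The key idea behind LERC, error-bounded quantization of raster values, is simple to demonstrate. The sketch below is a toy analogue, not the actual codec or its bitstream: choosing the quantization step as twice the error tolerance guarantees the stated maximum reconstruction error, and the resulting small integers are what compress well downstream.

```python
import numpy as np

def lerc_like_encode(block, max_error):
    """Toy error-bounded quantization: store values as integer multiples of a
    step chosen so the reconstruction error never exceeds max_error."""
    step = 2.0 * max_error
    offset = block.min()
    q = np.round((block - offset) / step).astype(np.int32)   # small ints compress well
    return offset, step, q

def lerc_like_decode(offset, step, q):
    return offset + q * step

rng = np.random.default_rng(12)
dem = rng.uniform(100, 110, (8, 8))                 # e.g., an elevation tile in meters
enc = lerc_like_encode(dem, max_error=0.05)
print(np.max(np.abs(lerc_like_decode(*enc) - dem)) <= 0.05)  # True: bound holds
```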
NASA Astrophysics Data System (ADS)
De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.
2016-05-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
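Note that the "3-sigma" metrics here are defined as percentiles rather than literal standard-deviation multiples. A minimal sketch of that computation, on assumed toy error data, is:

```python
import numpy as np

def metric_3sigma(errors):
    """IPATS-style '3-sigma' metric: the 99.73rd percentile of the absolute
    registration errors pooled over a 24-hour period (a percentile by the
    abstract's definition, not literally 3x the standard deviation)."""
    return np.percentile(np.abs(errors), 99.73)

day_of_nav_errors = np.random.default_rng(9).normal(0, 5.0, 100_000)  # toy data
print(f"NAV 3-sigma: {metric_3sigma(day_of_nav_errors):.2f}")          # ~15 for sigma=5
```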
NASA Astrophysics Data System (ADS)
Lee, I.-Chieh
Shoreline delineation and shoreline change detection are expensive processes in terms of data source acquisition and manual shoreline delineation. These costs confine the frequency and interval of shoreline mapping periods. In this dissertation, a new shoreline delineation approach was developed targeting lower data source cost and reduced human labor. To lower the cost of data sources, we used public domain LiDAR data sets and satellite images to delineate shorelines without the requirement that the data sets be acquired simultaneously, which is a new concept in this field. To reduce the labor cost, we made improvements in classifying LiDAR points and satellite images. Analyzing shadow relations with topography to improve satellite image classification performance is also a brand-new concept. The extracted shoreline of the proposed approach achieved an accuracy of 1.495 m RMSE, or 4.452 m at the 95% confidence level. Consequently, the proposed approach successfully lowers the cost and shortens the processing time, in other words, increases the shoreline mapping frequency with reasonable accuracy. However, the extracted shoreline may not compete with a shoreline extracted by aerial photogrammetric procedures in terms of accuracy; hence, this is a trade-off between cost and accuracy. This approach consists of three phases: first, a shoreline extraction procedure based mainly on LiDAR point cloud data with multispectral information from satellite images; second, an object-oriented shoreline extraction procedure that delineates the shoreline solely from satellite images, in this case WorldView-2 images; third, a shoreline integration procedure combining these two shorelines based on actual shoreline changes and physical terrain properties. The actual data source cost would come only from the acquisition of satellite images. Only two processes needed human attention. First, the shoreline within harbor areas needed to be manually connected; its length was less than 3% of the total shoreline length in our dataset. Second, the parameters for satellite image classification needed to be manually determined. The need for manpower was significantly less compared to ground surveying or aerial photogrammetry. The first phase of shoreline extraction utilized the Normalized Difference Vegetation Index (NDVI) and mean-shift segmentation on the coordinates (X, Y, Z) and attributes (multispectral bands from satellite images) of the LiDAR points to classify each LiDAR point as land or water surface. The boundary of the land points was then traced to create the shoreline. The second phase, shoreline extraction solely from satellite images, utilized spectrum, NDVI, and shadow analysis to classify the satellite images into classes. These classes were then refined by mean-shift segmentation on the panchromatic band. By tracing the boundary of the water surface, the shoreline was created. Since these two shorelines may represent different instances of the shoreline in time, changes between them were evaluated first. Then an independent scenario analysis and procedure were performed for the shoreline under each of three conditions: in the process of erosion, in the process of accretion, and remaining the same. With these three conditions, we could analyze the actual terrain type and correct the classification errors to obtain a more accurate shoreline. Meanwhile, methods of evaluating the quality of shorelines are also discussed.
The experiment showed that three indicators best represent the quality of the shoreline: (1) shoreline accuracy, (2) land area difference between the extracted shoreline and the ground truth shoreline, and (3) a bias factor from the shoreline quality metrics.
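The NDVI cue used in both extraction phases is a one-line computation. The sketch below shows the standard index and a bare water/land threshold on synthetic bands; the dissertation layers mean-shift segmentation and shadow analysis on top of this, and the threshold of 0 is an assumed starting point rather than the author's tuned value.

```python
import numpy as np

def ndvi_water_mask(nir, red, ndvi_thresh=0.0):
    """NDVI = (NIR - Red) / (NIR + Red); water strongly absorbs near-infrared,
    so pixels with NDVI below the threshold are labeled water."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    return ndvi < ndvi_thresh                      # True = water

rng = np.random.default_rng(10)
nir = np.concatenate([rng.uniform(5, 30, (50, 100)),     # water: low NIR
                      rng.uniform(80, 160, (50, 100))])  # vegetated land: high NIR
red = rng.uniform(30, 60, (100, 100))
mask = ndvi_water_mask(nir, red)
print("water fraction:", mask.mean())              # ~0.5 for this toy scene
```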
An Automatic Phase-Change Detection Technique for Colloidal Hard Sphere Suspensions
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth; Rogers, Richard B.
2005-01-01
Colloidal suspensions of monodisperse spheres are used as physical models of thermodynamic phase transitions and as precursors to photonic band gap materials. However, current image analysis techniques are not able to distinguish between densely packed phases within conventional microscope images, which are mainly characterized by degrees of randomness or order with similar grayscale value properties. Current techniques for identifying the phase boundaries involve manually identifying the phase transitions, which is very tedious and time consuming. We have developed an intelligent machine vision technique that automatically identifies colloidal phase boundaries. The algorithm utilizes intelligent image processing techniques that accurately identify and track phase changes vertically or horizontally for a sequence of colloidal hard sphere suspension images. This technique is readily adaptable to any imaging application where regions of interest are distinguished from the background by differing patterns of motion over time.
Invited Article: Digital beam-forming imaging riometer systems
NASA Astrophysics Data System (ADS)
Honary, Farideh; Marple, Steve R.; Barratt, Keith; Chapman, Peter; Grill, Martin; Nielsen, Erling
2011-03-01
The design and operation of a new generation of digital imaging riometer systems developed by Lancaster University are presented. In the heart of the digital imaging riometer is a field-programmable gate array (FPGA), which is used for the digital signal processing and digital beam forming, completely replacing the analog Butler matrices which have been used in previous designs. The reconfigurable nature of the FPGA has been exploited to produce tools for remote system testing and diagnosis which have proven extremely useful for operation in remote locations such as the Arctic and Antarctic. Different FPGA programs enable different instrument configurations, including a 4 × 4 antenna filled array (producing 4 × 4 beams), an 8 × 8 antenna filled array (producing 7 × 7 beams), and a Mills cross system utilizing 63 antennas producing 556 usable beams. The concept of using a Mills cross antenna array for riometry has been successfully demonstrated for the first time. The digital beam forming has been validated by comparing the received signal power from cosmic radio sources with results predicted from the theoretical beam radiation pattern. The performances of four digital imaging riometer systems are compared against each other and a traditional imaging riometer utilizing analog Butler matrices. The comparison shows that digital imaging riometer systems, with independent receivers for each antenna, can obtain much better measurement precision for filled arrays or much higher spatial resolution for the Mills cross configuration when compared to existing imaging riometer systems.
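Digital beam forming of the kind the FPGA performs reduces, per beam, to a phase-aligned weighted sum of the antenna channels. The sketch below forms a single beam for a 4 x 4 filled array at an assumed riometry frequency of 38.2 MHz; the array geometry, signal model, and noise level are all illustrative, not the Lancaster design.

```python
import numpy as np

def form_beam(samples, positions, direction, wavelength):
    """Narrowband digital beam forming: multiply each antenna's complex samples
    by the conjugate steering phase for the chosen look direction and average.
    samples is (time, antennas); positions is (antennas, 3) in meters."""
    k = 2 * np.pi / wavelength
    delays = positions @ direction                  # geometric path difference (m)
    weights = np.exp(-1j * k * delays)              # conjugate steering vector
    return samples @ weights / len(weights)

wavelength = 3e8 / 38.2e6                           # assumed riometry frequency
xy = np.array([(i, j) for i in range(4) for j in range(4)], float) * wavelength / 2
positions = np.hstack([xy, np.zeros((16, 1))])
direction = np.array([0.0, np.sin(np.radians(10)), np.cos(np.radians(10))])

# Plane wave arriving from that direction plus receiver noise, 1000 samples x 16 antennas.
rng = np.random.default_rng(11)
phase = np.exp(1j * 2 * np.pi / wavelength * (positions @ direction))
samples = np.outer(rng.normal(size=1000) + 1j * rng.normal(size=1000), phase)
samples += 0.1 * (rng.normal(size=(1000, 16)) + 1j * rng.normal(size=(1000, 16)))
beam = form_beam(samples, positions, direction, wavelength)
print("beam power:", np.mean(np.abs(beam) ** 2))    # approaches the per-antenna signal power
```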
MO-B-BRC-01: Introduction [Brachytherapy]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prisciandaro, J.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
MO-B-BRC-04: MRI-Based Prostate HDR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mourtada, F.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.