NASA Astrophysics Data System (ADS)
Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde
2018-03-01
This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values to accurately extract the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The proposed thresholding method is believed to exploit statistical properties of the image and to eliminate halo-induced errors. The triangular method and the relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters, including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q), on the width and width uniformity of printed lines were discussed. The results help to promote the NFES technique for fabricating high-resolution micro- and sub-micrometer lines and are also useful for optical image processing at the sub-micrometer scale.
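A minimal sketch (not the authors' code) of how the width and width uniformity of a printed line could be computed from a binarized optical image: the column-wise pixel count gives the width, and the relative standard deviation of those widths gives the uniformity. The pixel size and the synthetic test line are illustrative assumptions.

```python
import numpy as np

def line_width_and_rsd(binary_line, pixel_size_um=1.0):
    """binary_line: 2D boolean array, True where the printed line is.
    Width is measured column-by-column along the line axis (assumed horizontal)."""
    widths = binary_line.sum(axis=0).astype(float) * pixel_size_um  # width per column
    widths = widths[widths > 0]                                     # ignore empty columns
    mean_w = widths.mean()
    rsd = widths.std(ddof=1) / mean_w * 100.0                       # relative standard deviation, %
    return mean_w, rsd

# Example with a synthetic 5-pixel-wide line
img = np.zeros((50, 200), dtype=bool)
img[22:27, :] = True
print(line_width_and_rsd(img, pixel_size_um=0.1))
```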
Hyperspectral imaging for food processing automation
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.
2002-11-01
This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, and its potential application for real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line-scan camera with a prism-grating-prism spectrograph, fiber-optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically a band ratio of dual-wavelength (565 nm / 517 nm) images followed by thresholding, were effective in identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system comprising a common-aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated by the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time required for the multispectral images captured by the common-aperture camera was approximately 251 msec, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false-positive spots that cause system errors were also detected.
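A minimal sketch of the dual-wavelength band-ratio idea described above; the band indices and the ratio threshold are placeholders rather than the authors' calibrated settings.

```python
import numpy as np

def detect_contamination(cube, band_565, band_517, ratio_threshold=1.05):
    """cube: hyperspectral image of shape (rows, cols, bands).
    Returns a boolean mask of pixels whose 565/517 nm ratio exceeds the threshold."""
    eps = 1e-6                                         # avoid division by zero
    ratio = cube[:, :, band_565] / (cube[:, :, band_517] + eps)
    return ratio > ratio_threshold
```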
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
Real-time biscuit tile image segmentation method based on edge detection.
Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter
2018-05-01
In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use on a biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
On-line monitoring of fluid bed granulation by photometric imaging.
Soppela, Ira; Antikainen, Osmo; Sandler, Niklas; Yliruusi, Jouko
2014-11-01
This paper introduces and discusses a photometric surface imaging approach for on-line monitoring of fluid bed granulation. Five granule batches consisting of paracetamol and varying amounts of lactose and microcrystalline cellulose were manufactured with an instrumented fluid bed granulator. Photometric images and NIR spectra were continuously captured on-line, and particle size information was extracted from them. Key process parameters were also recorded. The images provided direct real-time information on the growth, attrition and packing behaviour of the batches. Moreover, decreasing image brightness in the drying phase was found to indicate granule drying. The changes observed in the image data were also linked to the moisture and temperature profiles of the processes. Combined with complementary process analytical tools, photometric imaging opens up possibilities for improved real-time evaluation of fluid bed granulation. Furthermore, the images can give valuable insight into the behaviour of excipients or formulations during product development. Copyright © 2014 Elsevier B.V. All rights reserved.
Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite
NASA Astrophysics Data System (ADS)
Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi
2018-05-01
The LAPAN-IPB Satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the images is an important factor for object classification in the remote sensing process. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to characterize its spectral response and the radiometric response of every pixel. Pre-flight test data acquired with a variety of line imager settings were used to establish the correlation between the input radiance and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model that accounts for the imager settings and radiometric characteristics. The modelling process, from the hardware level up to the normalized radiance formula, is presented and discussed in this paper.
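A small sketch assuming the common linear form of a pre-flight radiometric model, DN = gain x radiance + offset, fitted per pixel from integrating-sphere measurements at several radiance levels; the exact model used for the LAPAN-IPB imager may differ.

```python
import numpy as np

def fit_radiometric_model(radiance_levels, dn_measured):
    """radiance_levels: (n_levels,), dn_measured: (n_levels, n_pixels).
    Returns per-pixel gain and offset from a least-squares line fit."""
    A = np.vstack([radiance_levels, np.ones_like(radiance_levels)]).T
    coeffs, *_ = np.linalg.lstsq(A, dn_measured, rcond=None)
    gain, offset = coeffs                  # each of shape (n_pixels,)
    return gain, offset

def dn_to_radiance(dn, gain, offset):
    """Invert the linear model to recover radiance from a digital number."""
    return (dn - offset) / gain
```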
An image overall complexity evaluation method based on LSD line detection
NASA Astrophysics Data System (ADS)
Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo
2017-04-01
In the man-made world, both urban traffic roads and engineered buildings contain many linear features. Therefore, research on image complexity based on linear information has become an important direction in the digital image processing field. By detecting the straight-line information in an image and using straight lines as parameter indices, this paper establishes a quantitative and accurate mathematical relationship for image complexity. We use the LSD line detection algorithm, which has good straight-line detection performance, to detect the straight lines, and divide the detected lines according to an expert consultation strategy. A neural network is then used to train the weights and obtain the weight coefficients of the indices. The image complexity is calculated with the resulting complexity model. The experimental results show that the proposed method is effective. The number of straight lines in the image, their degree of dispersion, their uniformity and other factors all affect the complexity of the image.
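An illustrative sketch of the line-based complexity idea, using OpenCV's probabilistic Hough transform as a stand-in for the LSD detector and placeholder weights in place of the neural-network-trained coefficients described above.

```python
import cv2
import numpy as np

def line_complexity(gray, weights=(0.5, 0.3, 0.2)):
    """gray: 8-bit grayscale image. Returns a scalar complexity score."""
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, 60, minLineLength=20, maxLineGap=5)
    if segs is None:
        return 0.0
    segs = segs[:, 0]
    angles = np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0])
    lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    count_idx = len(segs) / 100.0                      # number of lines (scaled)
    dispersion_idx = np.std(angles)                    # orientation dispersion
    uniformity_idx = np.std(lengths) / (np.mean(lengths) + 1e-6)
    return float(np.dot(weights, (count_idx, dispersion_idx, uniformity_idx)))
```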
Comparison of line shortening assessed by aerial image and wafer measurements
NASA Astrophysics Data System (ADS)
Ziegler, Wolfram; Pforr, Rainer; Thiele, Joerg; Maurer, Wilhelm
1997-02-01
The increasing number of patterns per area and decreasing linewidths demand enhancement technologies for optical lithography. OPC, the correction of systematic non-linearity in the pattern transfer process by correction of design data, is one possibility to tighten process control and to increase the lifetime of existing lithographic equipment. The two most prominent proximity effects to be corrected by OPC are CD variation and line shortening. Line shortening measured on a wafer is up to two times larger than full resist simulation results. Therefore, the influence of mask geometry on line shortening is a key item in parameterizing lithography. This paper discusses the effect of adding small serifs to line ends in a 0.25-micrometer ground-rule design. For reticles produced on an ALTA 3000 with a standard wet etch process, the corner rounding on the mask can be reduced by adding serifs of a certain size. The corner rounding was measured and its effect on line shortening on the wafer was determined. This was investigated by resist measurements on the wafer, aerial image plus resist simulation, and aerial image measurements on the AIMS microscope.
A cost-effective line-based light-balancing technique using adaptive processing.
Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min
2006-09-01
Camera imaging systems are widely used; however, the displayed image often exhibits an uneven light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text image processing, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm can balance the light distribution while keeping high contrast in the image. For graphic image processing, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the approach applicable to real-time systems.
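A rough sketch of the adaptive light-balancing idea for text images: estimate the background level (here with a morphological closing) and apply a pixel-wise nonuniform gain so the background becomes flat while local contrast is preserved. The kernel size and target level are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

def balance_text_image(gray, kernel_size=31, target_level=220):
    """gray: 8-bit grayscale image with dark text on a light background."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    background = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)   # background estimate
    gain = target_level / np.maximum(background.astype(np.float32), 1.0)
    balanced = np.clip(gray.astype(np.float32) * gain, 0, 255)     # nonuniform gain per pixel
    return balanced.astype(np.uint8)
```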
Mehle, Andraž; Kitak, Domen; Podrekar, Gregor; Likar, Boštjan; Tomaževič, Dejan
2018-05-09
Agglomeration of pellets in fluidized bed coating processes is an undesirable phenomenon that affects the yield and quality of the product. Within the scope of PAT guidance, we present a system that utilizes visual imaging for in-line monitoring of the agglomeration degree. Seven pilot-scale Wurster coating processes were executed under various process conditions, providing a wide spectrum of process outcomes. Images of pellets were acquired during the coating processes in a contactless manner through an observation window of the coating apparatus. Efficient image analysis methods were developed for automatic recognition of discrete pellets and agglomerates in the acquired images. The in-line agglomeration degree trends revealed the agglomeration dynamics in distinct phases of the coating processes. We compared the in-line estimated agglomeration degree at the end point of each process to the results obtained by the off-line sieve analysis reference method. A strong positive correlation was obtained (coefficient of determination R² = 0.99), confirming the feasibility of the approach. The in-line estimated agglomeration degree enables early detection of agglomeration and provides a means for timely interventions to keep it within an acceptable range. Copyright © 2018 Elsevier B.V. All rights reserved.
In-Line Monitoring of a Pharmaceutical Pan Coating Process by Optical Coherence Tomography.
Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Buchsbaum, Andreas; Pescod, Russel; Baele, Thomas; Khinast, Johannes G
2015-08-01
This work demonstrates a new in-line measurement technique for monitoring the coating growth of randomly moving tablets in a pan coating process. In-line quality control is performed by an optical coherence tomography (OCT) sensor allowing nondestructive and contact-free acquisition of cross-section images of film coatings in real time. The coating thickness can be determined directly from these OCT images and no chemometric calibration models are required for quantification. Coating thickness measurements are extracted from the images by a fully automated algorithm. Results of the in-line measurements are validated using off-line OCT images, thickness calculations from tablet dimension measurements, and weight gain measurements. Validation measurements are performed on sample tablets periodically removed from the process during production. Reproducibility of the results is demonstrated by three batches produced under the same process conditions. OCT enables a multiple direct measurement of the coating thickness on individual tablets rather than providing the average coating thickness of a large number of tablets. This gives substantially more information about the coating quality, that is, intra- and intertablet coating variability, than standard quality control methods. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Line-by-line, real-time processing can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enable the algorithm to denoise the image line by line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
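A sketch of the incrementally-updated statistics idea: image and noise covariances are accumulated line by line so an MNF-like transform can be recomputed without waiting for the full image. The noise estimator used here (differencing neighbouring pixels along a line) is just one possible choice.

```python
import numpy as np

class RunningCovariance:
    def __init__(self, n_bands):
        self.n = 0
        self.sum = np.zeros(n_bands)
        self.outer = np.zeros((n_bands, n_bands))

    def update(self, samples):                 # samples: (n_pixels, n_bands)
        self.n += samples.shape[0]
        self.sum += samples.sum(axis=0)
        self.outer += samples.T @ samples

    def covariance(self):
        mean = self.sum / self.n
        return self.outer / self.n - np.outer(mean, mean)

def process_line(line, image_stats, noise_stats):
    """line: (n_pixels, n_bands) spectra of one scan line."""
    image_stats.update(line)
    noise_stats.update(np.diff(line, axis=0))  # crude along-line noise estimate
    # An MNF-like transform can now be derived from
    # noise_stats.covariance() and image_stats.covariance().
```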
Extraction of line properties based on direction fields.
Kutka, R; Stier, S
1996-01-01
The authors present a new set of algorithms for segmenting lines, mainly blood vessels in X-ray images, and extracting properties such as their intensities, diameters, and center lines. The authors developed a tracking algorithm that checks rules taking the properties of vessels into account. The tools even detect veins, arteries, or catheters of two pixels in diameter and with poor contrast. Compared with other algorithms, such as the Canny line detector or anisotropic diffusion, the authors extract a smoother and connected vessel tree without artifacts in the image background. As the tools depend on common intermediate results, they are very fast when used together. The authors' results will support the 3-D reconstruction of the vessel tree from stereoscopic projections. Moreover, the authors make use of their line intensity measure for enhancing and improving the visibility of vessels in 3-D X-ray images. The processed images are intended to support radiologists in diagnosis, radiation therapy planning, and surgical planning. Radiologists verified the improved quality of the processed images and the enhanced visibility of relevant details, particularly fine blood vessels.
Image Mosaic Method Based on SIFT Features of Line Segment
Zhu, Jun; Ren, Mingwu
2014-01-01
This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and similar differences between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate mismatched pairs in order to accomplish the image mosaic. Experimental results on four pairs of images show that our method is robust to differences in resolution, lighting, rotation, and scaling. PMID:24511326
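A simplified mosaicking sketch in the spirit of the method: detect keypoints, describe them with SIFT, match, and reject wrong pairs with RANSAC. Note that this uses OpenCV's standard SIFT keypoints rather than the directed line-segment descriptors proposed in the paper.

```python
import cv2
import numpy as np

def estimate_homography(img1, img2):
    """img1, img2: 8-bit grayscale images. Returns the 3x3 homography mapping img1 to img2."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)            # RANSAC removes wrong pairs
    return H
```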
Crop Row Detection in Maize Fields Inspired on the Human Visual Perception
Romeo, J.; Pajares, G.; Montalvo, M.; Guerrero, J. M.; Guijarro, M.; Ribeiro, A.
2012-01-01
This paper proposes a new method, oriented to real-time image processing, for identifying crop rows in images of maize fields. The vision system is designed to be installed onboard a mobile agricultural vehicle, and is therefore subject to gyrations, vibrations, and other undesired movements. The images are captured under perspective projection and are affected by these undesired effects. The image processing consists of two main processes: image segmentation and crop row detection. The first applies a threshold to separate green plants or pixels (crops and weeds) from the rest (soil, stones, and others). It is based on a fuzzy clustering process, which provides the threshold to be applied during normal operation. The crop row detection applies a method based on the image perspective projection that searches for the maximum accumulation of segmented green pixels along straight alignments; these determine the expected crop lines in the images. The method is robust enough to work under the above-mentioned undesired effects. It compares favorably against the well-tested Hough transform for line detection. PMID:22623899
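An illustrative sketch of the two stages described above: (1) segment green vegetation with a threshold on an excess-green index (a simple stand-in for the fuzzy-clustering threshold), and (2) accumulate segmented pixels along candidate alignments, here reduced to a plain column sum in place of the perspective-aware accumulation.

```python
import numpy as np

def segment_green(rgb, threshold=20):
    """rgb: (rows, cols, 3) uint8 image. Returns a boolean vegetation mask."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    exg = 2 * g - r - b                        # excess-green index
    return exg > threshold

def row_candidates(mask, n_rows=4):
    """Return the columns with the largest accumulation of green pixels."""
    column_hits = mask.sum(axis=0)             # accumulation per candidate alignment
    return np.argsort(column_hits)[-n_rows:]
```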
A hybrid algorithm for the segmentation of books in libraries
NASA Astrophysics Data System (ADS)
Hu, Zilong; Tang, Jinshan; Lei, Liang
2016-05-01
This paper proposes an algorithm for book segmentation in bookshelf images. The algorithm can be separated into three parts. The first part is pre-processing, aimed at eliminating or reducing the effects of image noise and illumination conditions. The second part is near-horizontal line detection based on the Canny edge detector, which separates a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected, and the obtained lines are used for book segmentation. The proposed algorithm was tested with bookshelf images taken in the OPIE library at MTU, and the experimental results demonstrate good performance.
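A sketch of the shelf-splitting step: detect edges with Canny, find near-horizontal lines with a probabilistic Hough transform, and use their vertical positions to cut the bookshelf image into per-shelf sub-images. The thresholds and angle tolerance are illustrative.

```python
import cv2
import numpy as np

def shelf_boundaries(gray, angle_tol_deg=5):
    """gray: 8-bit grayscale bookshelf image. Returns sorted row positions of shelf edges."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=gray.shape[1] // 2, maxLineGap=20)
    rows = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if abs(angle) < angle_tol_deg:         # keep near-horizontal lines only
                rows.append((y1 + y2) // 2)
    return sorted(rows)
```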
Wei, Ning; You, Jia; Friehs, Karl; Flaschel, Erwin; Nattkemper, Tim Wilhelm
2007-08-15
Fermentation industries would benefit from on-line monitoring of important parameters describing cell growth such as cell density and viability during fermentation processes. For this purpose, an in situ probe has been developed, which utilizes a dark field illumination unit to obtain high contrast images with an integrated CCD camera. To test the probe, brewer's yeast Saccharomyces cerevisiae is chosen as the target microorganism. Images of the yeast cells in the bioreactors are captured, processed, and analyzed automatically by means of mechatronics, image processing, and machine learning. Two support vector machine based classifiers are used for separating cells from background, and for distinguishing live from dead cells afterwards. The evaluation of the in situ experiments showed strong correlation between results obtained by the probe and those by widely accepted standard methods. Thus, the in situ probe has been proved to be a feasible device for on-line monitoring of both cell density and viability with high accuracy and stability. (c) 2007 Wiley Periodicals, Inc.
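A schematic of the two-stage SVM pipeline described above, written with scikit-learn; the features and training labels are random placeholders standing in for the image-derived features used by the probe.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder features and labels; in the real system these come from the image processing stage
X_region, y_is_cell = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
X_cell, y_is_live = rng.normal(size=(120, 8)), rng.integers(0, 2, 120)

cell_vs_background = SVC(kernel="rbf").fit(X_region, y_is_cell)   # stage 1: cell vs background
live_vs_dead = SVC(kernel="rbf").fit(X_cell, y_is_live)           # stage 2: live vs dead

def classify(region_feat, cell_feat):
    if cell_vs_background.predict([region_feat])[0] == 1:
        return "live" if live_vs_dead.predict([cell_feat])[0] == 1 else "dead"
    return "background"
```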
On-line 3-dimensional confocal imaging in vivo.
Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M
2000-09-01
In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, interactive features should help to improve the clinical utility of this technology.
NASA Astrophysics Data System (ADS)
Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław
2017-06-01
Crack width measurement is an important element of research on the progress of self-healing cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subjected to initial image processing in the ImageJ environment and further processing and analysis of the results. After registering a series of images of the cracks taken at different times using the SIFT (Scale-Invariant Feature Transform) method, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for determining the crack width. The distribution and rotation of the intersection lines in a regular layout, the automation of transformations, the management of images and brightness profiles, and the data analysis to determine the width of cracks and their changes over time are performed automatically by custom code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the ability to reduce the width of a sample crack and to close it fully within 28 days of the self-healing process.
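A sketch of the width-measurement step: sample the image brightness along a segment crossing the crack perpendicular to its local axis and take the width as the extent of the profile below a darkness threshold. The threshold, oversampling factor and calibration are illustrative, not the authors' values.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def crack_width_from_profile(gray, p0, p1, pixel_size_mm, dark_threshold=0.5):
    """gray: 2D float image in [0, 1]; p0, p1: (row, col) endpoints of the
    profile segment crossing the crack perpendicular to its local axis."""
    n = int(np.hypot(p1[0] - p0[0], p1[1] - p0[1])) * 4        # oversample 4x along the segment
    rows = np.linspace(p0[0], p1[0], n)
    cols = np.linspace(p0[1], p1[1], n)
    profile = map_coordinates(gray, [rows, cols], order=1)     # brightness profile
    step = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) / n          # pixels per sample
    return np.count_nonzero(profile < dark_threshold) * step * pixel_size_mm
```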
Data processing device test apparatus and method therefor
Wilcox, Richard Jacob; Mulig, Jason D.; Eppes, David; Bruce, Michael R.; Bruce, Victoria J.; Ring, Rosalinda M.; Cole, Jr., Edward I.; Tangyunyong, Paiboon; Hawkins, Charles F.; Louie, Arnold Y.
2003-04-08
A method and apparatus for testing data processing devices are implemented. The test mechanism isolates critical paths by correlating a scanning microscope image with a selected speed path failure. A trigger signal having a preselected value is generated at the start of each pattern vector. The sweep of the scanning microscope is controlled by a computer, which also receives and processes the image signals returned from the microscope. The value of the trigger signal is correlated with a set of pattern lines being driven on the DUT. The trigger is either asserted or negated depending on the detection of a pattern line failure and the particular line that failed. In response to the detection of the particular speed path failure being characterized, and the trigger signal, the control computer overlays a mask on the image of the device under test (DUT). The overlaid image provides a visual correlation of the failure with the structural elements of the DUT at the resolution level of the microscope itself.
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship exists between the measured data and the source, described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired with differential coil type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data affected by interference between the holes. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
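A conceptual sketch of the deconvolution step: if the measured ECT signal is modelled as the convolution of a response function (PSF) with the flaw shape, the flaw image can be estimated with a regularized inverse filter in the Fourier domain. In the paper the PSF is derived from data of a machined hole; here it is simply an input array.

```python
import numpy as np

def deconvolve(measured, psf, reg=1e-2):
    """Wiener-like inverse filtering: measured and psf are 2D arrays of the
    same shape (psf centred); reg suppresses noise amplification."""
    M = np.fft.fft2(measured)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    est = M * np.conj(H) / (np.abs(H) ** 2 + reg)      # regularized inverse filter
    return np.real(np.fft.ifft2(est))
```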
How concept images affect students' interpretations of Newton's method
NASA Astrophysics Data System (ADS)
Engelke Infante, Nicole; Murphy, Kristen; Glenn, Celeste; Sealey, Vicki
2018-07-01
Knowing when students have the prerequisite knowledge to be able to read and understand a mathematical text is a perennial concern for instructors. Using text describing Newton's method and Vinner's notion of concept image, we exemplify how prerequisite knowledge influences understanding. Through clinical interviews with first-semester calculus students, we determined how evoked concept images of tangent lines and roots contributed to students' interpretation and application of Newton's method. Results show that some students' concept images of root and tangent line developed throughout the interview process, and most students were able to adequately interpret the text on Newton's method. However, students with insufficient concept images of tangent line and students who were unwilling or unable to modify their concept images of tangent line after reading the text were not successful in interpreting Newton's method.
Application on-line imagery for photogrammetry comparison of natural hazards events
NASA Astrophysics Data System (ADS)
Voumard, Jérémie; Jaboyedoff, Michel; Derron, Marc-Henri
2015-04-01
Airborne (ALS) and terrestrial laser scanner (TLS) technologies are well known and are currently among the most common techniques for obtaining 3D terrain models. However, those technologies are expensive and logistically demanding. Another way to obtain a DEM without those inconveniences is photogrammetry, in particular the structure-from-motion (SfM) technique, which allows high-quality 3D model extraction from common digital camera images without the need for expensive equipment. While the usual way to get images for SfM 3D modelling is to take pictures on site, on-line imagery offers the possibility to get images from many roads and other places. The most common on-line street view resource is Google Street View. Since April 2014, this service has offered a back-in-time function at a certain number of locations. Google Street View images are composed from many pictures taken with a set of panoramic cameras mounted on a platform such as a car roof. Those images are affected by strong deformations, which are not ideal for photogrammetry. At first sight, using street view images for photogrammetry may therefore bring some processing problems. The aim of this project is to study the possibility of making SfM 3D models from Google Street View images with open source processing software (VisualSFM) and low-cost software (Agisoft). The main interest of this method is to evaluate terrain changes at low cost without site visits. Study areas are landslides (such as those of Séchilienne in France) and cliffs near or far from roads. Human-made terrain changes, such as a stone wall collapse caused by heavy rainfall near Monaco, are also studied. For each case, 50 to 200 pictures have been used. The main conditions for obtaining 3D model results are as follows. The first is to have street view images of the area of interest: some countries, like the USA or France, are well covered; others, like Switzerland, are only partially covered, and some, like Germany, are not covered at all. The second constraint is to have two or more sets of images taken at different times. The third condition is to have images of sufficient quality: over- or underexposed images, bad meteorological conditions (fog, rain, etc.) or poor image resolution compromise the SfM process. In our case studies, distances from the road to the object of interest range from 1 to 500 m. First results show that SfM processing with on-line images is not straightforward. Poor-resolution, deformed images with unknown camera parameters make the process often difficult and unpredictable. The Agisoft software gave poor results because of the above-mentioned image characteristics, while VisualSFM gave interesting results in about two thirds of the cases. It is also demonstrated that 3D photogrammetry is possible with on-line images under certain restrictive conditions. Under these image quality conditions, the technique can then be used to estimate volume changes.
System for line drawings interpretation
NASA Astrophysics Data System (ADS)
Boatto, L.; Consorti, Vincenzo; Del Buono, Monica; Eramo, Vincenzo; Esposito, Alessandra; Melcarne, F.; Meucci, Mario; Mosciatti, M.; Tucci, M.; Morelli, Arturo
1992-08-01
This paper describes an automatic system that extracts information from line drawings, in order to feed CAD or GIS systems. The line drawings that we analyze contain interconnected thin lines, dashed lines, text, and symbols. Characters and symbols may overlap with lines. Our approach is based on the properties of the run representation of a binary image that allow giving the image a graph structure. Using this graph structure, several algorithms have been designed to identify, directly in the raster image, straight segments, dashed lines, text, symbols, hatching lines, etc. Straight segments and dashed lines are converted into vectors, with high accuracy and good noise immunity. Characters and symbols are recognized by means of a recognizer, specifically developed for this application, designed to be insensitive to rotation and scaling. Subsequent processing steps include an 'intelligent' search through the graph in order to detect closed polygons, dashed lines, text strings, and other higher-level logical entities, followed by the identification of relationships (adjacency, inclusion, etc.) between them. Relationships are further translated into a formal description of the drawing. The output of the system can be used as input to a Geographic Information System package. The system is currently used by the Italian Land Register Authority to process cadastral maps.
Methods and apparatuses for detection of radiation with semiconductor image sensors
Cogliati, Joshua Joseph
2018-04-10
A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible-light obstructer is in place to block visible light from impinging on the sensor, generating a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed, so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines between two or more bright points whose intervening pixels are above a second threshold, and to identify the presence of the high-energy particles based on the number of such lines.
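A schematic (not the patent's implementation) of the detection logic in the abstract: find pixels above a first threshold as bright points, then accept a pair of bright points as a track if every pixel on the straight line between them exceeds a second threshold. The threshold values are arbitrary.

```python
import numpy as np

def find_tracks(composite, t_bright=200, t_line=80):
    """composite: 2D array of pixel values. Returns endpoint pairs of detected tracks."""
    points = np.argwhere(composite > t_bright)            # bright points above first threshold
    tracks = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            p, q = points[i], points[j]
            n = int(max(abs(q - p))) + 1
            rows = np.linspace(p[0], q[0], n).round().astype(int)
            cols = np.linspace(p[1], q[1], n).round().astype(int)
            if np.all(composite[rows, cols] > t_line):     # all pixels between exceed second threshold
                tracks.append((tuple(p), tuple(q)))
    return tracks
```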
Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process
NASA Astrophysics Data System (ADS)
Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.
2015-02-01
This paper presents the development of methods for real-time fine-tuning of a high power laser welding process of thick steel by using a compact smart camera system. When performing welding in butt-joint configuration, the laser beam's location needs to be adjusted exactly according to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and Hough transform on an associated FPGA. Additional filtering of Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab by using image data captured with adaptive integration time. The simulations are performed in a hardware oriented way to allow real-time implementation of the algorithms on the smart camera system.
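An illustrative sketch of the seam-tracking chain: detect line candidates with the Hough transform on a segmented frame, then smooth the seam estimate over a temporal window and reject unrealistic frame-to-frame jumps. The window size and jump limit are hypothetical parameters, and the focal-plane/FPGA implementation is not reproduced here.

```python
import cv2
import numpy as np
from collections import deque

class SeamTracker:
    def __init__(self, window=5, max_jump_px=10):
        self.history = deque(maxlen=window)    # recent (rho, theta) seam estimates
        self.max_jump = max_jump_px

    def update(self, binary_frame):
        """binary_frame: 8-bit single-channel segmented frame."""
        lines = cv2.HoughLines(binary_frame, 1, np.pi / 180, threshold=80)
        if lines is None:
            return self._smoothed()
        rho, theta = lines[0][0]                                  # strongest candidate
        if self.history and abs(rho - self.history[-1][0]) > self.max_jump:
            return self._smoothed()                               # reject unrealistic jump
        self.history.append((rho, theta))
        return self._smoothed()

    def _smoothed(self):
        if not self.history:
            return None
        return tuple(np.mean(self.history, axis=0))               # (rho, theta) over the window
```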
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung
2013-05-01
This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
Development of online lines-scan imaging system for chicken inspection and differentiation
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Chan, Diane E.; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.
2006-10-01
An online line-scan imaging system was developed for differentiation of wholesome and systemically diseased chickens. The hyperspectral imaging system used in this research can be directly converted to multispectral operation and would provide the ideal implementation of essential features for data-efficient high-speed multispectral classification algorithms. The imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph for line-scan images. The system scanned the surfaces of chicken carcasses on an eviscerating line at a poultry processing plant in December 2005. A method was created to recognize birds entering and exiting the field of view, and to locate a Region of Interest on the chicken images from which useful spectra were extracted for analysis. From analysis of the difference spectra between wholesome and systemically diseased chickens, four wavelengths of 468 nm, 501 nm, 582 nm and 629 nm were selected as key wavelengths for differentiation. The method of locating the Region of Interest will also have practical application in multispectral operation of the line-scan imaging system for online chicken inspection. This line-scan imaging system makes possible the implementation of multispectral inspection using the key wavelengths determined in this study with minimal software adaptations and without the need for cross-system calibration.
New-style defect inspection system of film
NASA Astrophysics Data System (ADS)
Liang, Yan; Liu, Wenyao; Liu, Ming; Lee, Ronggang
2002-09-01
An inspection system has been developed for on-line detection of film defects, based on a combination of photoelectric imaging and digital image processing. The system runs at high speed, up to 60 m/min. The moving film is illuminated by an LED array that emits uniform infrared light (peak wavelength λp = 940 nm), and infrared images are obtained with a high-quality, high-speed CCD camera. The application software, based on Visual C++ 6.0 under Windows, processes the images in real time by means of algorithms such as median filtering, edge detection and projection. The system is made up of four modules, which are introduced in detail in the paper. On-line experimental results show that the inspection system can recognize defects precisely at high speed and run reliably in practical applications.
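A rough sketch of the processing chain named above for an 8-bit infrared frame: median filtering to remove isolated noise, edge detection, and a projection whose peaks flag defect locations along the film width. The thresholds are illustrative.

```python
import cv2
import numpy as np

def find_defect_columns(ir_frame, proj_threshold=500):
    """ir_frame: 8-bit grayscale infrared frame of the moving film."""
    denoised = cv2.medianBlur(ir_frame, 3)          # remove isolated noise pixels
    edges = cv2.Canny(denoised, 40, 120)            # edge detection
    column_projection = edges.sum(axis=0)           # projection onto the film width
    return np.where(column_projection > proj_threshold)[0]
```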
MOPEX: a software package for astronomical image processing and visualization
NASA Astrophysics Data System (ADS)
Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley
2006-06-01
We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution from the Spitzer Science Center web page.
NASA Astrophysics Data System (ADS)
Kim, Moon S.; Cho, Byoung-Kwan; Yang, Chun-Chieh; Chao, Kaunglin; Lefcourt, Alan M.; Chen, Yud-Ren
2006-10-01
We have developed nondestructive opto-electronic imaging techniques for rapid assessment of the safety and wholesomeness of foods. A recently developed fast hyperspectral line-scan imaging system integrated with a commercial apple-sorting machine was evaluated for rapid detection of animal fecal matter on apples. Apples obtained from a local orchard were artificially contaminated with cow feces. For the online trial, hyperspectral images with 60 spectral channels, covering reflectance in the visible to near-infrared regions and fluorescence emissions with UV-A excitation, were acquired from apples moving at a processing sorting-line speed of three apples per second. Reflectance and fluorescence imaging required a passive light source, and each method used independent continuous wave (CW) light sources. In this paper, the integration of the hyperspectral imaging system with the commercial apple-sorting machine and preliminary results for detection of fecal contamination on apples, based mainly on the fluorescence method, are presented.
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report, December ... Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
Miller, John J.; Agena, W.F.; Lee, M.W.; Zihlman, F.N.; Grow, J.A.; Taylor, D.J.; Killgore, Michele; Oliver, H.L.
2000-01-01
This CD-ROM contains stacked, migrated, 2-dimensional seismic reflection data and associated support information for 22 regional seismic lines (3,470 line-miles) recorded in the National Petroleum Reserve - Alaska (NPRA) from 1974 through 1981. Together, these lines constitute about one-quarter of the seismic data collected as part of the Federal Government's program to evaluate the petroleum potential of the Reserve. The regional lines, which form a grid covering the entire NPRA, were created by combining various individual lines recorded in different years using different recording parameters. These data were reprocessed by the USGS using modern, post-stack processing techniques, to create a data set suitable for interpretation on interactive seismic interpretation computer workstations. Reprocessing was done in support of ongoing petroleum resource studies by the USGS Energy Program. The CD-ROM contains the following files: 1) 22 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 22 lines in standard SEG-P1 format; 3) 22 small-scale graphic images of each seismic line in Adobe Acrobat PDF format; 4) a graphic image of the location map, generated from the navigation file, with hyperlinks to the graphic images of the seismic lines; 5) an ASCII text file with cross-reference information for relating the sequential trace numbers on each regional line to the line number and shotpoint number of the original component lines; and 6) an explanation of the processing used to create the final seismic sections (this document). The SEG-Y format seismic files and SEG-P1 format navigation file contain all the information necessary for loading the data onto a seismic interpretation workstation.
Radial line method for rear-view mirror distortion detection
NASA Astrophysics Data System (ADS)
Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah, .
2015-01-01
An image of an object can be distorted by a defect in a mirror. The rear-view mirror is an important component for vehicle safety, and one of its standard parameters is the distortion factor. This paper presents a radial line method for distortion detection in rear-view mirrors. The rear-view mirror was tested for distortion using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured webcam image was pre-processed using smoothing and sharpening techniques, and a radial line method was then used to determine the distortion factor. It was successfully demonstrated that the radial line method can be used to determine the distortion factor. This detection system is useful for implementation, for example, in Indonesia's automotive component industry, where manual inspection is still used.
Intershot Analysis of Flows in DIII-D
NASA Astrophysics Data System (ADS)
Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.
2016-10-01
Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resulting line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determine the relative weights of the component matrices used in the final flow inversion matrix. Serial processing has been used for the 800x600 pixel image of the lower divertor viewing flow camera. The full cross-section viewing camera will require parallel processing of its 2160x2560 pixel image. We will discuss using a POSIX thread pool and a Tesla K40c GPU in the processing of these data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The degree of wear of the wheel set tread is one of the main factors influencing the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. Line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition handled in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares and then extracts the squares belonging to the target by a fusion of the k-means and STING clustering image segmentation algorithms. The segmentation time is less than 0.97 ms, a considerable speed-up compared with serial CPU computation, which greatly improves the real-time image processing capacity. When a wheel set is running at a limited speed, the system, placed alongside the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
NASA Technical Reports Server (NTRS)
Poros, D. J.; Peterson, C. J.
1985-01-01
Methods for destriping TM images and results of the application of these methods to selected TM scenes with sensor and scan striping, which was not removed by the radiometric correction during the TM Archive Generation Phase in TIPS, are presented. These methods correct only for gain and offset differences between detectors over many image lines and do not consider within-line effects. The feasibility of implementing a destriping process online in TIPS is also described.
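A minimal sketch of gain/offset destriping, assuming a sensor whose N detectors contribute every N-th image line (TM uses 16 detectors per reflective band): each detector's lines are rescaled so their mean and standard deviation match the scene-wide statistics. This is a simplified stand-in for the corrections described above.

```python
import numpy as np

def destripe(image, n_detectors=16):
    """image: 2D array of digital numbers. Returns a destriped floating-point image."""
    out = image.astype(float).copy()
    scene_mean, scene_std = out.mean(), out.std()
    for d in range(n_detectors):
        rows = out[d::n_detectors, :]                    # lines seen by detector d
        gain = scene_std / rows.std()                    # match standard deviation
        out[d::n_detectors, :] = (rows - rows.mean()) * gain + scene_mean
    return out
```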
Image of a line is not shrunk but neglected. Absence of crossover in unilateral spatial neglect.
Ishiai, Sumio; Koyama, Yasumasa; Nakano, Naomi; Seki, Keiko; Nishida, Yoichiro; Hayashi, Kazuko
2004-01-01
Patients with left unilateral spatial neglect following right hemisphere lesions usually err rightward when bisecting a horizontal line. For very short lines (e.g. 25 mm), however, leftward errors or seemingly 'right' neglect is often observed. To explain this paradox of crossover in the direction of errors, rather complicated models of the distribution of attention have been introduced. Neglect may be hypothesized to occur in the representational process of a line, in the estimation of the midpoint on the formed image, or both. We devised a line image task using a computer display with a touch panel and examined the representational image of a line to be bisected. Three patients with typical left neglect were presented with a line and forced to see its whole extent, with cueing to the left endpoint. After disappearance of the line, they pointed to the right endpoint, the left endpoint, or the subjective midpoint according to their representational image. The line image between the reproduced right and left endpoints was appropriately formed for the 200 mm lines. However, the images for the shorter 25 and 100 mm lines were longer than the physical lengths, with overextension to the left side. These results demonstrate a context effect whereby short lines may be perceived as longer when they are presented in combination with longer lines. One of our patients had an extensive lesion that involved the frontal, temporal, and parietal lobes, and the other two had a lesion restricted to the posterior right hemisphere. The image of a fully perceived line may be represented far enough into left space even when left neglect occurs after a lesion that involves the right parietal lobe. The patients with neglect placed the subjective midpoint rightward of the centre of the stimulus line for the 100 and 200 mm lines and leftward for the 25 mm lines. This crossover of bisection errors disappeared when the displacement of the subjective midpoint was measured from the centre of the representational line image. Left neglect may occur consistently in the estimation of the subjective midpoint on the representational image, which may be explained by a simple rightward bias of attentional distribution.
Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.
Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G
2017-01-01
We present an image postprocessing framework for the scanning tunneling microscope (STM) that reduces the strong spurious oscillations and scan-line noise occurring at fast scan rates while preserving image features, allowing an order-of-magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large-scale images and four steps for atomic-scale images. For large-scale images, we first apply, for each line, an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model, which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). Numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained at a slow scan rate, but are free of the scan drift error commonly seen in slow-scan data. For atomic-scale images, an additional first step that removes strong line-by-line background fluctuations and a fourth step that replaces the postprocessed image by its ranking map as the final atomic-resolution image are required. The resulting image restores the lattice structure that is nearly undetectable in the original fast-scan data.
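A sketch of the additional first step for atomic-scale images: remove strong line-by-line background fluctuations by subtracting a low-order polynomial fit from every scan line. The polynomial order is an illustrative choice.

```python
import numpy as np

def remove_line_background(image, order=1):
    """image: 2D array of scan lines (rows). Returns the background-corrected image."""
    x = np.arange(image.shape[1])
    corrected = np.empty(image.shape, dtype=float)
    for i, line in enumerate(image):
        coeffs = np.polyfit(x, line, order)          # per-line background model
        corrected[i] = line - np.polyval(coeffs, x)  # subtract the fitted background
    return corrected
```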
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object was defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
Optoelectronic imaging of speckle using image processing method
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Wang, Pengfei
2018-01-01
A detailed image-processing procedure for laser speckle interferometry is proposed as an example for a postgraduate course. Several image processing methods are used together to deal with the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is also based on a heat-equation PDE; the center line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by a spline interpolation method; and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure bubbles in rubber under negative pressure, which could be used in tire inspection.
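A small sketch of the centre-line extraction step named above: binarize the fringe pattern and extract its skeleton with scikit-image; branch pruning and the spline-based phase interpolation are omitted.

```python
import numpy as np
from skimage.morphology import skeletonize

def fringe_centerlines(gray, threshold=0.5):
    """gray: 2D float image in [0, 1]. Returns (row, col) coordinates of centre-line pixels."""
    binary = gray > threshold                 # thresholding segmentation (simplified)
    skeleton = skeletonize(binary)            # one-pixel-wide centre lines
    return np.argwhere(skeleton)
```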
In-line monitoring of pellet coating thickness growth by means of visual imaging.
Oman Kadunc, Nika; Sibanc, Rok; Dreu, Rok; Likar, Boštjan; Tomaževič, Dejan
2014-08-15
Coating thickness is the most important attribute of coated pharmaceutical pellets as it directly affects release profiles and stability of the drug. Quality control of the coating process of pharmaceutical pellets is thus of utmost importance for assuring the desired end product characteristics. A visual imaging technique is presented and examined as a process analytic technology (PAT) tool for noninvasive continuous in-line and real time monitoring of coating thickness of pharmaceutical pellets during the coating process. Images of pellets were acquired during the coating process through an observation window of a Wurster coating apparatus. Image analysis methods were developed for fast and accurate determination of pellets' coating thickness during a coating process. The accuracy of the results for pellet coating thickness growth obtained in real time was evaluated through comparison with an off-line reference method and a good agreement was found. Information about the inter-pellet coating uniformity was gained from further statistical analysis of the measured pellet size distributions. Accuracy and performance analysis of the proposed method showed that visual imaging is feasible as a PAT tool for in-line and real time monitoring of the coating process of pharmaceutical pellets. Copyright © 2014 Elsevier B.V. All rights reserved.
Engraving Print Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoelck, Daniel; Barbe, Joaquim
2008-04-15
A print is a mark, or drawing, made in or upon a plate, stone, woodblock or other material which is covered with ink and then pressed, usually onto paper, reproducing the image on the paper. Engraving prints are usually images composed of groups of binary lines, especially those made with relief and intaglio techniques. By varying the number and orientation of the lines, the drawing of the engraving print is formed. For this reason we propose an application based on image processing methods to classify engraving prints.
Detection of Heating Processes in Coronal Loops by Soft X-ray Spectroscopy
NASA Astrophysics Data System (ADS)
Kawate, Tomoko; Narukage, Noriyuki; Ishikawa, Shin-nosuke; Imada, Shinsuke
2017-08-01
Imaging and spectroscopic observations in the soft X-ray band will open a new window on the heating, acceleration and transport processes in the solar corona. The soft X-ray spectrum between 0.5 and 10 keV consists of the electron thermal free-free continuum and hot coronal lines such as O VIII, Fe XVII, Mg XI and Si XVII. The intensity of the free-free continuum emission is not affected by the population of ions, whereas line intensities, especially from highly ionized species, are sensitive to the timescale of ionization/recombination processes. Thus, spectroscopic observations of both continuum and line intensities provide a capability for diagnosing heating and cooling timescales. We perform a 1D hydrodynamic simulation coupled with time-dependent ionization, and calculate continuum and line intensities under different heat input conditions in a coronal loop. We also examine the differential emission measure of the coronal loop from the time-integrated soft X-ray spectra. As a result, the line intensity shows a departure from ionization equilibrium and responds differently depending on the frequency of the heat input. A solar soft X-ray spectroscopic imager will be flown on the sounding rocket experiment Focusing Optics X-ray Solar Imager (FOXSI). This observation will deepen our understanding of heating processes and help solve the "coronal heating problem".
Motion-Blurred Particle Image Restoration for On-Line Wear Monitoring
Peng, Yeping; Wu, Tonghai; Wang, Shuo; Kwok, Ngaiming; Peng, Zhongxiao
2015-01-01
On-line images of wear debris contain important information for real-time condition monitoring, and a dynamic imaging technique can eliminate particle overlaps commonly found in static images, for instance, acquired using ferrography. However, dynamic wear debris images captured in a running machine are unavoidably blurred because the particles in lubricant are in motion. Hence, it is difficult to acquire reliable images of wear debris with an adequate resolution for particle feature extraction. In order to obtain sharp wear particle images, an image processing approach is proposed. Blurred particles were firstly separated from the static background by utilizing a background subtraction method. Second, the point spread function was estimated using power cepstrum to determine the blur direction and length. Then, the Wiener filter algorithm was adopted to perform image restoration to improve the image quality. Finally, experiments were conducted with a large number of dynamic particle images to validate the effectiveness of the proposed method and the performance of the approach was also evaluated. This study provides a new practical approach to acquire clear images for on-line wear monitoring. PMID:25856328
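For the restoration step described above, a minimal sketch of frequency-domain Wiener deblurring with a linear motion-blur PSF is given below; the blur length and angle would come from the power-cepstrum estimate mentioned in the abstract, and the noise-to-signal constant `k`, the PSF construction, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def motion_psf(length, angle_deg, shape):
    """Linear motion-blur PSF of a given length and direction,
    embedded in an array matching the image shape."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, int(length) * 4):
        y = int(round(cy + t * np.sin(theta)))
        x = int(round(cx + t * np.cos(theta)))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener filter in the frequency domain: F_hat = H* G / (|H|^2 + k)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))
```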
MIA - A free and open source software for gray scale medical image analysis
2013-01-01
Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence, the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers that have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on work station class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++ that gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge in software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation by using shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface, they are usually quite task specific, and they don't provide a clear approach for shaping a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype by using the according shell scripting language. Since the hard disk becomes the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed. PMID:24119305
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas
2013-10-11
Gray scale images make up the bulk of data in bio-medical image analysis, and hence, the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers that have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on work station class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++ that gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge in software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation by using shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface, they are usually quite task specific, and they don't provide a clear approach for shaping a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype by using the according shell scripting language. Since the hard disk becomes the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed.
Development of an optical inspection platform for surface defect detection in touch panel glass
NASA Astrophysics Data System (ADS)
Chang, Ming; Chen, Bo-Cheng; Gabayno, Jacque Lynn; Chen, Ming-Fu
2016-04-01
An optical inspection platform combining parallel image processing with a high resolution opto-mechanical module was developed for defect inspection of touch panel glass. Dark field images were acquired using a 12288-pixel line CCD camera with 3.5 µm per pixel resolution and 12 kHz line rate. Key features of the glass surface were analyzed by parallel image processing on combined CPU and GPU platforms. Defect inspection of touch panel glass, which produced 386 megapixels of image data per sample, was completed in roughly 5 seconds. A high detection rate of surface scratches on the touch panel glass was realized, with a minimum detectable defect size of about 10 µm. The implementation of a custom illumination source significantly improved the scattering efficiency on the surface, therefore enhancing the contrast in the acquired images and the overall performance of the inspection system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham
2015-12-01
Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as develop data processing methodologies optimized for the plenoptic measurement.
PAT-tools for process control in pharmaceutical film coating applications.
Knop, Klaus; Kleinebudde, Peter
2013-12-05
Recent developments in analytical techniques to monitor the coating process of pharmaceutical solid dosage forms such as pellets and tablets are described. The progress from off- or at-line measurements to on- or in-line applications is shown for the spectroscopic methods near infrared (NIR) and Raman spectroscopy as well as for terahertz pulsed imaging (TPI) and image analysis. The common goal of all these methods is to control or at least to monitor the coating process and/or to estimate the coating end point through timely measurements. Copyright © 2013 Elsevier B.V. All rights reserved.
Self-localization for an autonomous mobile robot based on an omni-directional vision system
NASA Astrophysics Data System (ADS)
Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin
2013-12-01
In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color model algorithms cause errors. To reduce environmental effects and achieve the self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the position of the robot. Therefore, image transformation was required to implement self-localization. Second, we used an approach to transform the omni-directional images into panoramic images. Hence, the distortion of the white lines can be fixed through the transformation. The interest points that form the corners of the landmarks were then located using the features from accelerated segment test (FAST) algorithm. In this algorithm, a circle of sixteen pixels surrounding the corner candidate is considered; FAST is a high-speed feature detector suited to real-time frame rate applications. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error in a soccer field measuring 600 cm x 400 cm.
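As a rough illustration of the unwrapping and corner-detection steps, the sketch below remaps an omnidirectional frame to a panoramic strip with a polar-to-Cartesian lookup table and runs OpenCV's FAST detector on the result; the mirror center, radius range, input file name, and threshold are hypothetical values, and the paper's own calibration and dual-circle/trilateration/cross-ratio localization steps are not reproduced.

```python
import cv2
import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=720):
    """Remap an omnidirectional image to a panoramic strip: columns are
    azimuth angles, rows are radii measured from the mirror center."""
    angles = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.arange(r_min, r_max)
    map_x = (center[0] + np.outer(radii, np.cos(angles))).astype(np.float32)
    map_y = (center[1] + np.outer(radii, np.sin(angles))).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Hypothetical usage: detect corner candidates on the unwrapped field lines.
omni = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed input file
pano = unwrap_omni(omni, center=(320, 240), r_min=40, r_max=200)
fast = cv2.FastFeatureDetector_create(threshold=30)          # assumed threshold
keypoints = fast.detect(pano, None)
```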
Detection and display of acoustic window for guiding and training cardiac ultrasound users
NASA Astrophysics Data System (ADS)
Huang, Sheng-Wen; Radulescu, Emil; Wang, Shougang; Thiele, Karl; Prater, David; Maxwell, Douglas; Rafter, Patrick; Dupuy, Clement; Drysdale, Jeremy; Erkamp, Ramon
2014-03-01
Successful ultrasound data collection strongly relies on the skills of the operator. Among different scans, echocardiography is especially challenging as the heart is surrounded by ribs and lung tissue. Less experienced users might acquire compromised images because of suboptimal hand-eye coordination and less awareness of artifacts. Clearly, there is a need for a tool that can guide and train less experienced users to position the probe optimally. We propose to help users with hand-eye coordination by displaying lines overlaid on B-mode images. The lines indicate the edges of blockages (e.g., ribs) and are updated in real time according to movement of the probe relative to the blockages. They provide information about how probe positioning can be improved. To distinguish between blockage and acoustic window, we use coherence, an indicator of channel data similarity after applying focusing delays. Specialized beamforming was developed to estimate coherence. Image processing is applied to coherence maps to detect unblocked beams and the angle of the lines for display. We built a demonstrator based on a Philips iE33 scanner, from which beamsummed RF data and video output are transferred to a workstation for processing. The detected lines are overlaid on B-mode images and fed back to the scanner display to provide users real-time guidance. Using such information in addition to B-mode images, users will be able to quickly find a suitable acoustic window for optimal image quality, and improve their skill.
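The abstract does not disclose the exact specialized beamforming used, but the textbook coherence factor conveys the idea of measuring channel-data similarity after focusing delays; the sketch below is a minimal NumPy version under that assumption, with low values flagging beams blocked by ribs or lung.

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor per depth sample for one beam.

    channel_data: array of shape (n_channels, n_samples) holding the
    delayed (focused) per-channel RF data.
    Returns values in [0, 1]; low coherence suggests blockage (e.g. a rib).
    """
    coherent = np.abs(channel_data.sum(axis=0)) ** 2          # |sum of channels|^2
    incoherent = (np.abs(channel_data) ** 2).sum(axis=0)      # sum of |channels|^2
    n = channel_data.shape[0]
    return coherent / (n * incoherent + 1e-12)                # avoid divide-by-zero
```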
Processing Infrared Images For Fire Management Applications
NASA Astrophysics Data System (ADS)
Warren, John R.; Pratt, William K.
1981-12-01
The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.
NASA Astrophysics Data System (ADS)
Manoharan, Kodeeswari; Daniel, Philemon
2017-11-01
This paper presents a robust lane detection technique for roads on hilly terrain. The goal of this paper is to use image processing strategies to recognize lane lines on structured mountain roads with the help of an improved Hough transform. A vision-based approach is used as it performs well in a wide assortment of circumstances by abstracting valuable information compared with other sensors. The proposed strategy processes the live video stream, which is a sequence of images, and extracts the position of lane markings after passing the frames through different filters and proper thresholding. The algorithm is tuned for Indian mountainous curved and paved roads. A computation technique is used to discard disturbing lines other than the credible lane lines and display only the required dominant lane lines. This technique automatically finds the two lane lines nearest to the vehicle in an image as early as possible. Various video sequences on hilly terrain were tested to verify the effectiveness of our method, and it has shown good performance with a detection accuracy of 91.89%.
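A minimal sketch of the edge-plus-Hough stage is shown below using OpenCV's probabilistic Hough transform; the Canny thresholds, region-of-interest mask, and slope test used to discard non-lane segments are illustrative assumptions, not the paper's tuned parameters for mountain roads.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Edge detection + probabilistic Hough transform on one road frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    # Keep only the lower half of the frame, where lane markings appear.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    # Discard nearly horizontal segments that cannot be lane lines.
    keep = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if abs(y2 - y1) > 0.3 * abs(x2 - x1):
                keep.append((x1, y1, x2, y2))
    return keep
```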
High-performance camera module for fast quality inspection in industrial printing applications
NASA Astrophysics Data System (ADS)
Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
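Of the on-camera operations listed above, flat field correction is the most self-contained; a floating-point sketch of the standard dark/flat correction is given below. The camera performs an FPGA fixed-point equivalent, and the rescaling to the mean flat level is an assumed convention rather than the camera's documented behaviour.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Per-pixel flat-field correction for a line-scan sensor: remove the
    fixed-pattern offset (dark frame) and gain non-uniformity (flat frame)."""
    gain = flat.astype(np.float64) - dark
    corrected = (raw.astype(np.float64) - dark) / np.maximum(gain, 1e-6)
    return corrected * gain.mean()   # rescale back to the original intensity range
```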
Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates
Zhang, Hao; Li, Xianqi; Park, Jewook; Li, An-Ping
2017-01-01
We present an image postprocessing framework for Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates and preserve the features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply for each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a “rubber band” model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurement from copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove line-by-line strong background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data. PMID:29362664
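The per-line registration of forward and backward scans is not specified in the abstract beyond being an image registration method; as a stand-in, the sketch below estimates a per-line lateral shift by FFT cross-correlation and rolls the backward scan into alignment. It does not implement the rubber-band model or the CIAFA step.

```python
import numpy as np

def align_scan_lines(forward, backward):
    """Estimate, line by line, the lateral shift between forward and backward
    STM scans via FFT cross-correlation, then shift the backward line."""
    aligned = np.empty_like(backward)
    n = forward.shape[1]
    for i in range(forward.shape[0]):
        f = forward[i] - forward[i].mean()
        b = backward[i] - backward[i].mean()
        xcorr = np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(b))).real
        shift = int(np.argmax(xcorr))
        if shift > n // 2:
            shift -= n                  # wrap to a signed shift
        aligned[i] = np.roll(backward[i], shift)
    return aligned
```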
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-02-15
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.
Digital image processing techniques for the analysis of fuel sprays global pattern
NASA Astrophysics Data System (ADS)
Zakaria, Rami; Bryanston-Cross, Peter; Timmerman, Brenda
2017-12-01
We studied the fuel atomization process of two fuel injectors to be fitted in a new small rotary engine design. The aim was to improve the efficiency of the engine by optimizing the fuel injection system. Fuel sprays were visualised by an optical diagnostic system. Images of fuel sprays were produced under various testing conditions, by changing the line pressure, nozzle size, injection frequency, etc. The atomisers were a high-frequency microfluidic dispensing system and a standard low flow-rate fuel injector. A series of image processing procedures were developed in order to acquire information from the laser-scattering images. This paper presents the macroscopic characterisation of Jet fuel (JP8) sprays. We observed the droplet density distribution, tip velocity, and spray-cone angle against line-pressure and nozzle-size. The analysis was performed for low line-pressure (up to 10 bar) and short injection period (1-2 ms). Local velocity components were measured by applying particle image velocimetry (PIV) on double-exposure images. The discharge velocity was lower in the micro dispensing nozzle sprays and the tip penetration slowed down at higher rates compared to the gasoline injector. The PIV test confirmed that the gasoline injector produced sprays with higher velocity elements at the centre and the tip regions.
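For the PIV step applied to the double-exposure spray images, a minimal single-window displacement estimate via FFT cross-correlation is sketched below; window extraction, sub-pixel peak fitting, and the sign convention tying the peak offset to the physical velocity direction are left out and would need validation against a known displacement.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Displacement of one interrogation window between two exposures,
    taken from the peak of the FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] - corr.shape[0] // 2
    dx = peak[1] - corr.shape[1] // 2
    return dx, dy   # pixels per inter-exposure interval (check sign convention)
```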
On-line measurement of diameter of hot-rolled steel tube
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Zhao, Huiying; Tian, Ailing; Li, Bin
2015-02-01
An on-line diameter measurement system was designed for a hot-rolled seamless steel tube production line. On one hand, it can stimulate the development of domestic pipe measuring techniques; on the other hand, it can help domestic hot-rolled seamless steel tube enterprises gain strong product competitiveness at low cost. Based on an analysis and comparison of various detection methods and techniques, this paper chooses a CCD camera-based on-line caliper system design. The system mainly includes a hardware measurement portion and an image processing section, combining software control technology and image processing technology to complete on-line measurement of the hot tube diameter. Taking into account the complexity of the actual job site, a relatively simple and reasonable layout was chosen. The image processing section mainly addresses camera calibration and its implementation in Matlab, so that the diameter can be displayed directly through the algorithm applied to the image. A simulation platform was built in the last phase of the design, images were successfully collected and processed, and the feasibility and rationality of the design were demonstrated with an error of less than 2%. The design successfully uses photoelectric detection technology to solve practical problems.
Mobile-based text recognition from water quality devices
NASA Astrophysics Data System (ADS)
Dhakal, Shanti; Rahnemoonfar, Maryam
2015-03-01
Measuring water quality of bays, estuaries, and gulfs is a complicated and time-consuming process. YSI Sonde is an instrument used to measure water quality parameters such as pH, temperature, salinity, and dissolved oxygen. This instrument is taken to water bodies in a boat trip and researchers note down different parameters displayed by the instrument's display monitor. In this project, a mobile application is developed for Android platform that allows a user to take a picture of the YSI Sonde monitor, extract text from the image and store it in a file on the phone. The image captured by the application is first processed to remove perspective distortion. Probabilistic Hough line transform is used to identify lines in the image and the corner of the image is then obtained by determining the intersection of the detected horizontal and vertical lines. The image is warped using the perspective transformation matrix, obtained from the corner points of the source image and the destination image, hence, removing the perspective distortion. Mathematical morphology operation, black-hat is used to correct the shading of the image. The image is binarized using Otsu's binarization technique and is then passed to the Optical Character Recognition (OCR) software for character recognition. The extracted information is stored in a file on the phone and can be retrieved later for analysis. The algorithm was tested on 60 different images of YSI Sonde with different perspective features and shading. Experimental results, in comparison to ground-truth results, demonstrate the effectiveness of the proposed method.
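A compact OpenCV sketch of the rectification and binarization chain described above is given below; the output size, structuring-element size, and corner ordering are assumptions, and the corner points themselves are taken as already found by the Hough-line intersection step.

```python
import cv2
import numpy as np

def rectify_and_binarize(image, corners):
    """Warp the display region to a fronto-parallel view, correct shading
    with a black-hat transform, and binarize with Otsu's threshold.
    `corners` are the four detected display corners (tl, tr, br, bl)."""
    w, h = 640, 240                       # assumed output size of the monitor area
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(image, M, (w, h))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, binary = cv2.threshold(blackhat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary   # ready to pass to an OCR engine such as Tesseract
```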
An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth
NASA Astrophysics Data System (ADS)
Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.
2012-12-01
This paper presents a high data throughput acquisition system for pixel detector readout such as CMOS imagers. This CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide an on-line and continuous high frame rate imaging capability. On-line processing can be implemented either on the Data Acquisition Board or on the multi-cores workstation depending on the complexity of the algorithms. The different parts composing the acquisition board have been designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented and the performances achieved by the produced boards are described. The future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are tackled.
Electronic photography at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack M.
1994-01-01
The field of photography began a metamorphosis several years ago which promises to fundamentally change how images are captured, transmitted, and output. At this time the metamorphosis is still in the early stages, but already new processes, hardware, and software are allowing many individuals and organizations to explore the entry of imaging into the information revolution. Exploration at this time is prerequisite to leading expertise in the future, and a number of branches at LaRC have ventured into electronic and digital imaging. Their progress until recently has been limited by two factors: the lack of an integrated approach and the lack of an electronic photographic capability. The purpose of the research conducted was to address these two items. In some respects, the lack of electronic photographs has prevented application of an integrated imaging approach. Since everything could not be electronic, the tendency was to work with hard copy. Over the summer, the Photographics Section has set up an Electronic Photography Laboratory. This laboratory now has the capability to scan film images, process the images, and output the images in a variety of forms. Future plans also include electronic capture capability. The current forms of image processing available include sharpening, noise reduction, dust removal, tone correction, color balancing, image editing, cropping, electronic separations, and halftoning. Output choices include customer specified electronic file formats which can be output on magnetic or optical disks or over the network, 4400 line photographic quality prints and transparencies to 8.5 by 11 inches, and 8000 line film negatives and transparencies to 4 by 5 inches. The problem of integrated imaging involves a number of branches at LaRC including Visual Imaging, Research Printing and Publishing, Data Visualization and Animation, Advanced Computing, and various research groups. These units must work together to develop common approaches to image processing and archiving. The ultimate goal is to be able to search for images using an on-line database and image catalog. These images could then be retrieved over the network as needed, along with information on the acquisition and processing prior to storage. For this goal to be realized, a number of standard processing protocols must be developed to allow the classification of images into categories. Standard series of processing algorithms can then be applied to each category (although many of these may be adaptive between images). Since the archived image files would be standardized, it should also be possible to develop standard output processing protocols for a number of output devices. If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging. As such, it could serve as a model for other organizations in government and the private sector.
Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines
Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim
2008-01-01
This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements the line thinning and the simple neighborhood methods to perform vectorization. The model allows users to define specified criteria which are crucial for the vectorization process. In this model, various raster images can be vectorized such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was implemented as a computer program and tested on a basic application. Results, verified by using two well known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately. PMID:27879843
Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines.
Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim
2008-04-15
This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm of the model implements the line thinning and the simple neighborhood methods to perform vectorization. The model allows users to define specified criteria which are crucial for the vectorization process. In this model, various raster images can be vectorized such as township plans, maps, architectural drawings, and machine plans. The algorithm of the model was implemented as a computer program and tested on a basic application. Results, verified by using two well known vectorization programs (WinTopo and Scan2CAD), indicated that the model can successfully vectorize the specified raster data quickly and accurately.
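The line-thinning step that both MUSCLE records describe can be approximated with scikit-image's skeletonization, as in the sketch below; the neighbour-count node detection is a generic add-on for locating junctions and end points before vector tracing, not the model's own simple-neighborhood rule.

```python
import numpy as np
from skimage.morphology import skeletonize

def thin_raster(binary_image):
    """Reduce line work in a binary raster to one-pixel-wide skeletons,
    the starting point for neighbourhood-based vector tracing."""
    skeleton = skeletonize(binary_image > 0)
    # Junction/end-point candidates: skeleton pixels with != 2 neighbours.
    padded = np.pad(skeleton.astype(np.uint8), 1)
    neighbours = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))[1:-1, 1:-1]
    nodes = skeleton & (neighbours != 2)
    return skeleton, nodes
```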
Numerical image manipulation and display in solar astronomy
NASA Technical Reports Server (NTRS)
Levine, R. H.; Flagg, J. C.
1977-01-01
The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated by some image sequences from solar observations.
GOES-R Advanced Baseline Imager Installation
2016-08-30
Team members prepare the Advanced Baseline Imager, the primary optical instrument, for installation on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Advanced Baseline Imager Installation
2016-08-30
Team members install the Advanced Baseline Imager, the primary optical instrument, on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Advanced Baseline Imager Installation
2016-08-30
The Advanced Baseline Imager, the primary optical instrument, has been installed on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
Segmenting overlapping nano-objects in atomic force microscopy image
NASA Astrophysics Data System (ADS)
Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko
2018-01-01
Recently, techniques for nanoparticles have rapidly been developed for various fields, such as materials science, medicine, and biology. In particular, methods of image processing have widely been used to automatically analyze nanoparticles. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, candidate split lines are obtained by connecting the high curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with the method of density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
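To illustrate the DBSCAN step in the pipeline above, the sketch below groups high-curvature (concave) contour points so that points belonging to the same overlap region fall in one cluster; the `eps` and `min_samples` values are assumed, and the machine-learning classification of split lines and the constrained-minimum selection are not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_concave_points(concave_points, eps=15.0, min_samples=2):
    """Cluster high-curvature contour points (pixel coordinates) so that
    split-line candidates can be drawn between points of different clusters."""
    pts = np.asarray(concave_points, dtype=float)        # shape (N, 2)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    clusters = {}
    for label, p in zip(labels, pts):
        if label != -1:                                   # -1 marks noise points
            clusters.setdefault(int(label), []).append(tuple(p))
    return clusters
```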
Acousto-optic laser projection systems for displaying TV information
NASA Astrophysics Data System (ADS)
Gulyaev, Yu V.; Kazaryan, M. A.; Mokrushin, Yu M.; Shakin, O. V.
2015-04-01
This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.
Radar image processing for rock-type discrimination
NASA Technical Reports Server (NTRS)
Blom, R. G.; Daily, M.
1982-01-01
Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques include mean and variance correction on a range or azimuth line-by-line basis to provide uniformly illuminated swaths, median value filtering of four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of preprocessing methods to Seasat and Landsat data, and Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to distort the radar picture to fit the Landsat image of a 90 x 90 km grid, using Landsat color ratios with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types in known areas. The Seasat additions to the Landsat data improved rock identification by 7%.
Hybrid adaptive radiotherapy with on-line MRI in cervix cancer IMRT.
Oh, Seungjong; Stewart, James; Moseley, Joanne; Kelly, Valerie; Lim, Karen; Xie, Jason; Fyles, Anthony; Brock, Kristy K; Lundin, Anna; Rehbinder, Henrik; Milosevic, Michael; Jaffray, David; Cho, Young-Bin
2014-02-01
Substantial organ motion and tumor shrinkage occur during radiotherapy for cervix cancer. IMRT planning studies have shown that the quality of radiation delivery is influenced by these anatomical changes, therefore the adaptation of treatment plans may be warranted. Image guidance with off-line replanning, i.e. hybrid-adaptation, is recognized as one of the most practical adaptation strategies. In this study, we investigated the effects of soft tissue image guidance using on-line MR while varying the frequency of off-line replanning on the adaptation of cervix IMRT. 33 cervical cancer patients underwent planning and weekly pelvic MRI scans during radiotherapy. 5 patients of 33 were identified in a previous retrospective adaptive planning study, in which the coverage of gross tumor volume/clinical target volume (GTV/CTV) was not acceptable given single off-line IMRT replan using a 3mm PTV margin with bone matching. These 5 patients and a randomly selected 10 patients from the remaining 28 patients, a total of 15 patients of 33, were considered in this study. Two matching methods for image guidance (bone to bone and soft tissue to dose matrix) and three frequencies of off-line replanning (none, single, and weekly) were simulated and compared with respect to target coverage (cervix, GTV, lower uterus, parametrium, upper vagina, tumor related CTV and elective lymph node CTV) and OAR sparing (bladder, bowel, rectum, and sigmoid). Cost (total process time) and benefit (target coverage) were analyzed for comparison. Hybrid adaptation (image guidance with off-line replanning) significantly enhanced target coverage for both 5 difficult and 10 standard cases. Concerning image guidance, bone matching was short of delivering enough doses for 5 difficult cases even with a weekly off-line replan. Soft tissue image guidance proved successful for all cases except one when single or more frequent replans were utilized in the difficult cases. Cost and benefit analysis preferred (soft tissue) image guidance over (frequent) off-line replanning. On-line MRI based image guidance (with combination of dose distribution) is a crucial element for a successful hybrid adaptive radiotherapy. Frequent off-line replanning adjuvantly enhances adaptation quality. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonable high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximative, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further experimented on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
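The paper's derivation for the shot-noise model is not reproduced here, but for orientation the standard empirical (scaled) variogram of grey values along a sampling line in direction $\theta$, which is the observable quantity the method compares across directions, can be written as

$$\hat{\gamma}_{\theta}(h) \;=\; \frac{1}{2\,|N_{\theta}(h)|} \sum_{x \in N_{\theta}(h)} \bigl( Z(x + h\,e_{\theta}) - Z(x) \bigr)^{2},$$

where $Z$ is the grey-value field, $e_{\theta}$ the unit vector of the sampling direction, $h$ the lag along the line, and $N_{\theta}(h)$ the set of pixel positions with both $x$ and $x + h\,e_{\theta}$ on the sampled lines; the orientation analysis then relates $\hat{\gamma}_{\theta}$, suitably scaled, to the directional point intensity of the fibre process.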
Single-shot three-dimensional reconstruction based on structured light line pattern
NASA Astrophysics Data System (ADS)
Wang, ZhenZhou; Yang, YongMing
2018-07-01
Reconstruction of the object by single-shot is of great importance in many applications, in which the object is moving or its shape is non-rigid and changes irregularly. In this paper, we propose a single-shot structured light 3D imaging technique that calculates the phase map from the distorted line pattern. This technique makes use of the image processing techniques to segment and cluster the projected structured light line pattern from one single captured image. The coordinates of the clustered lines are extracted to form a low-resolution phase matrix which is then transformed to full-resolution phase map by spline interpolation. The 3D shape of the object is computed from the full-resolution phase map and the 2D camera coordinates. Experimental results show that the proposed method was able to reconstruct the three-dimensional shape of the object robustly from one single image.
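For the interpolation step from the clustered line coordinates to a full-resolution phase map, a minimal SciPy sketch is given below; it assumes the low-resolution phase values already lie on a regular grid of at least 4 x 4 samples (as a cubic spline requires), which is a simplification of the paper's clustering output.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def upsample_phase(coarse_phase, out_shape):
    """Interpolate a low-resolution phase matrix (one value per detected
    line sample) to a full-resolution phase map with a bivariate spline."""
    rows = np.linspace(0, out_shape[0] - 1, coarse_phase.shape[0])
    cols = np.linspace(0, out_shape[1] - 1, coarse_phase.shape[1])
    spline = RectBivariateSpline(rows, cols, coarse_phase, kx=3, ky=3)
    return spline(np.arange(out_shape[0]), np.arange(out_shape[1]))
```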
GOES-R Advanced Baseline Imager Installation
2016-08-30
Team members assist as a crane lifts the Advanced Baseline Imager, the primary optical instrument, for installation on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
GOES-R Advanced Baseline Imager Installation
2016-08-30
Team members assist as a crane moves the Advanced Baseline Imager, the primary optical instrument, for installation on the Geostationary Operational Environmental Satellite (GOES-R) inside the Astrotech payload processing facility in Titusville, Florida near NASA’s Kennedy Space Center. GOES-R will be the first satellite in a series of next-generation NOAA GOES Satellites. The spacecraft is to launch aboard a United Launch Alliance Atlas V rocket in November.
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
NASA Astrophysics Data System (ADS)
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
Coal Layer Identification using Electrical Resistivity Imaging Method in Sinjai Area South Sulawesi
NASA Astrophysics Data System (ADS)
Ilham Samanlangi, Andi
2018-03-01
The purpose of this research is to image subsurface resistivity for coal identification in Panaikang Village, Sinjai, South Sulawesi. Resistivity measurements were conducted along 3 lines of 400 meters and 300 meters length using resistivity imaging with a dipole-dipole configuration. Resistivity data were processed using the Res2DInv software to image resistivity variations and interpret lithology. The results show that the coal resistivity in Line 1 is about 70-200 Ωm, in Line 2 about 70-90 Ωm, and in Line 3 about 70-200 Ωm, with an average thickness of about 10 meters, distributed to the east of the research area.
A real-time MTFC algorithm of space remote-sensing camera based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Liting; Huang, Gang; Lin, Zhe
2018-01-01
A real-time MTFC algorithm for a space remote-sensing camera based on FPGA was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is operating on orbit. The image restoration algorithm adopted a modular design. The on-orbit MTF measurement module calculates the edge spread function, line spread function, ESF difference, normalized MTF, and MTFC parameters. The MTFC image filtering and noise suppression module performs the filtering algorithm and effectively suppresses noise. The algorithm used System Generator to design the image processing algorithms, simplifying the system design structure and the redesign process. Image grey gradient, dot sharpness, edge contrast, and medium-high frequency content were enhanced. The image SNR after restoration decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
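The on-orbit MTF measurement chain (edge spread function, line spread function, normalized MTF) can be sketched on the ground in a few NumPy lines, as below; the Hanning taper and the use of a 1-D oversampled ESF are assumptions for illustration, and the derivation of the MTFC filter coefficients is not included.

```python
import numpy as np

def mtf_from_esf(esf):
    """Normalized MTF from an oversampled edge spread function:
    ESF -> derivative (line spread function) -> |FFT| -> normalize at DC."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    lsf = lsf * np.hanning(lsf.size)        # taper to limit truncation leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```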
Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O
2014-01-01
Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which raises efficiency costs for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm) and the appropriate image processing techniques was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart from the National Pork Producers Council. Image analysis techniques (Gabor filter, wide line detector, and spectral averaging) were applied to extract texture, line, and spectral features, respectively, from NIR images of pork. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regressions based on derivatives of mean spectra and line features at key wavelengths. The results showed that the derivatives of both texture and spectral features produced good results, with correlation coefficients of validation of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results revealed the great potential of the Gabor filter for analyzing NIR images of pork for the effective and efficient objective evaluation of pork marbling.
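As an illustration of the Gabor texture features used above, the sketch below computes mean response magnitudes of a small Gabor filter bank on one NIR band image; the frequencies and orientations are assumed values, not the parameters reported in the study, and the wide line detector and stepwise wavelength selection are omitted.

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(band_image,
                           frequencies=(0.05, 0.1, 0.2),
                           orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean Gabor response magnitude per (frequency, orientation) pair,
    computed on a single NIR band image of the pork sample."""
    features = []
    for f in frequencies:
        for theta in orientations:
            real, imag = gabor(band_image, frequency=f, theta=theta)
            features.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(features)
```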
Automated Coronal Loop Identification using Digital Image Processing Techniques
NASA Astrophysics Data System (ADS)
Lee, J. K.; Gary, G. A.; Newman, T. S.
2003-05-01
The results of a Master's thesis study of computer algorithms for automatic extraction and identification (i.e., collectively, "detection") of optically-thin, 3-dimensional, (solar) coronal-loop center "lines" from extreme ultraviolet and X-ray 2-dimensional images will be presented. The center lines, which can be considered to be splines, are proxies of magnetic field lines. Detecting the loops is challenging because there are no unique shapes, the loop edges are often indistinct, and because photon and detector noise heavily influence the images. Three techniques for detecting the projected magnetic field lines have been considered and will be described in the presentation. The three techniques used are (i) linear feature recognition of local patterns (related to the inertia-tensor concept), (ii) parametric space inferences via the Hough transform, and (iii) topological adaptive contours (snakes) that constrain curvature and continuity. Since coronal loop topology is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information that has also been incorporated into the detection process. Synthesized images have been generated to benchmark the suitability of the three techniques, and the performance of the three techniques on both synthesized and solar images will be presented and numerically evaluated in the presentation. The process of automatic detection of coronal loops is important in the reconstruction of the coronal magnetic field where the derived magnetic field lines provide a boundary condition for magnetic models ( cf. , Gary (2001, Solar Phys., 203, 71) and Wiegelmann & Neukirch (2002, Solar Phys., 208, 233)). . This work was supported by NASA's Office of Space Science - Solar and Heliospheric Physics Supporting Research and Technology Program.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne imaging sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well tested supervised parametric Bayesian estimator and Fuzzy Clustering. The DSA is an optimization approach, which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
NASA Astrophysics Data System (ADS)
Kirk, R. L.; Shepherd, M.; Sides, S. C.
2018-04-01
We use simulated images to demonstrate a novel technique for mitigating geometric distortions caused by platform motion ("jitter") as two-dimensional image sensors are exposed and read out line by line ("rolling shutter"). The results indicate that the Europa Imaging System (EIS) on NASA's Europa Clipper can likely meet its scientific goals requiring 0.1-pixel precision. We are therefore adapting the software used to demonstrate and test rolling shutter jitter correction to become part of the standard processing pipeline for EIS. The correction method will also apply to other rolling-shutter cameras, provided they have the operational flexibility to read out selected "check lines" at chosen times during the systematic readout of the frame area.
Acousto-optic laser projection systems for displaying TV information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulyaev, Yu V; Kazaryan, M A; Mokrushin, Yu M
2015-04-30
This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation. (review)
Optical Observation, Image-processing, and Detection of Space Debris in Geosynchronous Earth Orbit
NASA Astrophysics Data System (ADS)
Oda, H.; Yanagisawa, T.; Kurosaki, H.; Tagawa, M.
2014-09-01
We report on optical observations and an efficient detection method of space debris in the geosynchronous Earth orbit (GEO). We operate our new Australia Remote Observatory (ARO), where an 18 cm optical telescope with a charge-coupled device (CCD) camera covering a 3.14-degree field of view is used for GEO debris survey, and analyse datasets of successive CCD images using the line detection method (Yanagisawa and Nakajima 2005). In our operation, the exposure time of each CCD image is set to 3 seconds (or 5 seconds), and the interval between CCD shutter openings is about 4.7 seconds (or 6.7 seconds). In the line detection method, a sufficient number of sample objects are taken from each image based on their shape and intensity, which includes not only faint signals but also background noise (we take 500 sample objects from each image in this paper). We then search for sequences of sample objects that align in a straight line across the successive images in order to exclude the noise samples. We succeed in detecting faint signals (down to about 1.8 sigma of the background noise) by applying the line detection method to 18 CCD images. As a result, we detected about 300 GEO objects up to a magnitude of 15.5 in 5 nights of data. We also calculate orbits of the detected objects using the Simplified General Perturbations Satellite Orbit Model 4 (SGP4), and identify the objects listed in the two-line-element (TLE) data catalogue publicly provided by the U.S. Strategic Command (USSTRATCOM). We found that a number of our detections are new objects that are not contained in the catalogue. We conclude that our ARO and detection method provide highly efficient detection of GEO objects despite the use of a comparatively inexpensive observation and analysis system. We also describe the image processing specialized for the detection of GEO objects (rather than usual astronomical objects such as stars) in this paper.
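The core of the line detection idea, as described, is that a real GEO object produces candidate detections that line up across successive frames while noise does not; the toy sketch below implements that alignment test under assumed tolerances and with far fewer candidates per frame than the 500 used in the paper.

```python
"""Hedged toy version of the line-detection idea: a moving GEO object shows up
as candidates aligned with constant velocity across frames; noise does not."""
import numpy as np

def find_aligned_tracks(frames, tol=1.5):
    """frames: list of (N_i, 2) candidate pixel positions, one per CCD image.
    Returns (start, end) pairs whose linear interpolation hits a candidate in
    every intermediate frame within `tol` pixels."""
    tracks = []
    n = len(frames)
    for p0 in frames[0]:
        for p1 in frames[-1]:
            ok = True
            for k in range(1, n - 1):
                expect = p0 + (p1 - p0) * k / (n - 1)    # constant-rate motion
                if np.min(np.linalg.norm(frames[k] - expect, axis=1)) > tol:
                    ok = False
                    break
            if ok:
                tracks.append((p0, p1))
    return tracks

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_pos = np.array([50.0, 80.0])
    step = np.array([3.0, -1.0])                 # apparent drift per frame
    frames = []
    for k in range(6):
        noise = rng.uniform(0, 512, size=(40, 2))   # far fewer than 500, for speed
        frames.append(np.vstack([noise, true_pos + k * step]))
    print(len(find_aligned_tracks(frames)), "aligned track(s) found")
```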
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections to, as well as to enhance and digitally mosaic, sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone or seam matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
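The feathering step can be pictured as a linear weight ramp across the overlap of two adjacent strips; the hedged sketch below shows only that blending idea on synthetic strips, not the USGS MIPS implementation.

```python
"""Hedged illustration of feathering: weights ramp linearly across the overlap
so the mosaic transitions smoothly from one strip to the other."""
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two strips that share `overlap` columns (left's last columns
    overlap right's first columns)."""
    w = np.linspace(1.0, 0.0, overlap)                     # weight for the left strip
    blend = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

strip_a = np.full((100, 60), 120.0)                        # darker strip
strip_b = np.full((100, 60), 160.0)                        # brighter strip
mosaic = feather_blend(strip_a, strip_b, overlap=20)
print(mosaic.shape, mosaic[0, 35:65:5])                    # values ramp from 120 to 160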
NASA Technical Reports Server (NTRS)
1992-01-01
The GENETI-SCANNER, the newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides and locates, digitizes, measures, and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI bases its primary product line on NASA image processing technology. The instruments perform karyotyping, a process employed in the analysis and classification of chromosomes, using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. The product is no longer being manufactured.
Intelligence algorithms for autonomous navigation in a ground vehicle
NASA Astrophysics Data System (ADS)
Petkovsek, Steve; Shakya, Rahul; Shin, Young Ho; Gautam, Prasanna; Norton, Adam; Ahlgren, David J.
2012-01-01
This paper will discuss the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several different areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, the software of Q was modified to operate in a modular parallel manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence) running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming language. This eliminated processor bottlenecks and increased flexibility in the software architecture. Though overall throughput was increased, the long runtime of the image processing task (150 ms) reduced the precision of Q's real-time decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited to slow speeds on the course. To address this issue, the image processing software was simplified and also pipelined to increase the image processing throughput and minimize the robot's reaction times. The vision software was also modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to detect white lines in changing conditions. In order to maintain an acceptable target heading, a path history algorithm was used to deal with local obstacle fields, and GPS waypoints were added to provide a global target heading. These modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
Automated on-line fecal detection - digital eye guards against fecal contamination
USDA-ARS?s Scientific Manuscript database
Agricultural Research Service scientists in Athens, GA., have been granted a patent on a method to detect contaminants on food surfaces with imaging systems. Using a real-time imaging system in the processing plant, researchers Bob Windham, Kurt, Lawrence, Bosoon Park, and Doug Smith in the ARS Poul...
Lee, Sang-Hee; Lee, Minho; Kim, Hee-Jin
2014-10-01
We aimed to elucidate the tortuous course of the perioral artery with the aid of image processing, and to suggest accurate reference points for minimally invasive surgery. We used 59 hemifaces from 19 Korean and 20 Thai cadavers. A perioral line was defined to connect the point at which the facial artery emerged on the mandibular margin, and the ramification point of the lateral nasal artery and the inferior alar branch. The course of the perioral artery was reproduced as a graph based on the perioral line and analysed by adding the image of the artery using MATLAB. The course of the artery could be classified into two types according to the course of the alar branch: oblique and vertical. Two distinct inflection points appeared in the course of the artery along the perioral line at the ramification points of the alar branch and the inferior labial artery, respectively, and the course of the artery across the face can be predicted based on the following references: the perioral line, the ramification point of the alar branch (5∼10 mm medial to the perioral line at the level of the lower third of the upper lip) and the inferior labial artery (5∼10 mm medial to the perioral line at the level of the middle of the lower lip). Copyright © 2014 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Postek, Michael T; Vladár, András E; Lowney, Jeremiah R; Keery, William J
2002-01-01
Traditional Monte Carlo modeling of the electron beam-specimen interactions in a scanning electron microscope (SEM) produces information about electron beam penetration and output signal generation at either a single beam-landing location, or multiple landing positions. If the multiple landings lie on a line, the results can be graphed in a line scan-like format. Monte Carlo results formatted as line scans have proven useful in providing one-dimensional information about the sample (e.g., linewidth). When used this way, this process is called forward line scan modeling. In the present work, the concept of image simulation (or the first step in the inverse modeling of images) is introduced where the forward-modeled line scan data are carried one step further to construct theoretical two-dimensional (2-D) micrographs (i.e., theoretical SEM images) for comparison with similar experimentally obtained micrographs. This provides an ability to mimic and closely match theory and experiment using SEM images. Calculated and/or measured libraries of simulated images can be developed with this technique. The library concept will prove to be very useful in the determination of dimensional and other properties of simple structures, such as integrated circuit parts, where the shape of the features is preferably measured from a single top-down image or a line scan. This paper presents one approach to the generation of 2-D simulated images and presents some suggestions as to their application to critical dimension metrology.
A new data processing technique for Rayleigh-Taylor instability growth experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Yongteng; Tu, Shaoyong; Miao, Wenyong
Typical face-on experiments for Rayleigh-Taylor instability studies involve time-resolved radiography of an accelerated foil, with the line of sight of the radiography along the direction of motion. The usual method, which derives perturbation amplitudes from the face-on images, reverses the actual image transmission procedure, so the obtained results have a large error in the case of large optical depth. In order to improve the accuracy of data processing, a new data processing technique has been developed to process the face-on images. This technique is based on the convolution theorem; refined solutions of the optical depth can be achieved by solving equations. Furthermore, we discuss both techniques for image processing, including the influence of the modulation transfer function of the imaging system and the backlighter spatial profile. We also use the two methods to process the experimental results obtained at the Shenguang-II laser facility, and the comparison shows that the new method effectively improves the accuracy of data processing.
Green's function and image system for the Laplace operator in the prolate spheroidal geometry
NASA Astrophysics Data System (ADS)
Xue, Changfeng; Deng, Shaozhong
2017-01-01
In the present paper, electrostatic image theory is studied for Green's function for the Laplace operator in the case where the fundamental domain is either the exterior or the interior of a prolate spheroid. In either case, an image system is developed to consist of a point image inside the complement of the fundamental domain and an additional symmetric continuous surface image over a confocal prolate spheroid outside the fundamental domain, although the process of calculating such an image system is easier for the exterior than for the interior Green's function. The total charge of the surface image is zero and its centroid is at the origin of the prolate spheroid. In addition, if the source is on the focal axis outside the prolate spheroid, then the image system of the exterior Green's function consists of a point image on the focal axis and a line image on the line segment between the two focal points.
NASA Astrophysics Data System (ADS)
Schröter, B.; Buchroithner, M. F.; Pieczonka, T.
2015-12-01
Glaciers are characteristic elements of high mountain environments and represent key indicators for the ongoing climate change. The covering snowpack considerably affects the glacier-ice surface temperature and thus the meltdown of the glaciers, which in recent decades has been accelerating worldwide. Therefore, the detailed investigation of the transient snow is of high importance. Zhadang Glacier is located in the Nyainqentanglha Mountain Range on the central part of the Tibetan Plateau (30°28.24' N, 90°38.69' E). The glacier is debris-free and flows from 6,090 to 5,515 m a.s.l. Recent measurements have shown that the whole glacier is below the ELA and experiences significant glacier volume loss. In May 2010 two terrestrial cameras were installed there and operated continuously until September 2012, generating 6,225 images of the glacier at a frequency of three or six images per day. In order to use this dataset for snow line mapping, all images had to be georeferenced and orthorectified. The biggest challenge was the problem of shifting camera positions due to deformations of the ground and hence the offset in the image coordinates. This was resolved by combining the manual orthorectification of one image per week with a subsequent spline interpolation to determine the changed image coordinates. The actual orthorectification was finally realized by applying a fully automated batch processing of all images. The most favorable image of each day was chosen for the manual snow line mapping process. The final aim was the calculation of the mean elevation of the snow line for every day of the data-collecting period, achieved by intersecting the mapped snow lines with resampled SRTM 3 data. Despite several weeks with either full snow cover or no snow at all, this aim could be achieved. The findings have been used for the evaluation of a glacier mass balance model developed at RWTH Aachen, Germany, showing a high level of congruence. Another result is the proof of intense ablation due to snow drift and sublimation during the winter months. In 2014 a similar camera system was installed near Halji Glacier in northwestern Nepal on the southern edge of the Tibetan Plateau (30°15.80' N, 81°28.16' E; 5,730 - 5,270 m a.s.l.; ELA = 5,660 m a.s.l.). The images are currently being processed.
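The drift compensation described (manual control points once per week, spline interpolation for the days in between) might look roughly like the following sketch; the weekly offset values are invented for illustration.

```python
"""Hedged sketch of the drift-compensation idea: weekly manual image offsets
are spline-interpolated to estimate the offset for every daily image before
batch orthorectification. The offsets and time axis below are made up."""
import numpy as np
from scipy.interpolate import CubicSpline

weekly_days = np.array([0, 7, 14, 21, 28])               # days with manual control
dx_weekly = np.array([0.0, 1.8, 3.1, 3.6, 5.2])          # measured column shift (px)
dy_weekly = np.array([0.0, -0.9, -1.5, -2.4, -2.8])      # measured row shift (px)

sx = CubicSpline(weekly_days, dx_weekly)
sy = CubicSpline(weekly_days, dy_weekly)

every_day = np.arange(0, 29)
offsets = np.column_stack([sx(every_day), sy(every_day)])  # per-day (dx, dy)
print(offsets[10])   # interpolated shift applied to the day-10 image
```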
Comparison of portable and conventional ultrasound imaging in spinal curvature measurement
NASA Astrophysics Data System (ADS)
Yan, Christina; Tabanfar, Reza; Kempston, Michael; Borschneck, Daniel; Ungi, Tamas; Fichtinger, Gabor
2016-03-01
PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks, but bones have reduced visibility in ultrasound imaging, and high quality ultrasound machines are often expensive and not portable. In this work, we investigate the image quality and measurement accuracy of a low-cost, portable ultrasound machine in comparison to a standard ultrasound machine in scoliosis monitoring. METHODS: Two different kinds of ultrasound machines were tested on three human subjects, using the same position tracker and software. Spinal curves were measured in the same reference coordinate system using both ultrasound machines. Lines were defined by connecting two symmetric landmarks identified on the left and right transverse processes of the same vertebra, and spinal curvature was defined as the transverse process angle between two such lines, projected on the coronal plane. RESULTS: Three healthy volunteers were scanned by both ultrasound configurations. Three experienced observers localized transverse processes as skeletal landmarks and obtained transverse process angles in images obtained from both ultrasounds. The mean difference per transverse process angle measured was 3.0° +/- 2.1°. 94% of transverse processes visualized in the Sonix Touch were also visible in the Telemed. Inter-observer error was 4.5° in the Telemed and 4.3° in the Sonix Touch. CONCLUSION: Price, convenience and accessibility suggest the Telemed to be a viable alternative in scoliosis monitoring; however, further improvements in measurement protocol and image noise reduction must be completed before implementing the Telemed in the clinical setting.
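A minimal sketch of the transverse process angle computation is shown below: each vertebra defines a line through its two landmarks, and the angle between two such lines is taken after projecting onto the coronal plane. The choice of the x-z plane as the coronal plane and the landmark coordinates are assumptions.

```python
"""Hedged sketch: angle between two landmark lines projected on the coronal
plane. Landmark coordinates are invented for illustration."""
import numpy as np

def coronal_angle(left_a, right_a, left_b, right_b):
    """Angle (degrees) between two landmark lines after projecting onto the
    coronal plane, taken here as the x-z plane of the tracker frame (assumed)."""
    va = np.asarray(right_a, float) - np.asarray(left_a, float)
    vb = np.asarray(right_b, float) - np.asarray(left_b, float)
    va, vb = va[[0, 2]], vb[[0, 2]]          # drop the anterior-posterior axis
    cosang = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# two vertebrae, landmarks given as (x, y, z) in a common reference frame
print(coronal_angle((-20, 5, 100), (20, 6, 104),
                    (-20, 4, 140), (20, 5, 132)))   # a few degrees of tilt
```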
Differential effects of cognitive load on emotion: Emotion maintenance versus passive experience.
DeFraine, William C
2016-06-01
Two separate lines of research have examined the effects of cognitive load on emotional processing with similar tasks but seemingly contradictory results. Some research has shown that the emotions elicited by passive viewing of emotional images are reduced by subsequent cognitive load. Other research has shown that such emotions are not reduced by cognitive load if the emotions are actively maintained. The present study sought to compare and resolve these two lines of research. Participants either passively viewed negative emotional images or maintained the emotions elicited by the images, and after a delay rated the intensity of the emotion they were feeling. Half of the trials included a math task during the delay to induce cognitive load, and the other half did not. Results showed that cognitive load reduced the intensity of negative emotions during passive viewing of emotional images but not during emotion maintenance. The present study replicates the findings of both lines of research, and shows that the key factor is whether or not emotions are actively maintained. Also, in the context of previous emotion maintenance research, the present results support the theoretical idea of a separable emotion maintenance process. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Inspecting rapidly moving surfaces for small defects using CNN cameras
NASA Astrophysics Data System (ADS)
Blug, Andreas; Carl, Daniel; Höfler, Heinrich
2013-04-01
A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. One example is wire drawing, where kilometers of cylindrical metal surface moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time with frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates on line cameras of between 360 and 880 kHz, far beyond what is currently available. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN based system outperforms conventional image processing systems by an order of magnitude.
Photoacoustic projection imaging using an all-optical detector array
NASA Astrophysics Data System (ADS)
Bauer-Marschallinger, J.; Felbermayer, K.; Berer, T.
2018-02-01
We present a prototype for all-optical photoacoustic projection imaging. By generating projection images, photoacoustic information of large volumes can be retrieved with less effort compared to common photoacoustic computed tomography, where many detectors and/or multiple measurements are required. In our approach, an array of 60 integrating line detectors is used to acquire photoacoustic waves. The line detector array consists of fiber-optic Mach-Zehnder interferometers, distributed on a cylindrical surface. From the measured variation of the optical path lengths of the interferometers, induced by photoacoustic waves, a photoacoustic projection image can be reconstructed. The resulting images represent the projection of the three-dimensional spatial light absorbance within the imaged object onto a two-dimensional plane, perpendicular to the line detector array. The fiber-optic detectors achieve a noise-equivalent pressure of 24 pascals at a 10 MHz bandwidth. We present the operational principle, the structure of the array, and resulting images. The system can acquire high-resolution projection images of large volumes within a short period of time. Imaging large volumes at high frame rates facilitates monitoring of dynamic processes.
The 3D scanner prototype utilize object profile imaging using line laser and octave software
NASA Astrophysics Data System (ADS)
Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus
2016-11-01
Three-dimensional scanner or 3D Scanner is a device to reconstruct a real object into digital form on a computer. The 3D Scanner is a technology that is being developed, especially in developed countries, where current 3D Scanner devices are advanced versions with very expensive prices. This study is basically a simple prototype of a 3D Scanner with a very low investment cost. The 3D Scanner prototype device consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with the same radius from their center point (object pivot). Scanning is performed by imaging the object profile with the line laser, which is then captured by the camera and processed by a computer (image processing) using Octave software. On each image acquisition, the scanned object on the rotating desk is rotated by a certain degree, so that for one full turn multiple images covering all sides of the object are finally obtained. Then, the profile of all the images is extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated with a length standard, called a gage block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
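The per-image profile extraction could, for example, reduce each column of the laser image to the brightest row, as in the hedged sketch below; the synthetic image and the gage-block-derived scale factor are placeholders.

```python
"""Hedged sketch of the profile-imaging step: for each image column, the row
of the laser return approximates the object silhouette, then a calibration
factor converts pixels to millimetres. Image and constants are synthetic."""
import numpy as np

def laser_profile(image, threshold=0.5):
    """Per-column row index of the laser line (sub-pixel intensity centroid)."""
    rows = np.arange(image.shape[0])[:, None]
    mask = image > threshold * image.max()
    weights = np.where(mask, image, 0.0)
    col_sum = weights.sum(axis=0)
    return np.where(col_sum > 0,
                    (weights * rows).sum(axis=0) / np.maximum(col_sum, 1e-9),
                    np.nan)

rng = np.random.default_rng(3)
img = rng.random((240, 320)) * 0.1                      # webcam noise floor
true_rows = (120 + 30 * np.sin(np.linspace(0, np.pi, 320))).astype(int)
img[true_rows, np.arange(320)] = 1.0                    # the bright laser stripe
mm_per_pixel = 0.35                                     # from gage-block calibration (assumed)
profile_mm = laser_profile(img) * mm_per_pixel
print(profile_mm[:5])
```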
NASA Astrophysics Data System (ADS)
Gilra, D. P.; Pwa, T. H.; Arnal, E. M.; de Vries, J.
1982-06-01
In order to process and analyze high resolution IUE data on a large number of interstellar lines in a large number of images for a large number of stars, computer programs were developed for 115 lines in the short wavelength range and 40 in the long wavelength range. Programs include extraction, processing, plotting, averaging, and profile fitting. Wavelength calibration in high resolution spectra, fixed pattern noise, instrument profile and resolution, and the background problem in the region where orders are crowding are discussed. All the expected lines are detected in at least one spectrum.
Robotic Vision-Based Localization in an Urban Environment
NASA Technical Reports Server (NTRS)
Mchenry, Michael; Cheng, Yang; Matthies
2007-01-01
A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: This component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. This component incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images. Urban Feature Detection and Ranging: Using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. The basic sequence of processes performed by this component is the following: 1. An edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image. 2. Straight-line segments of edges are extracted from the linked lists generated in step 1. Any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects. 3. A gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
Prediction of processing tomato peeling outcomes
USDA-ARS?s Scientific Manuscript database
Peeling outcomes of processing tomatoes were predicted using multivariate analysis of Magnetic Resonance (MR) images. Tomatoes were obtained from a whole-peel production line. Each fruit was imaged using a 7 Tesla MR system, and a multivariate data set was created from 28 different images. After ...
NASA Astrophysics Data System (ADS)
Lancaster, N.; LeBlanc, D.; Bebis, G.; Nicolescu, M.
2015-12-01
Dune-field patterns are believed to behave as self-organizing systems, but what causes the patterns to form is still poorly understood. The most obvious (and in many cases the most significant) aspect of a dune system is the pattern of dune crest lines. Extracting meaningful features such as crest length, orientation, spacing, bifurcations, and merging of crests from image data can reveal important information about the specific dune-field morphological properties, development, and response to changes in boundary conditions, but manual methods are labor-intensive and time-consuming. We are developing the capability to recognize and characterize patterns of sand dunes on planetary surfaces. Our goal is to develop a robust methodology and the necessary algorithms for automated or semi-automated extraction of dune morphometric information from image data. Our main approach uses image processing methods to extract gradient information from satellite images of dune fields. Typically, the gradients have a dominant magnitude and orientation. In many cases, the images have two major dominant gradient orientations, for the sunny and shaded sides of the dunes. A histogram of the gradient orientations is used to determine the dominant orientation. A threshold is applied to the image based on gradient orientations which agree with the dominant orientation. The contours of the binary image can then be used to determine the dune crest-lines, based on pixel intensity values. Once the crest-lines have been extracted, the morphological properties can be computed. We have tested our approach on a variety of images of linear and crescentic (transverse) dunes and compared dune detection algorithms with manually digitized dune crest lines, achieving true positive values of 0.57-0.99 and false positive values of 0.30-0.67, indicating that our approach is generally robust.
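A hedged sketch of the gradient-orientation step follows: gradients are computed, their orientation histogram gives the dominant direction, and pixels agreeing with it are kept as candidates for crest-line contouring. The synthetic dune image and the agreement tolerance are assumptions.

```python
"""Hedged sketch: gradient orientation histogram, dominant direction, and a
binary candidate map for crest-line extraction on a synthetic dune image."""
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 8 * np.pi, 512)
img = np.sin(x)[None, :] * np.ones((512, 1)) + 0.05 * rng.normal(size=(512, 512))

gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
orient = np.degrees(np.arctan2(gy, gx))                  # -180 to 180

hist, edges = np.histogram(orient, bins=36, range=(-180, 180), weights=mag)
dominant = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

tol = 15.0                                               # degrees of agreement (assumed)
diff = np.abs((orient - dominant + 180) % 360 - 180)
candidate_mask = (diff < tol) & (mag > 0.3 * mag.max())  # binary map for contouring
print("dominant orientation:", dominant, "candidate pixels:", int(candidate_mask.sum()))
```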
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem, with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm, multiple models with different process noise covariances are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
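To convey the multiple-model idea without the full IMM interaction step, the heavily simplified sketch below runs two scalar Kalman filters with different process-noise levels and fuses them by likelihood-weighted model probabilities; the EIT forward model and the complete IMM mixing are deliberately omitted.

```python
"""Hedged, heavily simplified multiple-model sketch: two scalar Kalman filters
with different process-noise covariances track a state that jumps abruptly (a
stand-in for a resistivity value); their likelihood-weighted fusion adapts."""
import numpy as np

def kf_step(x, p, z, q, r):
    """One predict/update cycle of a scalar random-walk Kalman filter.
    Returns updated state, covariance, and the measurement likelihood."""
    p_pred = p + q
    s = p_pred + r                                  # innovation covariance
    k = p_pred / s
    innov = z - x
    like = np.exp(-0.5 * innov**2 / s) / np.sqrt(2 * np.pi * s)
    return x + k * innov, (1 - k) * p_pred, like

rng = np.random.default_rng(5)
truth = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])   # abrupt jump
meas = truth + rng.normal(0, 0.2, truth.size)

qs = [1e-5, 1e-1]                     # "slow" and "fast" model process noise
xs, ps = [0.0, 0.0], [1.0, 1.0]
mu = np.array([0.5, 0.5])             # model probabilities
for z in meas:
    likes = np.empty(2)
    for i, q in enumerate(qs):
        xs[i], ps[i], likes[i] = kf_step(xs[i], ps[i], z, q, 0.2**2)
    mu = mu * likes
    mu = mu / mu.sum()
    fused = mu @ np.array(xs)
print("final fused estimate:", fused, "model probabilities:", mu)
```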
Sopori, Bhushan; Rupnowski, Przemyslaw; Ulsh, Michael
2016-01-12
A monitoring system 100 comprising a material transport system 104 providing for the transportation of a substantially planar material 102, 107 through the monitoring zone 103 of the monitoring system 100. The system 100 also includes a line camera 106 positioned to obtain multiple line images across a width of the material 102, 107 as it is transported through the monitoring zone 103. The system 100 further includes an illumination source 108 providing for the illumination of the material 102, 107 transported through the monitoring zone 103 such that light reflected in a direction normal to the substantially planar surface of the material 102, 107 is detected by the line camera 106. A data processing system 110 is also provided in digital communication with the line camera 106. The data processing system 110 is configured to receive data output from the line camera 106 and further configured to calculate and provide substantially contemporaneous information relating to a quality parameter of the material 102, 107. Also disclosed are methods of monitoring a quality parameter of a material.
3D scan line method for identifying void fabric of granular materials
NASA Astrophysics Data System (ADS)
Theocharis, Alexandros I.; Vairaktaris, Emmanouil; Dafalias, Yannis F.
2017-06-01
Among the processes for measuring the void phase of porous or fractured media, the scan line approach is a simplified "graphical" method, mainly used in image-processing-related procedures. In soil mechanics, the application of the scan line method is related to the soil fabric, which is important in characterizing the anisotropic mechanical response of soils. Void fabric is of particular interest, since graphical approaches are well defined experimentally and most of them can also be easily used in numerical experiments, like the scan line method. This is in contrast to the definition of fabric based on contact normal vectors, which are extremely difficult to determine, especially in physical experiments. The scan line method was proposed by Oda et al. [1] and implemented again by Ghedia and O'Sullivan [2]. A modified method based on DEM analysis instead of image measurements of fabric has been previously proposed and implemented by the authors in a 2D scheme [3-4]. In this work, a 3D extension of the modified scan line definition is presented using PFC 3D®. The results clearly show trends similar to the 2D case, and the same fabric anisotropy behaviour is observed.
Fast and accurate image recognition algorithms for fresh produce food safety sensing
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.
2011-06-01
This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
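A band-combination rule of this general kind can be sketched as below; the specific ratio, cutoff, and synthetic band images are assumptions, not the published detection functions.

```python
"""Hedged sketch of a simple multispectral rule: combine fluorescence
intensities at the named wavebands and threshold to flag contamination."""
import numpy as np

def detect_contamination(b680, b684, b720, b780, cutoff=1.25):
    """Boolean map of pixels whose red/far-red fluorescence ratio exceeds a
    cutoff; the function and cutoff are assumptions, not the paper's."""
    ratio = (b680 + b684) / (b720 + b780 + 1e-9)
    return ratio > cutoff

rng = np.random.default_rng(6)
shape = (128, 128)
b680, b684 = rng.normal(100, 5, shape), rng.normal(98, 5, shape)
b720, b780 = rng.normal(90, 5, shape), rng.normal(85, 5, shape)
b680[40:50, 40:50] += 60                                  # simulated fecal spot
mask = detect_contamination(b680, b684, b720, b780)
print("flagged pixels:", int(mask.sum()))
```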
3D Power Line Extraction from Multiple Aerial Images.
Oh, Jaehong; Lee, Changno
2017-09-29
Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters.
3D Power Line Extraction from Multiple Aerial Images
Lee, Changno
2017-01-01
Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters. PMID:28961204
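The object-to-image idea can be sketched as follows: candidate 3D grid points are projected into each image and retained only when they fall on detected power-line pixels in enough views; the projection matrices and masks below are synthetic placeholders, not the paper's data.

```python
"""Hedged sketch of the object-to-image space approach: keep cubic grid points
whose projections land on line pixels in (nearly) all aerial images."""
import numpy as np

def keep_grid_points(grid_xyz, proj_mats, line_masks, min_views=3):
    """grid_xyz: (N, 3) candidate points; proj_mats: list of 3x4 matrices;
    line_masks: list of boolean images marking detected power-line pixels."""
    kept = []
    for X in grid_xyz:
        hits = 0
        Xh = np.append(X, 1.0)
        for P, mask in zip(proj_mats, line_masks):
            u = P @ Xh
            col, row = int(round(u[0] / u[2])), int(round(u[1] / u[2]))
            if 0 <= row < mask.shape[0] and 0 <= col < mask.shape[1] and mask[row, col]:
                hits += 1
        if hits >= min_views:
            kept.append(X)
    return np.array(kept)

if __name__ == "__main__":
    P = np.array([[100., 0., 64., 0.],
                  [0., 100., 64., 0.],
                  [0., 0., 1., 0.]])               # toy pinhole projection
    mask = np.zeros((128, 128), bool)
    mask[:, 63:66] = True                          # a "power line" running down the image
    xs = np.linspace(-3, 3, 13)
    grid = np.array([[x, y, 10.0] for x in xs for y in xs])
    kept = keep_grid_points(grid, [P, P], [mask, mask], min_views=2)
    print(len(kept), "grid points lie on the line in both views")
```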
Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato
2015-03-08
The objective of this study was to improve the visibility of anatomical details by applying off-line postimage processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were applied to sample images, and their visual appearances were confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists.
Msaki, Peter; Padovani, Renato
2015-01-01
The objective of this study was to improve the visibility of anatomical details by applying off-line postimage processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were applied to sample images, and their visual appearances were confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists. PACS number: 87.59.-e, 87.59.-B, 87.59.-bd PMID:26103165
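A Python analogue of the kind of off-line post-processing described (the study used MATLAB) might combine a percentile intensity adjustment with a linear high-boost filter, as in the hedged sketch below; parameter values are illustrative.

```python
"""Hedged analogue of intensity adjustment plus spatial linear filtering:
an imadjust-like percentile stretch followed by an unsharp-mask filter."""
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_intensity(img, low_pct=1, high_pct=99):
    """Stretch the central percentile range to the full 0..1 scale."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def unsharp(img, sigma=3.0, amount=0.8):
    """Linear high-boost filtering to emphasise anatomical detail."""
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

rng = np.random.default_rng(7)
raw = gaussian_filter(rng.random((256, 256)), 8)      # stand-in for a chest CR image
processed = unsharp(adjust_intensity(raw))
print(processed.min(), processed.max())
```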
Archival Study of Energetic Processes in the Upper Atmosphere of the Outer Planets
NASA Technical Reports Server (NTRS)
Ballester, Gilda E.; Harris, Walter M.
1998-01-01
We compare International Ultraviolet Explorer (IUE) spectral observations of Jupiter's ultraviolet (UV) aurora in H Lyman-alpha (H-Lya) and H2 emissions with HST images of the UV aurora to make more realistic interpretations of the IUE dataset. We use the limited spatial information in the IUE line-by-line spectra of the bright H-Lya line emission, in the form of pseudo-monochromatic images at the IUE 3.5 arcsec resolution (Lya pseudo-images), to derive information on the emissions. We analyse H2 spectra of Saturn's UV aurora to infer the atmospheric level of auroral excitation from the methane absorption (color ratios). We analyse a Uranus IUE dataset to determine periodicity in the emissions attributable to auroral emission fixed in magnetic longitude. We review the results from IUE observations of the major planets' upper atmospheres and their interactions with the planets' magnetospheres. We analyse IUE spectra of the UV emissions from Io to identify excitation processes and infer properties of the Io-torus-Jupiter system.
Geometric correction method for 3d in-line X-ray phase contrast image reconstruction
2014-01-01
Background: A mechanical system with imperfect alignment of the X-ray phase contrast imaging (XPCI) components causes the projection data to be misplaced, and thus results in blurred reconstructed computed tomography (CT) slice images or images with edge artifacts. The features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. Methods: To remove unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper by considering the primary geometric parameters, which include a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem. An iterative approach is employed to solve the maximization problem using a two-step scheme, which performs a composite geometric transformation and then a linear regression. After applying the geometric transformation with optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct the CT slice images. Results: Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. Experimental results demonstrate that the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time, compared to existing correction methods. Conclusions: The method proposed in this paper provides an effective projection data correction scheme and significantly improves image quality by removing both blurring and edge artifacts at the same time for in-line XPCI. It is easy to implement and can also be extended to other XPCI techniques. PMID:25069768
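A hedged sketch of the correction idea follows: a candidate rotation and shift are applied to the projection data, the slice is reconstructed with standard filtered back-projection, and the best candidate is selected; here a simple grid search scored against a known phantom stands in for the paper's maximization problem.

```python
"""Hedged sketch: apply candidate rotation/shift corrections to a distorted
sinogram, reconstruct with filtered back-projection, and keep the best one.
The phantom, distortion values, and scoring criterion are assumptions."""
import numpy as np
from scipy.ndimage import rotate, shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)            # small test object
theta = np.linspace(0., 180., 90, endpoint=False)
sino = radon(phantom, theta=theta)
distorted = shift(rotate(sino, 1.5, reshape=False), (2.0, 0.0))   # simulated misalignment

def score(rec, ref):
    """Demo criterion: similarity to the known phantom (the paper instead
    maximises a criterion that needs no ground truth)."""
    return float(np.corrcoef(rec.ravel(), ref.ravel())[0, 1])

best = None
for ang in np.arange(-3, 3.1, 0.5):
    for dy in np.arange(-4, 4.1, 2.0):
        corrected = shift(rotate(distorted, ang, reshape=False), (dy, 0.0))
        rec = iradon(corrected, theta=theta, filter_name='ramp')
        s = score(rec, phantom)
        if best is None or s > best[0]:
            best = (s, ang, dy)
print("estimated correction: rotation %.1f deg, shift %.1f rows" % (best[1], best[2]))
```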
Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Khinast, Johannes G
2014-05-13
Optical coherence tomography (OCT) is a contact-free, non-destructive, high-resolution imaging technique based on low-coherence interferometry. This study investigates the application of spectral-domain OCT as an in-line quality control tool for monitoring pharmaceutical film-coated tablets. OCT images of several commercially available film-coated tablets of different shapes, formulations and coating thicknesses were captured off-line using two OCT systems with centre wavelengths of 830 nm and 1325 nm. Based on the off-line image evaluation, another OCT system operating at a shorter wavelength was selected to study the feasibility of OCT as an in-line monitoring method. Since in spectral-domain OCT motion artefacts can occur as a result of tablet or sensor head movement, a basic understanding of the relationship between the tablet speed and the motion effects is essential for correctly quantifying and qualifying the tablet coating. Experimental data were acquired by moving the sensor head of the OCT system across a static tablet bed. Although examining the homogeneity of the coating became more difficult with increasing transverse speed of the tablets, the determination of the coating thickness remained highly accurate at speeds up to 0.7 m/s. The presented OCT setup enables the investigation of the intra- and inter-tablet coating uniformity in-line during the coating process. Copyright © 2014 Elsevier B.V. All rights reserved.
Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk
2015-01-01
Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important; therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block for developing such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
Using quantum filters to process images of diffuse axonal injury
NASA Astrophysics Data System (ADS)
Pineda Osorio, Mateo
2014-06-01
Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull, and Morse filters. Diffuse axonal injury is a particular, common, and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum image processing of DAI images is carried out using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make its use simpler for medical personnel, is proposed as a future research line.
Metric Aspects of Digital Images and Digital Image Processing.
1984-09-01
produced in a reconstructed digital image. Synthesized aerial photographs were formed by processing a combined elevation and orthophoto data base. These include a) brightness values h11 and h12, and b) a line equation whose two parameters are calculated, along with the borderline that separates the two intensity regions.
Combined neutron and x-ray imaging at the National Ignition Facility (invited)
Danly, C. R.; Christensen, K.; Fatherley, Valerie E.; ...
2016-10-11
X-ray and neutrons are commonly used to image Inertial Confinement Fusion implosions, providing key diagnostic information on the fuel assembly of burning DT fuel. The x-ray and neutron data provided are complementary as the production of neutrons and x-rays occur from different physical processes, but typically these two images are collected from different views with no opportunity for co-registration of the two images. Neutrons are produced where the DT fusion fuel is burning; X-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production but can only be confidently observed if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a Combined Neutron X-ray Imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line-of-sight. Here, this system is described, and initial results are presented along with prospects for definitive coregistration of the images.
Combined neutron and x-ray imaging at the National Ignition Facility (invited).
Danly, C R; Christensen, K; Fatherley, V E; Fittinghoff, D N; Grim, G P; Hibbard, R; Izumi, N; Jedlovec, D; Merrill, F E; Schmidt, D W; Simpson, R A; Skulina, K; Volegov, P L; Wilde, C H
2016-11-01
X-ray and neutrons are commonly used to image inertial confinement fusion implosions, providing key diagnostic information on the fuel assembly of burning deuterium-tritium (DT) fuel. The x-ray and neutron data provided are complementary as the production of neutrons and x-rays occurs from different physical processes, but typically these two images are collected from different views with no opportunity for co-registration of the two images. Neutrons are produced where the DT fusion fuel is burning; X-rays are produced in regions corresponding to high temperatures. Processes such as mix of ablator material into the hotspot can result in increased x-ray production and decreased neutron production but can only be confidently observed if the two images are collected along the same line of sight and co-registered. To allow direct comparison of x-ray and neutron data, a combined neutron x-ray imaging system has been tested at Omega and installed at the National Ignition Facility to collect an x-ray image along the currently installed neutron imaging line of sight. This system is described, and initial results are presented along with prospects for definitive coregistration of the images.
On-line range images registration with GPGPU
NASA Astrophysics Data System (ADS)
Będkowski, J.; Naruniec, J.
2013-03-01
This paper concerns the implementation of algorithms in two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with a RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure for obtaining normal vectors for each range point.
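The cubic-bucket decomposition behind the deterministic nearest-neighbour search can be sketched on the CPU as below; the bucket size and point cloud are assumptions, and the actual CUDA kernel is not reproduced.

```python
"""Hedged CPU sketch of bucket decomposition for nearest-neighbour search:
points are hashed into cubic cells, so a query inspects only its own cell and
the 26 neighbours, giving a bounded amount of work per point."""
import numpy as np
from collections import defaultdict

def build_buckets(points, cell):
    buckets = defaultdict(list)
    for i, p in enumerate(points):
        buckets[tuple((p // cell).astype(int))].append(i)
    return buckets

def nearest(query, points, buckets, cell):
    """Index of the nearest point, searching only adjacent cubic buckets."""
    cx, cy, cz = (query // cell).astype(int)
    best_i, best_d = -1, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in buckets.get((cx + dx, cy + dy, cz + dz), []):
                    d = np.sum((points[i] - query) ** 2)
                    if d < best_d:
                        best_i, best_d = i, d
    return best_i

rng = np.random.default_rng(8)
cloud = rng.uniform(0, 10, size=(5000, 3))                # one 3D scan
cell = 0.5                                                # bucket edge length (assumed)
buckets = build_buckets(cloud, cell)
q = np.array([5.0, 5.0, 5.0])
print("nearest index:", nearest(q, cloud, buckets, cell))
```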
Vision-based in-line fabric defect detection using yarn-specific shape features
NASA Astrophysics Data System (ADS)
Schneider, Dorian; Aach, Til
2012-01-01
We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state-of-the-art detection algorithms apply texture analysis methods to operate on low-resolution (~200 ppi) image data, we describe here a process flow to segment single yarns in high-resolution (~1000 ppi) textile images. Four yarn shape features are extracted, allowing a precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real-time requirements and face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed, followed by an evaluation using a database with real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.
Digital image transformation and rectification of spacecraft and radar images
Wu, S.S.C.
1985-01-01
Digital image transformation and rectification can be described in three categories: (1) digital rectification of spacecraft pictures on workable stereoplotters; (2) digital correction of radar image geometry; and (3) digital reconstruction of shaded relief maps and perspective views including stereograms. Digital rectification can make high-oblique pictures workable on stereoplotters that would otherwise not accommodate such extreme tilt angles. It also enables panoramic line-scan geometry to be used to compile contour maps with photogrammetric plotters. Rectifications were digitally processed on both Viking Orbiter and Lander pictures of Mars as well as radar images taken by various radar systems. By merging digital terrain data with image data, perspective and three-dimensional views of Olympus Mons and Tithonium Chasma, also of Mars, are reconstructed through digital image processing. © 1985.
Concurrent-scene/alternate-pattern analysis for robust video-based docking systems
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol
1991-01-01
A typical docking target employs a three-point design of retroreflective tape, one at each endpoint of the center-line, and one on the tip of the central post. Scenes, sensed via laser diode illumination, produce pictures with spots corresponding to desired reflections from the retroreflectors and other reflections. Control corrections for each axis of the vehicle can then be properly applied if the desired spots are accurately tracked. However, initial acquisition of these three spots (the detection and identification problem) is non-trivial in a severe noise environment. Signal-to-noise enhancement, accomplished by subtracting the non-illuminated scene from the target scene illuminated by laser diodes, cannot eliminate every false spot. Hence, minimizing docking failures due to target mistracking suggests the inclusion of additional processing features pertaining to target locations. In this paper, we present a concurrent processing scheme for a modified docking target scene which could lead to a perfect docking system. Since the non-illuminated target scene is already available, adding another feature to the three-point design by marking two non-reflective lines, one between the two end-points and one from the tip of the central post to the center-line, would allow this line feature to be picked up only when capturing the background scene (sensor data without laser illumination). Therefore, instead of performing the image subtraction to generate a picture with a high signal-to-noise ratio, a processed line-image based on the robust line detection technique (Hough transform) can be fused with the actively sensed three-point target image to deduce the true locations of the docking target. This dual-channel confirmation scheme is necessary if a fail-safe system is to be realized from both the sensing and processing points of view. Detailed algorithms and preliminary results are presented.
How Digital Image Processing Became Really Easy
NASA Astrophysics Data System (ADS)
Cannon, Michael
1988-02-01
In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic, and the rapid increase of commercial companies to market digital image processing software and hardware.
Interactive full channel teletext system for cable television nets
NASA Astrophysics Data System (ADS)
Vandenboom, H. P. A.
1984-08-01
A demonstration set-up of an interactive full channel teletext (FCT) system for cable TV networks with two-way data communication was designed and realized. In FCT, all image lines are used as teletext data lines. The FCT decoder was placed in the mini-star, and the FCT encoder, which provides the FCT signal, was placed in the local center. An extra FCT decoder selects a number of data lines from the FCT signal and places them on the image lines reserved for teletext, so that a normal TV receiver equipped with a teletext decoder can process the selected data lines. For texts not available in the FCT signal, a command can be sent to the local center via the data communication path. The result is a cheap and simple system in which the number of pages or books that can be requested is in principle unlimited, while the required waiting time and channel capacity remain limited.
Performance prediction of optical image stabilizer using SVM for shaker-free production line
NASA Astrophysics Data System (ADS)
Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo
2016-04-01
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshake conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
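A minimal sketch of this kind of SVM-based pass/fail prediction is given below, assuming a feature table with gyroscope noise density, Hall/actuator linearity, and cross-axis columns; the column layout, function names, and synthetic numbers are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of SVM-based OIS quality prediction on module features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import recall_score

def train_ois_classifier(X, y):
    """X: (n_modules, n_features) measurements; y: 1 = good module, 0 = defective."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    model.fit(X, y)
    return model

# Example with synthetic numbers just to show the call pattern.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))   # gyro NSD, hall linearity, actuator linearity, cross-axis
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
clf = train_ois_classifier(X[:150], y[:150])
print("recall:", recall_score(y[150:], clf.predict(X[150:])))
```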
Image data-processing system for solar astronomy
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.
1977-01-01
The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.
Implicit multiplane 3D camera calibration matrices for stereo image processing
NASA Astrophysics Data System (ADS)
McKee, James W.; Burgett, Sherrie J.
1997-12-01
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually four to seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB (registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin. This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
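The core idea of treating calibration as matrix estimation can be sketched with a single least-squares (DLT-style) projection matrix, as below. The paper's actual formulation uses four matrices plus multi-plane and distortion terms, so this is only a simplified illustration with assumed function names.

```python
# Hedged sketch: fit a 3x4 matrix P mapping homogeneous 3D calibration points
# to 2D image points by linear least squares (DLT-style).
import numpy as np

def fit_projection_matrix(pts3d, pts2d):
    """pts3d: (n, 3) object-space points; pts2d: (n, 2) pixel coordinates.
    Returns P with [u, v, 1]^T ~ P @ [x, y, z, 1]^T."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, ((x, y, z), (u, v)) in enumerate(zip(pts3d, pts2d)):
        X = [x, y, z, 1.0]
        A[2 * i, 0:4] = X
        A[2 * i, 8:12] = [-u * c for c in X]
        A[2 * i + 1, 4:8] = X
        A[2 * i + 1, 8:12] = [-v * c for c in X]
    # Smallest right singular vector gives the homogeneous least-squares solution.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Map 3D points to pixel coordinates with the fitted matrix."""
    homog = np.hstack([pts3d, np.ones((len(pts3d), 1))]) @ P.T
    return homog[:, :2] / homog[:, 2:3]
```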
Hexagonal Pixels and Indexing Scheme for Binary Images
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
2004-01-01
A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. Optionally, one can then rotate the rectangular image by 90°, again sample onto the hexagonal grid, and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, rotate the rectangular-grid image by ±45° before sampling to check for line segments at angular intervals of 15°.
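A minimal sketch of the resampling step is given below, assuming a hexagon width of one third of the rectangular cell width as recommended above; the staggered-row indexing and nearest-pixel sampling are illustrative choices, not the scheme's exact tree-structured indexing.

```python
# Hedged sketch: resample a binary image from a rectangular grid onto a
# regular hexagonal grid with hex width = rect cell width / 3.
import numpy as np

def resample_to_hex(binary_img, rect_cell=1.0):
    hex_w = rect_cell / 3.0                      # hexagon width (flat-to-flat)
    dy = hex_w * np.sqrt(3) / 2.0                # vertical pitch of hex rows
    h, w = binary_img.shape
    samples = []
    row = 0
    y = 0.0
    while y < h * rect_cell:
        x = (hex_w / 2.0) if (row % 2) else 0.0  # stagger alternate rows
        while x < w * rect_cell:
            # nearest rectangular pixel supplies the binary value
            i, j = int(y / rect_cell), int(x / rect_cell)
            samples.append((row, int(x / hex_w), binary_img[min(i, h - 1), min(j, w - 1)]))
            x += hex_w
        y += dy
        row += 1
    return samples
```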
Object extraction method for image synthesis
NASA Astrophysics Data System (ADS)
Inoue, Seiki
1991-11-01
The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves some intricate and tedious manual processes. A new method proposed in this paper can reduce the needed level of operator skill and simplify object extraction. The object is automatically extracted by just a simple drawing of a thick boundary line. The basic principle involves thinning the thick-boundary-line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned-out boundary line, its ease of application to moving images, and the lack of any need for adjustment.
The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition
Zeman, Astrid; Obst, Oliver; Brooks, Kevin R.; Rich, Anina N.
2013-01-01
Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections. PMID:23457510
Rapid trench initiated recrystallization and stagnation in narrow Cu interconnect lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Brendan B.; Rizzolo, Michael; Prestowitz, Luke C.
2015-10-26
Understanding and ultimately controlling the self-annealing of Cu in narrow interconnect lines has remained a top priority in order to continue down-scaling of back-end of the line interconnects. Recently, it was hypothesized that a bottom-up microstructural transformation process in narrow interconnect features competes with the surface-initiated overburden transformation. Here, a set of transmission electron microscopy images which captures the grain coarsening process in 48 nm lines in a time resolved manner is presented, supporting such a process. Grain size measurements taken from these images have demonstrated that the Cu microstructural transformation in 48 nm interconnect lines stagnates after only 1.5 h at room temperature. This stubborn metastable structure remains stagnant, even after aggressive elevated temperature anneals, suggesting that a limited internal energy source such as dislocation content is driving the transformation. As indicated by the extremely low defect density found in 48 nm trenches, a rapid recrystallization process driven by annihilation of defects in the trenches appears to give way to a metastable microstructure in the trenches.
Fusing Image Data for Calculating Position of an Object
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang; Liebersbach, Robert; Trebi-Ollenu, Ashitey
2007-01-01
A computer program has been written for use in maintaining the calibration, with respect to the positions of imaged objects, of a stereoscopic pair of cameras on each of the Mars Exploration Rovers Spirit and Opportunity. The program identifies and locates a known object in the images. The object in question is part of a Moessbauer spectrometer located at the tip of a robot arm, the kinematics of which are known. In the program, the images are processed through a module that extracts edges, combines the edges into line segments, and then derives ellipse centroids from the line segments. The images are also processed by a feature-extraction algorithm that performs a wavelet analysis, then performs a pattern-recognition operation in the wavelet-coefficient space to determine matches to a texture feature measure derived from the horizontal, vertical, and diagonal coefficients. The centroids from the ellipse finder and the wavelet feature matcher are then fused to determine co-location. In the event that a match is found, the centroid (or centroids if multiple matches are present) is reported. If no match is found, the process reports the results of the analyses for further examination by human experts.
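The ellipse-finder half of the fusion step might look like the following minimal sketch (OpenCV 4.x API assumed); the wavelet texture matcher and the fusion logic are not reproduced, and the thresholds are illustrative.

```python
# Hedged sketch: extract edges, group them into contours, and fit ellipses
# to obtain candidate centroids for fusion with a second detector.
import cv2
import numpy as np

def ellipse_centroids(gray, min_points=20):
    """gray: 8-bit grayscale image; returns an array of (cx, cy) centroids."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    centroids = []
    for c in contours:
        if len(c) >= min_points:          # fitEllipse needs at least 5 points
            (cx, cy), _, _ = cv2.fitEllipse(c)
            centroids.append((cx, cy))
    return np.array(centroids)
```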
NASA Technical Reports Server (NTRS)
Colvocoresses, A. P. (Principal Investigator)
1980-01-01
Graphics are presented which show HCMM-mapped water-surface temperature in Lake Anna, a 13,000-acre, dendritically shaped lake which provides cooling for a nuclear power plant in Virginia. The HCMM digital data, produced by NASA, were processed by NOAA/NESS into image and line-printer form. A LANDSAT image of the lake illustrates the relationship between MSS band 7 data and the HCMM data as processed by the NASA image processing facility, which transforms the data to the same distortion-free Hotine Oblique Mercator projection. Spatial correlation of the two images is relatively simple by either digital or analog means, and the HCMM image has a potential accuracy approaching the 80 m of the original LANDSAT data. While it is difficult to get readings that are not diluted by radiation from cooler adjacent land areas in narrow portions of the lake, the digital data displayed by the line printer indicated five different temperatures for open-water areas. Where the water-surface response was not diluted by land areas, the temperature differences recorded by HCMM correspond to in situ readings with rmse on the order of 1 C.
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views for measuring distances to objects. In another version, motion of camera is controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera are sent to logic circuits and processed into corrections for motion along and across line of sight.
Feature Sets for Screenshot Detection
2013-06-01
Reprocessing of multi-channel seismic-reflection data collected in the Beaufort Sea
Agena, W.F.; Lee, Myung W.; Hart, P.E.
2000-01-01
Contained on this set of two CD-ROMs are stacked and migrated multi-channel seismic-reflection data for 65 lines recorded in the Beaufort Sea by the United States Geological Survey in 1977. All data were reprocessed by the USGS using updated processing methods resulting in improved interpretability. Each of the two CD-ROMs contains the following files: 1) 65 files containing the digital seismic data in standard, SEG-Y format; 2) 1 file containing navigation data for the 65 lines in standard SEG-P1 format; 3) an ASCII text file with cross-reference information for relating the sequential trace numbers on each line to cdp numbers and shotpoint numbers; 4) 2 small scale graphic images (stacked and migrated) of a segment of line 722 in Adobe Acrobat (R) PDF format; 5) a graphic image of the location map, generated from the navigation file; 6) PlotSeis, an MS-DOS Application that allows PC users to interactively view the SEG-Y files; 7) a PlotSeis documentation file; and 8) an explanation of the processing used to create the final seismic sections (this document).
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter coupled to the imaging sensor at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line, and images those reflections to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame and for computing the range to the detail using the height, depression angle and/or offset. The computer displays the range to the area and to the detail covered by the image frame.
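The range computation from height, depression angle, and offset can be illustrated with the simple triangulation geometry below; this is a hedged sketch of the general principle, not the patented processing pipeline, and the variable names are assumptions.

```python
# Hedged geometry sketch of laser-line triangulation, assuming the camera looks
# straight down from height H and the emitter, offset by b along the scan
# direction, projects the line at depression angle theta.
import numpy as np

def detail_height(x_hit, emitter_offset, depression_angle_rad, sensor_height):
    """x_hit: where the laser stripe appears on the surface, measured along the
    scan direction from the point directly below the camera (same units as the
    offsets).  Returns the height of the illuminated detail above the area."""
    # On a flat reference plane the stripe would land at:
    x_flat = emitter_offset + sensor_height / np.tan(depression_angle_rad)
    # A detail of height h shifts the stripe toward the emitter by h / tan(theta):
    shift = x_flat - x_hit
    return shift * np.tan(depression_angle_rad)
```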
The imaging system design of three-line LMCCD mapping camera
NASA Astrophysics Data System (ADS)
Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da
2011-08-01
In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Several pivotal designs of the imaging system are then described, including the design of the focal plane module, the video signal processing, the controller of the imaging system, and the synchronous photography of the forward, nadir and backward cameras and of the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are reported. The results are as follows: the precision of synchronous photography between the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR of each CCD image tested in the laboratory is better than 95 under typical working conditions (solar incidence angle of 30°, surface reflectivity of 0.3); and the temperature of the focal plane module is controlled under 30° over a working period of 15 minutes. These results satisfy the requirements for synchronous photography, focal plane temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.
Slot angle detecting method for fiber fixed chip
NASA Astrophysics Data System (ADS)
Zhang, Jiaquan; Wang, Jiliang; Zhou, Chaochao
2018-04-01
The slot angle of the fiber fixed chip has a significant impact on the performance of photoelectric devices. To solve this practical engineering problem, this paper puts forward a detection method based on image processing. Because the images have very low contrast and are therefore hard to segment, an image segmentation method based on edge characteristics is proposed. The slope k2 of the chip edge line and the slope k1 of the fiber fixed slot line are then obtained and used to calculate the slot angle. Finally, the repeatability and accuracy of the system were tested; the results show that the method has very fast operation speed and good robustness, and it satisfies the practical demand of fiber fixed chip slot angle detection.
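Once the two slopes k1 and k2 have been fitted, the slot angle follows from the standard angle-between-lines formula, as in this minimal sketch (edge detection and line fitting are assumed already done):

```python
# Hedged sketch: slot angle from the slopes of the slot line (k1) and the
# chip edge line (k2).  Undefined for exactly perpendicular lines (1 + k1*k2 = 0).
import numpy as np

def slot_angle_deg(k1, k2):
    """Angle between two lines with slopes k1 and k2, in degrees."""
    return np.degrees(np.arctan(abs((k1 - k2) / (1.0 + k1 * k2))))

# Example: a slot line near 45 degrees against a nearly horizontal chip edge.
print(slot_angle_deg(1.0, 0.02))   # about 43.9 degrees
```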
JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium
1990-10-29
... screen; the relative attitude is then determined. 2) Video Sensor System: Specific patterns (grapple target, etc.) drawn on the target spacecraft, or the entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and ... standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and ...
Comprehensive analysis of line-edge and line-width roughness for EUV lithography
NASA Astrophysics Data System (ADS)
Bonam, Ravi; Liu, Chi-Chun; Breton, Mary; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Muthinti, Raja; Patlolla, Raghuveer; Huang, Huai
2017-03-01
Pattern transfer fidelity is always a major challenge for any lithography process and needs continuous improvement. Lithographic processes in the semiconductor industry are primarily driven by optical imaging on photosensitive polymeric materials (resists). The quality of pattern transfer can be assessed by quantifying multiple parameters such as feature size uniformity (CD), placement, roughness, and sidewall angles. Roughness in features primarily corresponds to variation of line edge or line width and has gained considerable significance, particularly due to shrinking feature sizes, with variations on the same order as the features. This has caused downstream processes (Etch (RIE), Chemical Mechanical Polish (CMP), etc.) to reconsider their respective tolerance levels. A very important aspect of this work is the relevance of roughness metrology from pattern formation at the resist through subsequent processes, particularly electrical validity. A major drawback of the current LER/LWR metric (sigma) is its lack of relevance across multiple downstream processes, which affects material selection at various unit processes. In this work we present a comprehensive assessment of line edge and line width roughness at multiple lithographic transfer processes. To simulate the effect of roughness, a pattern was designed with periodic jogs on the edges of lines with varying amplitudes and frequencies. There are numerous methodologies proposed to analyze roughness, and in this work we apply them to programmed roughness structures to assess each technique's sensitivity. This work also aims to identify a relevant methodology to quantify roughness with relevance across downstream processes.
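For reference, the conventional LER/LWR metrics referred to above are usually reported as three standard deviations of the edge or width fluctuation; a minimal sketch, assuming edge positions have already been extracted from the images, is:

```python
# Hedged sketch of conventional roughness metrics: LER as 3*sigma of a single
# edge's deviation and LWR as 3*sigma of the local line width.
import numpy as np

def ler_3sigma(edge_positions):
    """edge_positions: 1-D array of one edge's position sampled along the line."""
    return 3.0 * np.std(edge_positions - np.mean(edge_positions))

def lwr_3sigma(left_edge, right_edge):
    """Roughness of the local width between the two edges of a line."""
    width = np.asarray(right_edge) - np.asarray(left_edge)
    return 3.0 * np.std(width - np.mean(width))
```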
Optical Inspection In Hostile Industrial Environments: Single-Sensor VS. Imaging Methods
NASA Astrophysics Data System (ADS)
Cielo, P.; Dufour, M.; Sokalski, A.
1988-11-01
On-line and unsupervised industrial inspection for quality control and process monitoring is increasingly required in the modern automated factory. Optical techniques are particularly well suited to industrial inspection in hostile environments because of their noncontact nature, fast response time and imaging capabilities. Optical sensors can be used for remote inspection of high temperature products or otherwise inaccessible parts, provided they are in a line-of-sight relation with the sensor. Moreover, optical sensors are much easier to adapt to a variety of part shapes, position or orientation and conveyor speeds as compared to contact-based sensors. This is an important requirement in a flexible automation environment. A number of choices are possible in the design of optical inspection systems. General-purpose two-dimensional (2-D) or three-dimensional (3-D) imaging techniques have advanced very rapidly in the last years thanks to a substantial research effort as well as to the availability of increasingly powerful and affordable hardware and software. Imaging can be realized using 2-D arrays or simpler one-dimensional (1-D) line-array detectors. Alternatively, dedicated single-spot sensors require a smaller amount of data processing and often lead to robust sensors which are particularly appropriate to on-line operation in hostile industrial environments. Many specialists now feel that dedicated sensors or clusters of sensors are often more effective for specific industrial automation and control tasks, at least in the short run. This paper will discuss optomechanical and electro-optical choices with reference to the design of a number of on-line inspection sensors which have been recently developed at our institute. Case studies will include real-time surface roughness evaluation on polymer cables extruded at high speed, surface characterization of hot-rolled or galvanized-steel sheets, temperature evaluation and pinhole detection in aluminum foil, multi-wavelength polymer sheet thickness gauging and thermographic imaging, 3-D lumber profiling, line-array inspection of textiles and glassware, as well as on-line optical inspection for the control of automated arc welding. In each case the design choices between single or multiple-element detectors, mechanical vs. electronic scanning, laser vs. incoherent illumination, etc. will be discussed in terms of industrial constraints such as speed requirements, protection against the environment or reliability of the sensor output.
NASA Astrophysics Data System (ADS)
Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun
2018-05-01
In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high precision automated inspection and classification system for polarizing film, which is used for recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, the median filter is used to remove the impulse noise in the defect image of polarizing film. The random noise in the background is smoothed by the improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by Canny edge detector, and then the complete defect region is obtained by two-stage morphology processing. For defect classification, the feature values, including maximum gray level, eccentricity, the contrast, and homogeneity of gray level co-occurrence matrix (GLCM) extracted from the images, are used as the input of the radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifier, 96 defect images are then used as training samples, and 84 defect images are used as testing samples to validate the classification effect. The result shows that the classification accuracy by using RBFNN is 98.9%. Thus, our proposed system can be used by manufacturing companies for a higher yield rate and lower cost. The processing time of one single image is 2.57 seconds, thus meeting the practical application requirement of an industrial production line.
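A condensed sketch of the enhancement and feature-extraction chain described above is shown below (median filtering, Butterworth high-pass sharpening in the frequency domain, Canny plus morphology, and GLCM contrast/homogeneity); the anisotropic diffusion step is omitted and all parameter values are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of the polarizing-film defect preprocessing and features.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def butterworth_highpass(shape, cutoff=30, order=2):
    rows, cols = shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return 1.0 / (1.0 + (cutoff / np.maximum(D, 1e-6)) ** (2 * order))

def enhance_and_describe(gray):
    """gray: 8-bit grayscale defect image."""
    den = cv2.medianBlur(gray, 5)                       # impulse-noise removal
    F = np.fft.fftshift(np.fft.fft2(den.astype(float)))
    sharp = np.real(np.fft.ifft2(np.fft.ifftshift(F * butterworth_highpass(den.shape))))
    sharp = cv2.normalize(sharp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(sharp, 50, 150)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    region = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)   # close the defect outline
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True)
    return {
        "max_gray": int(gray.max()),
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
        "defect_mask": region,
    }
```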
Chhatbar, Pratik Y.; Kara, Prakash
2013-01-01
Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
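The Sobel-plus-Radon idea can be sketched as follows; the iterative refinement around the best angle that gives the method its speed is only hinted at, and the function names and angle grid are illustrative assumptions.

```python
# Hedged sketch: Sobel-filter the space-time (line-scan) image, take its Radon
# transform, and pick the angle whose projection has the highest variance.
import numpy as np
from skimage.filters import sobel
from skimage.transform import radon

def streak_angle_deg(spacetime_img, coarse_step=1.0):
    img = sobel(spacetime_img.astype(float))
    img -= img.mean()                      # remove DC so variance reflects the streaks
    angles = np.arange(0.0, 180.0, coarse_step)
    sinogram = radon(img, theta=angles, circle=False)
    return angles[np.argmax(np.var(sinogram, axis=0))]

# Velocity then follows from the angle together with the pixel size along the
# scan line and the line-scan period (distance per time).
```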
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is used to process the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed and applied to the depth images obtained by the Kinect sensor. The results show that the noise-removal effect is improved compared with standard bilateral filtering. In an off-line stage, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed of the depth image and the quality of the point cloud that is built.
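For context, the baseline that the paper improves upon, a standard bilateral filter applied to a Kinect-style depth frame, can be sketched as below; the proposed local bilateral filtering (LBF) refinement itself is not reproduced here, and the parameters are illustrative.

```python
# Hedged sketch: standard bilateral filtering of a depth image as the baseline.
import cv2
import numpy as np

def denoise_depth(depth_u16, d=5, sigma_color=30.0, sigma_space=5.0):
    """depth_u16: HxW uint16 depth image in millimetres."""
    depth = depth_u16.astype(np.float32)
    # cv2.bilateralFilter accepts single-channel 8-bit or 32-bit float input.
    return cv2.bilateralFilter(depth, d, sigma_color, sigma_space)

# A point cloud can then be built by back-projecting each (u, v, z) pixel
# through the depth camera intrinsics.
```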
NASA Astrophysics Data System (ADS)
Kim, Myoung-Soo; Kim, Hyoung-Gi; Kim, Hyeong-Soo; Baik, Ki-Ho; Johnson, Donald W.; Cernigliaro, George J.; Minsek, David W.
1999-06-01
Thin film imaging processes such as top surface imaging (TSI) are candidates for sub-150 nm lithography at the 193 nm exposure wavelength. Single-component, non-chemically amplified, positive tone TSI photoresists based on phenolic polymers demonstrate good post-etch contrast, resolution, and minimal line edge roughness, in addition to being the most straightforward thin film imaging approach. In this approach, ArF laser exposure results directly in radiation-induced crosslinking of the phenolic polymer, followed by formation of a thin etch mask at the surface of the unexposed regions by vapor-phase silylation, followed by reactive ion etching of the non-silylated regions. However, single-component resists based on poly(para-hydroxystyrene) (PHS), such as MicroChem's Nano MX-P7, suffer from slow photospeed as well as low silylation contrast, which can cause reproducibility and line-edge-roughness problems. We report that selected aromatic substitution of the poly(para-hydroxystyrene) polymer can increase the photospeed by up to a factor of four relative to unsubstituted PHS. In this paper we report the synthesis and lithographic evaluation of four experimental TSI photoresists. MX-EX-1, MX-EX-2, MX-EX-3 and MX-EX-4 are non-chemically amplified resists based on aromatic substitution of PHS with chloro- and hydroxymethyl groups. We report optimized lithographic processing conditions, line edge roughness, and silylation contrast, and compare the results to the parent PHS photoresist.
EROS Data Center Landsat digital enhancement techniques and imagery availability
Rohde, Wayne G.; Lo, Jinn Kai; Pohl, Russell A.
1978-01-01
The US Geological Survey's EROS Data Center (EDC) is experimenting with the production of digitally enhanced Landsat imagery. Advanced digital image processing techniques are used to perform geometric and radiometric corrections and to perform contrast and edge enhancements. The enhanced image product is produced from digitally preprocessed Landsat computer compatible tapes (CCTs) on a laser beam film recording system. Landsat CCT data have several geometric distortions which are corrected when NASA produces the standard film products. When producing film images from CCT's, geometric correction of the data is required. The EDC Digital Image Enhancement System (EDIES) compensates for geometric distortions introduced by Earth's rotation, variable line length, non-uniform mirror scan velocity, and detector misregistration. Radiometric anomalies such as bad data lines and striping are common to many Landsat film products and are also in the CCT data. Bad data lines or line segments with more than 150 contiguous bad pixels are corrected by inserting data from the previous line in place of the bad data. Striping, caused by variations in detector gain and offset, is removed with a destriping algorithm applied after digitally enhancing the data. Image enhancement is performed by applying a linear contrast stretch and an edge enhancement algorithm. The linear contrast enhancement algorithm is designed to expand digitally the full range of useful data recorded on the CCT over the range of 256 digital counts. This minimizes the effect of atmospheric scattering and saturates the relative brightness of highly reflecting features such as clouds or snow. It is the intent that no meaningful terrain data are eliminated by the digital processing. The edge enhancement algorithm is designed to enhance boundaries between terrain features that exhibit subtle differences in brightness values along edges of features. After the digital data have been processed, data for each Landsat band are recorded on black-and-white film with a laser beam film recorder (LBR). The LBR corrects for aspect ratio distortions as the digital data are recorded on the recording film over a preselected density range. Positive transparencies of MSS bands 4, 5, and 7 produced by the LBR are used to make color composite transparencies. Color film positives are made photographically from first generation black-and-white products generated on the LBR.
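The two enhancement steps named above (a linear contrast stretch over the full 256-count range and an edge enhancement) can be sketched as follows; the percentile clipping and unsharp-mask form are illustrative assumptions, not the EDIES algorithms themselves.

```python
# Hedged sketch of a linear contrast stretch and a simple edge-enhancement pass.
import numpy as np
from scipy.ndimage import uniform_filter

def linear_stretch(band, lo_pct=1, hi_pct=99):
    """Expand the useful data range of one band over 0-255 digital counts."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band - lo) / (hi - lo + 1e-12) * 255.0, 0, 255).astype(np.uint8)

def edge_enhance(band, size=3, weight=1.0):
    """Unsharp-mask style boost of local brightness differences along edges."""
    blurred = uniform_filter(band.astype(float), size=size)
    return np.clip(band + weight * (band - blurred), 0, 255).astype(np.uint8)
```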
Hyperspectral imaging to identify salt-tolerant wheat lines
NASA Astrophysics Data System (ADS)
Moghimi, Ali; Yang, Ce; Miller, Marisa E.; Kianian, Shahryar; Marchetto, Peter
2017-05-01
In order to address the worldwide growing demand for food, agriculture is facing certain challenges and limitations. One of the important threats limiting crop productivity is salinity. Identifying salt-tolerant varieties is crucial to mitigate the negative effects of this abiotic stress in agricultural production systems. Traditional measurement methods for this stress, such as biomass retention, are labor intensive, environmentally influenced, and often poorly correlated to salinity stress alone. In this study, hyperspectral imaging, as a non-destructive and rapid method, was utilized to expedite the identification of the relatively most salt-tolerant line among four wheat lines: Triticum aestivum var. Kharchia, T. aestivum var. Chinese Spring, (Ae. columnaris) T. aestivum var. Chinese Spring, and (Ae. speltoides) T. aestivum var. Chinese Spring. To examine the possibility of early detection of a salt-tolerant line, image acquisition was started one day after stress induction and continued three, seven, and 12 days after adding salt. The simplex volume maximization (SiVM) method was deployed to detect superior wheat lines in response to salt stress. The analysis of images taken as soon as one day after salt induction revealed that Kharchia and (columnaris) Chinese Spring are the most tolerant wheat lines, while (speltoides) Chinese Spring was moderately susceptible and Chinese Spring was relatively susceptible to salt stress. These results were confirmed by biomass measurements performed several weeks later.
Phase correction system for automatic focusing of synthetic aperture radar
Eichel, Paul H.; Ghiglia, Dennis C.; Jakowatz, Jr., Charles V.
1990-01-01
A phase gradient autofocus system for use in synthetic aperture imaging accurately compensates for arbitrary phase errors in each imaged frame by locating highlighted areas and determining the phase disturbance or image spread associated with each of these highlight areas. An estimate of the image spread for each highlighted area in a line in the case of one dimensional processing or in a sector, in the case of two-dimensional processing, is determined. The phase error is determined using phase gradient processing. The phase error is then removed from the uncorrected image and the process is iteratively performed to substantially eliminate phase errors which can degrade the image.
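A textbook-style sketch of one phase-gradient autofocus iteration on a complex range-azimuth image is shown below; it follows the generic PGA recipe (centre the brightest scatterer per range line, window, estimate and integrate the phase-error gradient, correct) and is an illustration rather than the patented implementation.

```python
# Hedged sketch of one PGA iteration on a complex SAR image (range x azimuth).
import numpy as np

def pga_iteration(img, window=64):
    n_rng, n_az = img.shape
    centred = np.empty_like(img)
    for r in range(n_rng):
        peak = np.argmax(np.abs(img[r]))
        centred[r] = np.roll(img[r], n_az // 2 - peak)     # centre brightest scatterer
    win = np.zeros(n_az)
    win[n_az // 2 - window // 2: n_az // 2 + window // 2] = 1.0
    G = np.fft.ifft(centred * win, axis=1)                 # to azimuth phase-history domain
    # Phase-difference estimate of the error gradient, summed over range lines.
    dphi = np.angle(np.sum(G[:, 1:] * np.conj(G[:, :-1]), axis=0))
    phi = np.concatenate([[0.0], np.cumsum(dphi)])         # integrate the gradient
    t = np.arange(n_az)
    phi -= np.polyval(np.polyfit(t, phi, 1), t)            # remove linear trend (image shift)
    corrected = np.fft.fft(np.fft.ifft(img, axis=1) * np.exp(-1j * phi), axis=1)
    return corrected
```

In practice the window is shrunk and the iteration repeated until the residual phase error stops decreasing.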
NASA Astrophysics Data System (ADS)
Hayashi, Tatsuro; Zhou, Xiangrong; Chen, Huayue; Hara, Takeshi; Miyamoto, Kei; Kobayashi, Tatsunori; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi
2010-03-01
X-ray CT images have been widely used in clinical routine in recent years. CT images scanned by a modern CT scanner can show the details of various organs and tissues, which means various organs and tissues can be interpreted simultaneously on CT images. However, CT image interpretation requires a lot of time and energy, so support for interpreting CT images based on image-processing techniques is expected. The interpretation of spinal curvature is important for clinicians because spinal curvature is associated with various spinal disorders. We propose a quantification scheme for spinal curvature based on the center line of the spinal canal on CT images. The proposed scheme consists of four steps: (1) automated extraction of the skeletal region based on CT number thresholding; (2) automated extraction of the center line of the spinal canal; (3) generation of the median plane image of the spine, reformatted based on the spinal canal; and (4) quantification of the spinal curvature. The proposed scheme was applied to 10 cases and compared with the Cobb angle that is commonly used by clinicians. We found a high correlation (95% confidence interval for lumbar lordosis: 0.81-0.99) between the values obtained by the proposed (vector) method and the Cobb angle. The proposed method also provides reproducible results (inter- and intra-observer variability within 2°). These experimental results suggest that the proposed method is efficient for quantifying spinal curvature on CT images.
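A minimal sketch of a vector-based curvature measure of the kind suggested above is given below: the angle between tangent vectors taken at two levels of the extracted center line. The windowing and indexing choices are illustrative assumptions, not the authors' exact definition.

```python
# Hedged sketch: Cobb-like angle between tangents of the spinal-canal center line.
import numpy as np

def tangent(points, idx, window=5):
    """Unit tangent of the 3-D center line (n x 3 array) around index idx."""
    lo, hi = max(idx - window, 0), min(idx + window, len(points) - 1)
    v = points[hi] - points[lo]
    return v / np.linalg.norm(v)

def curvature_angle_deg(centerline, upper_idx, lower_idx):
    t1, t2 = tangent(centerline, upper_idx), tangent(centerline, lower_idx)
    cosang = np.clip(np.dot(t1, t2), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))
```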
High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images
NASA Astrophysics Data System (ADS)
Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
2006-10-01
Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are electronically stored by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and the lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.
The fast and accurate 3D-face scanning technology based on laser triangle sensors
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin
2013-08-01
A laser triangle scanning method and the structure of a 3D face measurement system are introduced. In the presented system, a line laser source was selected as the optical indicator so that one line is scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration: the lens parameters of the imaging part were calibrated with a machine vision method, and the triangulation structure parameters were calibrated with finely arranged parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage that scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot obtain a complete image of the laser line; in this system, two CCD image sensors were therefore set symmetrically on the two sides of the laser indicator, so that the structure in fact includes two laser triangulation measurement units. Another novel design is that three laser indicators were arranged in order to reduce the scanning time, since it is difficult for a person to keep still for a long time. The 3D data were calculated after scanning, and further data processing includes 3D coordinate refinement, mesh calculation and surface display. Experiments show that this system has a simple structure, high scanning speed and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
Line drawing extraction from gray level images by feature integration
NASA Astrophysics Data System (ADS)
Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.
1994-10-01
We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight-line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
Wood industrial application for quality control using image processing
NASA Astrophysics Data System (ADS)
Ferreira, M. J. O.; Neves, J. A. C.
1994-11-01
This paper describes an application of image processing for the furniture industry. It uses as input data images acquired directly from wood planks on which defects were previously marked by an operator. A set of image processing algorithms separates and codes each defect and detects a polygonal approximation of the line representing it. For this purpose we developed a pattern classification algorithm and a new technique for segmenting defects by carving the convex hull of the binary shape representing each isolated defect.
Seamless contiguity method for parallel segmentation of remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling
2015-12-01
Seamless contiguity is the key technology for parallel segmentation of remote sensing data of large volume: it effectively integrates the fragments produced by parallel processing into reasonable results for subsequent processes. Numerous methods have been reported in the literature for seamless contiguity, such as establishing buffers, merging area boundaries, and data sewing. We propose a new method which is also based on building buffers. The seamless contiguity process we adopt is based on two principles: ensuring the accuracy of the boundary and ensuring the correctness of the topology. First, the number of blocks is computed based on the data processing capacity; unlike approaches that establish a buffer on both sides of the block line, a buffer is established only on the right side and underside of the line. Each block of data is segmented separately to obtain the segmentation objects and their label values. Second, one block (called the master block) is chosen and stitched to its adjacent blocks (called slave blocks), and the remaining blocks are processed in sequence. Through this processing, the topological relationships and boundaries of the master block are guaranteed. Third, if the boundaries of master-block polygons intersect the buffer boundary and the boundaries of slave-block polygons intersect the block line, certain rules are adopted to merge them or make trade-offs. Fourth, the topology and boundaries in the buffer area are checked. Finally, a set of experiments was conducted and proves the feasibility of this method. This novel seamless contiguity algorithm provides an applicable and practical solution for efficient segmentation of massive remote sensing images.
Space Radar Image of Kilauea, Hawaii - Interferometry 1
1999-05-01
This X-band image of the volcano Kilauea was taken on October 4, 1994, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The area shown is about 9 kilometers by 13 kilometers (5.5 miles by 8 miles) and is centered at about 19.58 degrees north latitude and 155.55 degrees west longitude. This image and a similar image taken during the first flight of the radar instrument on April 13, 1994 were combined to produce the topographic information by means of an interferometric process. This is a process by which radar data acquired on different passes of the space shuttle is overlaid to obtain elevation information. Three additional images are provided showing an overlay of radar data with interferometric fringes; a three-dimensional image based on altitude lines; and, finally, a topographic view of the region. http://photojournal.jpl.nasa.gov/catalog/PIA01763
Real-time multiple-look synthetic aperture radar processor for spacecraft applications
NASA Technical Reports Server (NTRS)
Wu, C.; Tyree, V. C. (Inventor)
1981-01-01
A spaceborne synthetic aperture radar (SAR) with pipeline multiple-look data processing is described which makes use of excess azimuth bandwidth in radar echo signals to produce multiple-look images. Time-multiplexed single-look image lines from an azimuth correlator go through an energy analyzer which analyzes the mean energy in each separate look to determine the radar antenna electrical boresight, for use in generating the correct reference functions for the production of high quality SAR images. The multiplexed single-look image lines also go through a registration delay to produce multi-look images.
Quantification of fibre polymerization through Fourier space image analysis
Nekouzadeh, Ali; Genin, Guy M.
2011-01-01
Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
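A minimal sketch of the measurement, assuming the band limits and intensity normalisation below rather than the authors' calibrated scaling, integrates a band-pass region of the image's power spectral density:

```python
# Hedged sketch: total-fibre-length metric from a band-pass filtered, scaled PSD.
import numpy as np

def fibre_length_metric(img, r_lo=0.05, r_hi=0.25):
    """img: 2-D greyscale array; r_lo/r_hi: radial frequency band in cycles/pixel."""
    img = img.astype(float)
    img = (img - img.mean()) / (img.std() + 1e-12)     # correct for intensity changes
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    R = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    band = (R >= r_lo) & (R <= r_hi)                   # selects the fibre thickness range
    return psd[band].sum() / img.size
```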
NASA Astrophysics Data System (ADS)
Baroni, Travis C.; Griffin, Brendan J.; Browne, James R.; Lincoln, Frank J.
2000-01-01
Charge contrast images (CCI) of synthetic gibbsite obtained on an environmental scanning electron microscope give information on the crystallization process. Furthermore, X-ray mapping of the same grains shows that impurities are localized during the initial stages of growth and that the resulting composition images have features similar to those observed in CCI. This suggests a possible correlation between impurity distributions and the emission detected during CCI. X-ray line profiles, simulating the spatial distribution of impurities derived from the Monte Carlo program CASINO, have been compared with experimental line profiles and give an estimate of the localization. The model suggests that a main impurity, Ca, is depleted from the solution within approximately 3-4 μm of growth.
Kriete, A; Schäffer, R; Harms, H; Aus, H M
1987-06-01
Nuclei of the cells from the thyroid gland were analyzed in a transmission electron microscope by direct TV scanning and on-line image processing. The method uses the advantages of a visual-perception model to detect structures in noisy and low-contrast images. The features analyzed include area, a form factor and texture parameters from the second derivative stage. Three tumor-free thyroid tissues, three follicular adenomas, three follicular carcinomas and three papillary carcinomas were studied. The computer-aided cytophotometric method showed that the most significant differences were the statistics of the chromatin texture features of homogeneity and regularity. These findings document the possibility of an automated differentiation of tumors at the ultrastructural level.
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
The sequence measurement system of the IR camera
NASA Astrophysics Data System (ADS)
Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo
2011-08-01
Currently, IR cameras are broadly used in optic-electronic tracking, optic-electronic measuring, fire control and optic-electronic countermeasure fields, but the output timing (sequence) of most IR cameras applied in practice is complex, and the timing documents supplied by manufacturers are often not detailed. Because downstream continuous image transmission and image processing systems need the detailed timing of the IR camera, a sequence measurement system for IR cameras was designed and a detailed measurement procedure for the applied IR camera was carried out. FPGA programming combined with on-line observation using the SignalTap tool was applied, the precise timing of the IR camera's output signal was obtained, and a detailed timing document was supplied to the continuous image transmission system, the image processing system, etc. The sequence measurement system includes a CameraLink input interface, an LVDS input interface, the FPGA, a CameraLink output interface, etc., of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted; and because image processing and image memory cards usually use CameraLink as their input interface, the output of the measurement system was also designed as a CameraLink interface, so that the system performs interface conversion for some cameras while measuring their timing. Inside the FPGA, the sequence measurement program, the pixel clock modification, the SignalTap file configuration and the SignalTap on-line observation are integrated to realize precise measurement of the IR camera. The measurement program, written in Verilog and combined with SignalTap on-line observation, counts the number of lines in one frame and the number of pixels in one line, and measures the line offset and row offset of the image. The sequence measurement system accurately measured the timing of the camera applied in the project and supplied the detailed timing document, giving the concrete parameters of fval, lval, pixclk, line offset and row offset to the downstream image processing and image transmission systems. Experiments show that the sequence measurement system obtains precise measurement results and works stably, laying a foundation for the downstream systems.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.
2016-01-01
Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described a recurring problem in computer image processing: the detection of straight lines in digitized images, that is, detecting the presence of groups of collinear or almost collinear figure points. The problem can clearly be solved to any desired degree of accuracy by testing the lines formed by all pairs of points; however, the computation required for an image of n = N x M points is approximately proportional to n^2, i.e. O(n^2), becoming prohibitive for large images or when the data processing cadence is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space; Hough chose the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods for solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are slow for large images that require a processing cadence of a few dozen milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma ray surveys employ large detector readout planes registering multitudes of cosmic ray interference events and sparse projections of science gamma ray event traces. The AdEPT science of interest is in the gamma ray events, and the problem is to detect and reject the much more voluminous cosmic ray projections so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing burdened by the CR-detection bottleneck. It introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck - the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel; this phenomenon is also the basis for a straightforward CR simulator, allowing the CR-Algorithm performance to be tested. Parallel processing of the readout image's 2(N+M) - 4 peripheral voxels detects all CRs, resulting in O(n) computational complexity. The algorithm's near real-time performance makes AdEPT-class spaceflight instruments feasible.
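A serial sketch of the peripheral-voxel idea is given below: because a CR track pierces the pressure vessel, it must touch the border of the N x M readout image, so only tracks connected to the 2(N+M)-4 border pixels are flagged. The thresholding, connectivity, and use of a labelling pass are illustrative assumptions rather than the flight algorithm.

```python
# Hedged sketch: flag cosmic-ray tracks as the hit components touching the
# image border; interior-only tracks are kept as gamma-ray candidates.
import numpy as np
from scipy.ndimage import label

def flag_cosmic_rays(frame, threshold=0.0):
    """frame: 2-D readout image; returns a boolean mask of CR candidate tracks."""
    hits = frame > threshold
    labels, _ = label(hits)                       # 4-connected components of hit voxels
    border_labels = np.unique(np.concatenate([
        labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    border_labels = border_labels[border_labels != 0]
    return np.isin(labels, border_labels)         # tracks that reach the periphery

# Events whose tracks never reach the border are the sparse science candidates
# that would be telemetered to the ground.
```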
A fractal concentration area method for assigning a color palette for image representation
NASA Astrophysics Data System (ADS)
Cheng, Qiuming; Li, Qingmou
2002-05-01
Displaying the remotely sensed image with a proper color palette is the first task in any kind of image processing and pattern recognition in GIS and image processing environments. The purpose of displaying the image should be not only to provide a visual representation of the variance of the image, although this has been the primary objective of most conventional methods, but also the color palette should reflect real-world features on the ground which must be the primary objective of employing remotely sensed data. Although most conventional methods focus only on the first purpose of image representation, the concentration-area (C-A plot) fractal method proposed in this paper aims to meet both purposes on the basis of pixel values and pixel value frequency distribution as well as spatial and geometrical properties of image patterns. The C-A method can be used to establish power-law relationships between the area A(≥s) with the pixel values greater than s and the pixel value s itself after plotting these values on log-log paper. A number of straight-line segments can be manually or automatically fitted to the points on the log-log paper, each representing a power-law relationship between the area A and the cutoff pixel value for s in a particular range. These straight-line segments can yield a group of cutoff values on the basis of which the image can be classified into discrete classes or zones. These zones usually correspond to the real-world features on the ground. A Windows program has been prepared in ActiveX format for implementing the C-A method and integrating it into other GIS and image processing systems. A case study of Landsat TM band 5 has been used to demonstrate the application of the method and the flexibility of the computer program.
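A minimal sketch of the C-A idea under stated assumptions: a synthetic image stands in for the Landsat TM data, and a single least-squares line is fitted in log-log space rather than the multiple manually or automatically fitted segments whose breakpoints become the class cutoffs.

```python
import numpy as np

def concentration_area(image, n_thresholds=50):
    """Return cutoff values s and the area A(>= s), i.e. the number of
    pixels whose value is greater than or equal to s, over a range of s."""
    vals = image.ravel()
    s = np.linspace(vals.min(), vals.max(), n_thresholds, endpoint=False)
    area = np.array([(vals >= t).sum() for t in s])
    return s, area

# Synthetic image with a lognormal-like pixel-value distribution.
rng = np.random.default_rng(0)
img = rng.lognormal(mean=3.0, sigma=0.6, size=(256, 256))

s, area = concentration_area(img)
keep = (s > 0) & (area > 0)
log_s, log_a = np.log10(s[keep]), np.log10(area[keep])

# Fit one straight segment on log-log paper; in the full C-A method several
# segments are fitted and their intersections give the classification cutoffs.
slope, intercept = np.polyfit(log_s, log_a, 1)
print(f"power-law exponent ~ {slope:.2f}")
```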
High Resolution X-Ray Spectroscopy and Imaging of Supernova Remnant N132D
NASA Technical Reports Server (NTRS)
Behar, Ehud; Rasmussen, Andrew; Griffiths, R. Gareth; Dennerl, Konrad; Audard, Marc; Aschenbach, Bernd
2000-01-01
The observation of the supernova remnant N132D by the scientific instruments on board the XMM-Newton satellite is presented. The X-rays from N132D are dispersed into a detailed line-rich spectrum using the Reflection Grating Spectrometers. Spectral lines of C, N, O, Ne, Mg, Si, S, and Fe are identified. Images of the remnant, in narrow wavelength bands, produced by the European Photon Imaging Cameras reveal a complex spatial structure of the ionic distribution. While K-shell Fe seems to originate near the centre, all of the other ions are observed along the shell. An emission excess of O(6+) over O(7+) is detected on the northeastern edge of the remnant. This can be a sign of hot ionising conditions, or it can reflect a relatively cool region. Spectral fitting of the CCD spectrum suggests high temperatures in this region, but a detailed analysis of the atomic processes involved in producing the O(6+) spectral lines leads to the conclusion that the intensities of these lines alone cannot provide a conclusive distinction between the two scenarios.
The NST observation of a small loop eruption in He I D3 line on 2016 May 30
NASA Astrophysics Data System (ADS)
Kim, Yeon-Han; Xu, Yan; Bong, Su-Chan; Lim, Eunkyung; Yang, Heesu; Park, Young-Deuk; Yurchyshyn, Vasyl B.; Ahn, Kwangsu; Goode, Philip R.
2017-08-01
Since the He I D3 line has a unique response to a flare impact on the low solar atmosphere, it can be a powerful diagnostic tool for energy transport processes. In order to obtain comprehensive data sets for studying solar flare activities in the D3 spectral line, we performed observations for several days using the 1.6 m New Solar Telescope of Big Bear Solar Observatory (BBSO) in 2015 and 2016, equipped with the He I D3 filter, the photospheric broadband filter, and the Near-IR Imaging Spectrograph (NIRIS). On 2016 May 30, we observed a small loop eruption in He I D3 images associated with a B-class brightening, which occurred around 17:10 UT in a small active region, together with dynamic variations of photospheric features in G-band images. Accordingly, the cause of the loop eruption may be magnetic reconnection driven by photospheric plasma motions. In this presentation, we give the observation results and their interpretation.
A machine vision system for micro-EDM based on linux
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Wansheng; Li, Gang; Li, Zhiyong; Zhang, Yong
2006-11-01
Due to the high precision and good surface quality that it can give, Electrical Discharge Machining (EDM) is potentially an important process for the fabrication of micro-tools and micro-components. However, a number of issues remain unsolved before micro-EDM becomes a reliable process with repeatable results. To deal with the difficulties of on-line fabrication of micro electrodes and tool wear compensation, a micro-EDM machine vision system is developed with a Charge Coupled Device (CCD) camera, with an optical resolution of 1.61μm and an overall magnification of 113~729. Based on the Linux operating system, an image capturing program is developed with the V4L2 API, and an image processing program is developed using OpenCV. The contour of micro electrodes can be extracted by means of the Canny edge detector. Through the system calibration, the micro electrode diameter can be measured on-line. Experiments have been carried out to verify its performance, and the sources of measurement error are also analyzed.
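A minimal sketch of the contour-extraction and on-line measurement step, assuming OpenCV 4.x in Python rather than the paper's V4L2/OpenCV implementation. The file name is hypothetical; the 1.61 um/pixel calibration factor is taken from the optical resolution quoted above and assumes the electrode appears as a roughly vertical rod in the view.

```python
import cv2
import numpy as np

UM_PER_PIXEL = 1.61  # calibration factor, from the optical resolution quoted above

def electrode_diameter(gray):
    """Estimate the electrode diameter from a grayscale side-view image."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise before edge detection
    edges = cv2.Canny(blurred, 50, 150)                  # Canny edge map of the electrode contour
    cols = np.flatnonzero(edges.any(axis=0))             # columns containing edge pixels
    if cols.size < 2:
        return None
    width_px = cols.max() - cols.min()                   # distance between outermost edges
    return width_px * UM_PER_PIXEL

img = cv2.imread("electrode.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
if img is not None:
    print("diameter (um):", electrode_diameter(img))
```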
Towards a Fail-Safe Air Force Culture: Creating a Resilient Future While Avoiding Past Mistakes
2011-02-16
for either preventing catastrophic failures or in the event they occur. The Air Force Safety process often uses the "Swiss Cheese" model to...evaluate accidents. The image of holes in the protective cheese layers (proactive and reactive measures) lining up in such a way as to allow an accident is...cheese. More importantly, however, a HRO's focus is on "the process of the slices lining up as each moment where one hole aligns with another
Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness
NASA Astrophysics Data System (ADS)
Singh, Preetpal
Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern hemisphere of Canada experiences extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers tend to offer unique opportunities in terms of wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build and deem an ice road safe. A crucial factor in calculating the load bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders often use ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure ice thickness on lakes had not previously been developed. Machine vision and image processing techniques have successfully been used in manufacturing to detect equipment failure and identify defective products at the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high resolution imagery for processing with the MATLAB Image Processing toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas on fully automating the imaging and processing procedure are presented to conclude this research work.
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information on an image sensor, which is a method often referred to as Imaging Spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features on the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 line/s. This opens up the possibility to maintain high production speed and still measure with good resolution.
Relative Pose Estimation Using Image Feature Triplets
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Rottensteiner, F.; Heipke, C.
2015-03-01
A fully automated reconstruction of the trajectory of image sequences using point correspondences is turning into a routine practice. However, there are cases in which point features are hardly detectable, cannot be localized in a stable distribution, and consequently lead to an insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of the feature integration upon the relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one, and then quantifying the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. In order to be able to also work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
In-situ quality monitoring during laser brazing
NASA Astrophysics Data System (ADS)
Ungers, Michael; Fecker, Daniel; Frank, Sascha; Donst, Dmitri; Märgner, Volker; Abels, Peter; Kaierle, Stefan
Laser brazing of zinc coated steel is a widely established manufacturing process in the automotive sector, where high quality requirements must be fulfilled. The strength, impermeability and surface appearance of the joint are particularly important for judging its quality. The development of an on-line quality control system is highly desired by the industry. This paper presents recent work on the development of such a system, which consists of two cameras operating in different spectral ranges. For the evaluation of the system, seam imperfections are created artificially during experiments. Finally, image processing algorithms for monitoring process parameters based on the captured images are presented.
Ethnomathematics elements in Batik Bali using backpropagation method
NASA Astrophysics Data System (ADS)
Lestari, Mei; Irawan, Ari; Rahayu, Wanti; Wayan Parwati, Ni
2018-05-01
Batik is one of the traditional arts that has been recognized by UNESCO as part of Indonesia's cultural heritage. Batik has many varieties and motifs; each motif has its own uniqueness yet appears similar to the others, which makes it difficult to identify. This study aims to develop an application that can identify typical batik Bali with ethnomathematics elements in it. Ethnomathematics is a field of study that shows the relation between culture and mathematical concepts. Ethnomathematics in batik Bali appears mainly as geometrical concepts rooted in strong Balinese cultural elements. The identification process uses the backpropagation method. The steps of the backpropagation method are image processing (including scaling and thresholding of the image) and then feeding the processed image into an artificial neural network. This study resulted in accurate identification of batik Bali that has ethnomathematics elements in it.
Welding studs detection based on line structured light
NASA Astrophysics Data System (ADS)
Geng, Lei; Wang, Jia; Wang, Wen; Xiao, Zhitao
2018-01-01
The quality of welding studs is significant for the installation and localization of car components in the automobile general assembly process. A welding stud detection method based on line structured light is proposed. Firstly, an adaptive threshold is designed to calculate the binary images. Then, the light stripes of the image are extracted after skeleton line extraction and morphological filtering. The direction vector of the main light stripe is calculated using the length of the light stripe. Finally, the gray projections along the orientation of the main light stripe and the vertical orientation of the main light stripe are computed to obtain curves of gray projection, which are used to detect the studs. Experimental results demonstrate that the error rate of the proposed method is lower than 0.1%, and the method is applicable to automobile manufacturing.
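A minimal sketch of the processing chain outlined above (adaptive thresholding, skeletonization, and gray projections along and across the main stripe), using OpenCV and scikit-image as stand-ins for the authors' implementation. The principal stripe direction here is estimated from the skeleton's second moments rather than the stripe-length rule in the paper, and the file name is hypothetical.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def stripe_projections(gray):
    """Binarize a structured-light image, thin the stripe to a skeleton,
    and return gray projections along and across the main stripe direction."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, -5)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    skel = skeletonize(binary > 0)

    ys, xs = np.nonzero(skel)
    if xs.size == 0:
        return None
    # Principal direction of the skeleton points via PCA of their coordinates.
    coords = np.column_stack([xs - xs.mean(), ys - ys.mean()])
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    direction = vt[0]                                   # unit vector along the main stripe

    # Project skeleton pixels onto the stripe axis and its normal, then
    # histogram the gray values along both axes; peaks off the main stripe
    # indicate a stud.
    along = coords @ direction
    across = coords @ np.array([-direction[1], direction[0]])
    vals = gray[ys, xs].astype(float)
    proj_along, _ = np.histogram(along, bins=100, weights=vals)
    proj_across, _ = np.histogram(across, bins=100, weights=vals)
    return proj_along, proj_across

img = cv2.imread("stripe.png", cv2.IMREAD_GRAYSCALE)    # hypothetical structured-light image
if img is not None:
    print(stripe_projections(img))
```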
Wang, Bo; Bao, Jianwei; Wang, Shikui; Wang, Houjun; Sheng, Qinghong
2017-01-01
Remote sensing images can provide us with tremendous quantities of large-scale information. Noise artifacts (stripes), however, make the images inappropriate for visualization and batch processing. An effective restoration method would make images ready for further analysis. In this paper, a new method is proposed to correct the stripes and bad abnormal pixels in charge-coupled device (CCD) linear array images. The method involves a line tracing method, which limits the location of the noise to a rectangular region, and corrects abnormal pixels with the Lagrange polynomial algorithm. The proposed detection and restoration method was applied to Gaofen-1 satellite (GF-1) images, and the performance of this method was evaluated by the omission ratio and false detection ratio, which reached 0.6% and 0%, respectively. This method saved 55.9% of the time compared with the traditional method. PMID:28441754
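A minimal illustration of repairing a flagged abnormal pixel with Lagrange polynomial interpolation from its valid neighbours along the same image line; SciPy's lagrange is used, and the line-tracing step that locates the stripe region is assumed to have been done already.

```python
import numpy as np
from scipy.interpolate import lagrange

def repair_pixel(line, col, n_neighbors=2):
    """Replace line[col] with the value of the Lagrange polynomial fitted
    through n_neighbors valid pixels on each side of the bad column."""
    left = np.arange(col - n_neighbors, col)
    right = np.arange(col + 1, col + 1 + n_neighbors)
    x = np.concatenate([left, right])
    poly = lagrange(x, line[x])          # degree 2*n_neighbors - 1 polynomial
    return poly(col)

row = np.array([10.0, 11.0, 12.0, 250.0, 14.0, 15.0, 16.0])  # 250 is a stripe artifact
row[3] = repair_pixel(row, 3)
print(row)   # the bad pixel is replaced by ~13
```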
Directional templates for real-time detection of coronal axis rotated faces
NASA Astrophysics Data System (ADS)
Perez, Claudio A.; Estevez, Pablo A.; Garate, Patricio
2004-10-01
Real-time face and iris detection on video images has gained renewed attention because of multiple possible applications in studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing and multimedia retrieval. In this paper, a study is presented on using directional templates for the detection of faces rotated about the coronal axis. The templates are built by extracting the directional image information from the regions of the eyes, nose and mouth. The face position is determined by computing a line integral using the templates over the face directional image. The line integral reaches a maximum when it coincides with the face position. An improvement in localization selectivity is shown through the increased value of the line integral computed with the directional template. In addition, improvements in the line integral value with respect to face size and face rotation angle were also found through the computation of the line integral using the directional template. Based on these results, the new templates should improve selectivity and hence provide the means to restrict computations to a smaller number of templates and restrict the region of search during the face and eye tracking procedure. The proposed method runs in real time, is completely non-invasive, and was applied with no background limitation under normal illumination conditions in an indoor environment.
Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions
NASA Astrophysics Data System (ADS)
Ogiela, M. R.; Bodzioch, S.
2011-06-01
This paper presents a new approach to gallbladder ultrasonic image processing and analysis towards automatic detection and interpretation of disease symptoms on processed US images. First, the paper presents a new heuristic method of filtering gallbladder contours from images. A major stage in this filtration is to segment and section off areas occupied by the said organ. The paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration, as well as on the analysis of line profile sections on tested organs. The second part concerns detecting the most important lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms used to analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affection. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardising, can be used as a technique for supporting the diagnostics of selected gallbladder disorders using the images of this organ.
Optimum ArFi laser bandwidth for 10nm node logic imaging performance
NASA Astrophysics Data System (ADS)
Alagna, Paolo; Zurita, Omar; Timoshkov, Vadim; Wong, Patrick; Rechtsteiner, Gregory; Baselmans, Jan; Mailfert, Julien
2015-03-01
Lithography process window (PW) and CD uniformity (CDU) requirements are being challenged with scaling across all device types. Aggressive PW and yield specifications put tight requirements on scanner performance, especially on focus budgets, resulting in complicated systems for focus control. In this study, an imec N10 Logic-type test vehicle was used to investigate the E95 bandwidth impact on six different Metal 1 Logic features. The imaging metrics that track the impact of light source E95 bandwidth on the performance of hot spots are: process window (PW), line width roughness (LWR), and local critical dimension uniformity (LCDU). In the first section of this study, the impact of increasing E95 bandwidth was investigated to observe the lithographic process control response of the specified logic features. In the second section, a preliminary assessment of the impact of lower E95 bandwidth was performed. The impact of lower E95 bandwidth on local intensity variability was monitored through the CDU of line end features and the LWR power spectral density (PSD) of line/space patterns. The investigation found that the imec N10 test vehicle (with OPC optimized for a standard E95 bandwidth of 300 fm) features exposed at 200 fm showed pattern-specific responses, suggesting areas of potential interest for further investigation.
Brouckaert, Davinia; De Meyer, Laurens; Vanbillemont, Brecht; Van Bockstal, Pieter-Jan; Lammens, Joris; Mortier, Séverine; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas
2018-04-03
Near-infrared chemical imaging (NIR-CI) is an emerging tool for process monitoring because it combines the chemical selectivity of vibrational spectroscopy with spatial information. Whereas traditional near-infrared spectroscopy is an attractive technique for water content determination and solid-state investigation of lyophilized products, chemical imaging opens up possibilities for assessing the homogeneity of these critical quality attributes (CQAs) throughout the entire product. In this contribution, we aim to evaluate NIR-CI as a process analytical technology (PAT) tool for at-line inspection of continuously freeze-dried pharmaceutical unit doses based on spin freezing. The chemical images of freeze-dried mannitol samples were resolved via multivariate curve resolution, allowing us to visualize the distribution of mannitol solid forms throughout the entire cake. Second, a mannitol-sucrose formulation was lyophilized with variable drying times for inducing changes in water content. Analyzing the corresponding chemical images via principal component analysis, vial-to-vial variations as well as within-vial inhomogeneity in water content could be detected. Furthermore, a partial least-squares regression model was constructed for quantifying the water content in each pixel of the chemical images. It was hence concluded that NIR-CI is inherently a most promising PAT tool for continuously monitoring freeze-dried samples. Although some practicalities are still to be solved, this analytical technique could be applied in-line for CQA evaluation and for detecting the drying end point.
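A minimal sketch of the pixel-wise PLS regression step described above, under stated assumptions: synthetic spectra stand in for the NIR-CI hypercube, scikit-learn's PLSRegression is used with two latent variables, and the calibration reference values are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Hypothetical calibration set: NIR spectra (rows) with known water content.
n_cal, n_wavelengths = 40, 120
water = rng.uniform(0.5, 5.0, n_cal)                     # % water, reference values
basis = rng.normal(size=n_wavelengths)                    # water-related spectral signature
X_cal = water[:, None] * basis[None, :] + rng.normal(scale=0.05, size=(n_cal, n_wavelengths))

pls = PLSRegression(n_components=2)
pls.fit(X_cal, water)

# A chemical image is a (rows, cols, wavelengths) hypercube; predict per pixel.
image = rng.normal(scale=0.05, size=(32, 32, n_wavelengths)) + 2.0 * basis
pixels = image.reshape(-1, n_wavelengths)
water_map = pls.predict(pixels).reshape(32, 32)           # water content for every pixel
print(water_map.mean())                                    # ~2 for this synthetic cube
```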
NASA Technical Reports Server (NTRS)
Baily, N. A.
1975-01-01
A light amplifier for large flat screen fluoroscopy was investigated which will decrease both its size and weight. The work on organ contouring was extended to yield volumes. This is a simple extension since the fluoroscopic image contains density (gray scale) information which can be translated into tissue thickness and integrated, yielding accurate volume data in an on-line situation. A number of devices were developed for analog image processing of video signals, operating on-line in real time, and with simple selection mechanisms. The results show that this approach is feasible and produces an improvement in image quality which should make diagnostic error significantly lower. These are all low cost devices, small and light in weight, thereby making them usable in a space environment, on the Ames centrifuge, and in a typical clinical situation.
Parallel Monte Carlo Search for Hough Transform
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.
2017-10-01
We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection, for example, into one of optimization of the peak in a vote counting process for cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists of an evaluation of the use of a variation of the Radon Transform as a form of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
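A minimal sketch of line detection via the Radon transform, which the paper evaluates as a noise-robust alternative to the Hough accumulator. It uses scikit-image's radon on a synthetic noisy binary image; the angle grid and image are arbitrary, and this is not the paper's parallel implementation.

```python
import numpy as np
from skimage.transform import radon

# Binary image with one line plus salt noise.
rng = np.random.default_rng(2)
img = np.zeros((128, 128))
rr = np.arange(20, 108)
img[rr, (0.7 * rr + 10).astype(int)] = 1.0                          # the line to detect
img[rng.integers(0, 128, 200), rng.integers(0, 128, 200)] = 1.0     # noise hits

# The Radon transform integrates the image along parallel rays; a straight
# line shows up as a sharp peak in the (projection position, angle) sinogram.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)
pos, ang = np.unravel_index(sinogram.argmax(), sinogram.shape)
print("peak angle:", theta[ang], "deg; peak strength:", sinogram.max())
```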
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-containing dark room, which includes an imaging tank, motorized rotating bearing and digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capturing and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following the pre-processing binarization of digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator); Latty, R. S.; Dean, E.; Knowlton, D. J.
1980-01-01
Separate holograms of horizontally (HH) and vertically (HV) polarized responses obtained by the APQ-102 side-looking radar were processed through an optical correlator, and the resulting image was recorded on positive film from which black and white negative and positive prints were made. Visual comparison of the HH and HV images reveals a distinct dark band in the imagery which covers about 30% of the radar strip. Preliminary evaluation of the flight line 1 data indicates that various features on the HH and HV images seem to have different response levels. The amount of sidelap due to the look angle between flight lines 1 and 2 is negligible. NASA mission #425 to obtain flightlines of NS-001 MSS data and supporting aerial photography was successfully flown. Flight line 3 data are of very good quality and virtually cloud-free. Results of data analysis for selection of test fields and for evaluation of waveband combination and spatial resolution are presented.
Dynamic effects of restoring footpoint symmetry on closed magnetic field lines
NASA Astrophysics Data System (ADS)
Reistad, J. P.; Østgaard, N.; Tenfjord, P.; Laundal, K. M.; Snekvik, K.; Haaland, S.; Milan, S. E.; Oksavik, K.; Frey, H. U.; Grocott, A.
2016-05-01
Here we present an event where simultaneous global imaging of the aurora from both hemispheres reveals a large longitudinal shift of the nightside aurora of about 3 h, being the largest relative shift reported on from conjugate auroral imaging. This is interpreted as evidence of closed field lines having very asymmetric footpoints associated with the persistent positive y component of the interplanetary magnetic field before and during the event. At the same time, the Super Dual Auroral Radar Network observes the ionospheric nightside convection throat region in both hemispheres. The radar data indicate faster convection toward the dayside in the dusk cell in the Southern Hemisphere compared to its conjugate region. We interpret this as a signature of a process acting to restore symmetry of the displaced closed magnetic field lines resulting in flux tubes moving faster along the banana cell than the conjugate orange cell. The event is analyzed with emphasis on Birkeland currents (BC) associated with this restoring process, as recently described by Tenfjord et al. (2015). Using data from the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) during the same conditions as the presented event, the large-scale BC pattern associated with the event is presented. It shows the expected influence of the process of restoring symmetry on BCs. We therefore suggest that these observations should be recognized as being a result of the dynamic effects of restoring footpoint symmetry on closed field lines in the nightside.
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
Guy, Kristy K.
2015-11-09
This Data Series Report includes open-ocean shorelines, back-island shorelines, back-island shoreline points, sand polygons, and sand lines for the undeveloped areas of New Jersey barrier islands. These data were extracted from orthoimagery (aerial photography) taken between March 9, 1991, and July 30, 2013. The images used were 0.3–1-meter (m)-resolution U.S. Geological Survey Digital Orthophoto Quarter Quads (DOQQ), U.S. Department of Agriculture National Agriculture Imagery Program (NAIP) images, National Oceanic and Atmospheric Administration images, and New Jersey Geographic Information Network images. The back-island shorelines were hand-digitized at the intersects of the apparent back-island shoreline and transects spaced at 20-m intervals. The open-ocean shorelines were hand-digitized at the approximate still-water level, such as tide level, which was fit through the average position of waves and swash apparent on the beach. Hand-digitizing was done at a scale of approximately 1:2,000. The sand polygons were derived by an image-processing unsupervised classification technique that separates images into classes. The classes were then visually categorized as either sand or not sand. Sand lines were taken from the sand polygons. Also included in this report are 20-m-spaced transect lines and the transect base lines.
High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform
Chan, Kenny K. H.; Tang, Shuo
2010-01-01
The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5 dB concurrently with a 30-fold decrease in processing time compared to the fast Fourier transform with cubic spline interpolation method. NFFT can also improve local signal to noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
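A minimal sketch contrasting direct evaluation of the non-uniform discrete Fourier transform with the conventional resample-then-FFT approach, on a synthetic fringe sampled non-uniformly in wavenumber. The NFFT of the paper is the fast approximation of this direct transform (a library such as FINUFFT would be used in practice); the wavelengths, depth, and array sizes are illustrative.

```python
import numpy as np

# Spectrometer pixels are uniform in wavelength, hence non-uniform in wavenumber k.
n = 1024
wavelength = np.linspace(800e-9, 880e-9, n)
k = 2 * np.pi / wavelength
k_norm = (k - k.min()) / (k.max() - k.min())           # non-uniform sample positions in [0, 1]

depth_px = 200.0                                        # reflector "depth" in index units
spectrum = np.cos(2 * np.pi * depth_px * k_norm)        # interference fringe in k-space

# Direct non-uniform DFT (O(N^2), exact): A(z) = sum_j S(k_j) exp(-2*pi*i*z*k_j)
z = np.arange(n // 2)
ndft = np.abs(np.exp(-2j * np.pi * np.outer(z, k_norm)) @ spectrum)

# Conventional approach: interpolate onto a uniform k grid, then FFT.
order = np.argsort(k_norm)
k_uniform = np.linspace(0.0, 1.0, n)
resampled = np.interp(k_uniform, k_norm[order], spectrum[order])
fft = np.abs(np.fft.fft(resampled))[: n // 2]

print("NDFT peak at", ndft.argmax(), "| resample+FFT peak at", fft.argmax())
```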
NASA Astrophysics Data System (ADS)
Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping
2017-12-01
In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve this, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can improve the positioning precision of the manipulators and bolts significantly. The algorithm performs the following three steps. Firstly, the target points are marked respectively in the right and left views, and the system then judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line, a sequence of candidate regions containing matching points is generated from the neighborhood of the epipolar line, and the optimal matching region is determined by calculating the correlation between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching region. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision system satisfies the requirements of dismounting and assembling the drop switch.
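A minimal sketch of the fine-matching step under stated assumptions: the images are taken to be rectified, so the epipolar line of a left-image point is simply the same row in the right image, and OpenCV's normalized cross-correlation is used as the correlation measure. The file names and the example point are hypothetical.

```python
import cv2
import numpy as np

def match_along_epipolar(left, right, pt, half=15, band=5):
    """Given a target point (x, y) in the left image, search a narrow band
    around the corresponding epipolar line (here: the same row, assuming
    rectified images) in the right image with normalized cross-correlation."""
    x, y = pt
    template = left[y - half:y + half + 1, x - half:x + half + 1]
    strip = right[y - half - band:y + half + band + 1, :]      # band around the epipolar line
    score = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    # Convert the best match back to full-image coordinates of the point centre.
    mx = max_loc[0] + half
    my = max_loc[1] + half + (y - half - band)
    return (mx, my), max_val

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is not None and right is not None:
    print(match_along_epipolar(left, right, (320, 240)))
```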
Incorporating the APS Catalog of the POSS I and Image Archive in ADS
NASA Technical Reports Server (NTRS)
Humphreys, Roberta M.
1998-01-01
The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds. The FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 Gb of data. A set of web page query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. The URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers with MIX5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
NASA Technical Reports Server (NTRS)
Szepesi, Z.
1978-01-01
The fabrication process and transfer characteristics for solid state radiographic image transducers (radiographic amplifier screens) are described. These screens are for use in realtime nondestructive evaluation procedures that require large format radiographic images with contrast and resolution capabilities unavailable with conventional fluoroscopic screens. The screens are suitable for in-motion, on-line radiographic inspection by means of closed circuit television. Experimental effort was made to improve image quality and response to low energy (5 kV and up) X-rays.
Counting neutrons with a commercial S-CMOS camera
NASA Astrophysics Data System (ADS)
Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega
2018-01-01
It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, together with off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We will report on this ongoing work, which is a result of the collaboration between ESS Bilbao and the ILL. The main progress to be reported is situated at the level of the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, it is necessary that all the processing happens on-line, to avoid the accumulation of large amounts of image data to be analyzed off-line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow like that from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather important, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with, and results will be shown for different setups improving upon the light recuperation on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be at the same time efficient in their recognition of neutron signals and in their rejection of noise signals (internal and external to the camera), but also have to be simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we will give an overview of the part of the road that has already been walked.
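A minimal sketch of the per-frame recognition idea, assuming a simple threshold-and-label scheme: subtract a dark frame, threshold, and count connected bright blobs as candidate neutron flashes. The threshold and minimum blob size are illustrative, SciPy labeling is used, and the streaming FPGA implementation described above would of course work on pixels as they arrive rather than on stored frames.

```python
import numpy as np
from scipy import ndimage

def count_flashes(frame, dark, threshold=12, min_pixels=2):
    """Count candidate scintillation flashes in one camera frame.
    A flash is a connected group of pixels exceeding the dark frame
    by more than `threshold` counts."""
    signal = frame.astype(float) - dark.astype(float)
    mask = signal > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(sizes >= min_pixels))    # reject isolated hot pixels as noise

rng = np.random.default_rng(3)
dark = rng.normal(100, 2, (256, 256))
frame = dark + rng.normal(0, 2, (256, 256))
frame[40:43, 60:63] += 30                      # one synthetic neutron flash
print(count_flashes(frame, dark))              # expected: 1
```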
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Lorre, J. J.; Lynn, D. J.; Benton, W. D.
1976-01-01
Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
NASA Astrophysics Data System (ADS)
Sugata, Keiichi; Osanai, Osamu; Kawada, Hiromitsu
2012-02-01
One of the major roles of the skin microcirculation is to supply oxygen and nutrition to the surrounding tissue. Despite the close relationship between the microcirculation and the surrounding tissue, there are few non-invasive methods that can evaluate both the microcirculation and its surrounding tissue at the same site. We visualized microcapillary plexus structures in human skin using in vivo reflectance confocal laser scanning microscopy (CLSM), a Vivascope 3000® (Lucid Inc., USA), and Image J software (National Institutes of Health, USA) for video image processing. CLSM is a non-invasive technique that can visualize the internal structure of the skin at the cellular level. In addition to internal morphological information such as the extracellular matrix, our method reveals capillary structures up to the depth of the subpapillary plexus at the same site without the need for additional optical systems. Video images at specific depths of the inner forearm skin were recorded. By creating frame-to-frame difference images from the video images using off-line video image processing, we obtained images that emphasize intensity changes caused by the movement of blood cells. Merging images from different depths of the skin elucidates the 3-dimensional fine line-structure of the microcirculation. Overall, our results show the feasibility of a non-invasive, high-resolution imaging technique to characterize the skin microcirculation and the surrounding tissue.
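A minimal sketch of the frame-to-frame differencing used to highlight moving blood cells, applied to a synthetic video stack; in practice the CLSM video frames would be loaded from file, and the "capillary" here is just a moving bright pixel for illustration.

```python
import numpy as np

def motion_emphasis(stack):
    """Given a video as a (frames, rows, cols) array, return the mean of the
    absolute frame-to-frame differences: static tissue cancels out, while
    pixels whose intensity changes (moving blood cells) stay bright."""
    diffs = np.abs(np.diff(stack.astype(float), axis=0))
    return diffs.mean(axis=0)

# Synthetic stack: static background plus a bright cell moving along a "capillary".
frames = np.full((10, 64, 64), 50.0)
for t in range(10):
    frames[t, 32, 10 + 4 * t] = 200.0          # cell position advances each frame

motion_map = motion_emphasis(frames)
print(motion_map[32, 10:50].max(), motion_map[5, 5])   # capillary pixels >> static pixels
```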
Coincident Extraction of Line Objects from Stereo Image Pairs.
1983-09-01
4.4.3 Reconstruction of intersections; 4.5 Final result processing; 5. Presentation of the results; 5.1 FIM image processing system; 5.2 Extraction results in...image. To achieve this goal, the existing software system had to be modified and extended considerably. The following sections of this report will give...8000 pixels of each image without explicit loading of subimages could not yet be performed due to computer system software problems.
Progressive sample processing of band selection for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu
2017-10-01
Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with lower inter-band redundancy. Many types of BS algorithms were proposed in the past. However, most of them are carried out in an off-line manner, meaning that they can only be applied to pre-collected data. Those off-line methods are sometimes of little use for time-critical applications, particularly in disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. The PSP is an "on-line" framework where a specific type of algorithm can process the currently collected data during the data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an on-line BS method that integrates a sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, the BS can be carried out by updating the BS result recursively pixel by pixel in the same way that a Kalman filter does for updating data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived by using matrix decomposition. The experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. The convergence of BS results during the transmission can be quickly achieved by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in a real-time manner when the HSI data is transmitted pixel by pixel.
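A minimal sketch of sparse-regression band selection with orthogonal matching pursuit, using scikit-learn's OMP on a synthetic cube: only the batch BS step is shown, not the recursive pixel-by-pixel PSP update derived in the paper, and the "target" signature reconstructed from the bands is an assumption made for illustration.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n_pixels, n_bands, n_selected = 500, 60, 5

# Synthetic cube: the target signature is a mixture of a few "informative" bands.
informative = rng.choice(n_bands, n_selected, replace=False)
cube = rng.normal(size=(n_pixels, n_bands))                          # pixels x bands
target = cube[:, informative] @ rng.uniform(0.5, 1.5, n_selected)    # e.g. a mean pixel signature

# OMP picks the bands whose columns best reconstruct the target in a sparse sense.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_selected)
omp.fit(cube, target)
selected = np.flatnonzero(omp.coef_)
print("true bands:", np.sort(informative), "selected:", np.sort(selected))
```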
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nabavizadeh, Nima, E-mail: nabaviza@ohsu.edu; Elliott, David A.; Chen, Yiyi
Purpose: To survey image guided radiation therapy (IGRT) practice patterns, as well as IGRT's impact on clinical workflow and planning treatment volumes (PTVs). Methods and Materials: A sample of 5979 treatment site–specific surveys was e-mailed to the membership of the American Society for Radiation Oncology (ASTRO), with questions pertaining to IGRT modality/frequency, PTV expansions, method of image verification, and perceived utility/value of IGRT. On-line image verification was defined as images obtained and reviewed by the physician before treatment. Off-line image verification was defined as images obtained before treatment and then reviewed by the physician before the next treatment. Results: Of 601 evaluable responses, 95% reported IGRT capabilities other than portal imaging. The majority (92%) used volumetric imaging (cone-beam CT [CBCT] or megavoltage CT), with volumetric imaging being the most commonly used modality for all sites except breast. The majority of respondents obtained daily CBCTs for head and neck intensity modulated radiation therapy (IMRT), lung 3-dimensional conformal radiation therapy or IMRT, anus or pelvis IMRT, prostate IMRT, and prostatic fossa IMRT. For all sites, on-line image verification was most frequently performed during the first few fractions only. No association was seen between IGRT frequency or CBCT utilization and clinical treatment volume to PTV expansions. Of the 208 academic radiation oncologists who reported working with residents, only 41% reported trainee involvement in IGRT verification processes. Conclusion: Consensus guidelines, further evidence-based approaches for PTV margin selection, and greater resident involvement are needed for standardized use of IGRT practices.
Nabavizadeh, Nima; Elliott, David A; Chen, Yiyi; Kusano, Aaron S; Mitin, Timur; Thomas, Charles R; Holland, John M
2016-03-15
To survey image guided radiation therapy (IGRT) practice patterns, as well as IGRT's impact on clinical workflow and planning treatment volumes (PTVs). A sample of 5979 treatment site-specific surveys was e-mailed to the membership of the American Society for Radiation Oncology (ASTRO), with questions pertaining to IGRT modality/frequency, PTV expansions, method of image verification, and perceived utility/value of IGRT. On-line image verification was defined as images obtained and reviewed by the physician before treatment. Off-line image verification was defined as images obtained before treatment and then reviewed by the physician before the next treatment. Of 601 evaluable responses, 95% reported IGRT capabilities other than portal imaging. The majority (92%) used volumetric imaging (cone-beam CT [CBCT] or megavoltage CT), with volumetric imaging being the most commonly used modality for all sites except breast. The majority of respondents obtained daily CBCTs for head and neck intensity modulated radiation therapy (IMRT), lung 3-dimensional conformal radiation therapy or IMRT, anus or pelvis IMRT, prostate IMRT, and prostatic fossa IMRT. For all sites, on-line image verification was most frequently performed during the first few fractions only. No association was seen between IGRT frequency or CBCT utilization and clinical treatment volume to PTV expansions. Of the 208 academic radiation oncologists who reported working with residents, only 41% reported trainee involvement in IGRT verification processes. Consensus guidelines, further evidence-based approaches for PTV margin selection, and greater resident involvement are needed for standardized use of IGRT practices. Copyright © 2016 Elsevier Inc. All rights reserved.
Visualization of permanent marks in progressive addition lenses by digital in-line holography
NASA Astrophysics Data System (ADS)
Perucho, Beatriz; Micó, Vicente
2013-04-01
A critical issue in the production of ophthalmic lenses is to guarantee correct centering and alignment throughout the manufacturing and mounting processes. To that end, progressive addition lenses (PALs) incorporate permanent marks at standardized locations on the lens. Those marks are engraved on the surface and provide the model identification and addition power of the PAL, as well as serving as locator marks to re-ink the removable marks if necessary. Although the permanent marks should be visible by simple visual inspection, they are often faint and weak on new lenses, providing low contrast; obscured by scratches on older lenses; and partially occluded and difficult to recognize on tinted or anti-reflection coated lenses. In this contribution, we present an extremely simple visualization system for permanent marks in PALs based on digital in-line holography. Light emitted by a superluminescent diode (SLD) is used to illuminate the PAL, which is placed just before a digital (CCD) sensor. Thus, the CCD records an in-line hologram incoming from the diffracted wavefront provided by the PAL. As a result, it is possible to recover an in-focus image of the inspected region of the PAL by means of classical holographic tools applied in the digital domain. This numerical process involves digital recording of the in-line hologram, numerical back-propagation to the PAL plane, and some digital processing to reduce noise and present a high-quality final image. Preliminary experimental results are provided showing the applicability of the proposed method.
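A minimal sketch of the numerical back-propagation step using the angular spectrum method. The hologram here is only a placeholder array, and the wavelength, pixel pitch, and propagation distance are illustrative values rather than the SLD/CCD parameters of the experiment.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (negative z back-propagates)
    using the angular spectrum transfer function."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)
    H[arg < 0] = 0.0                                     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength, pitch, z = 680e-9, 3.45e-6, 8e-3             # illustrative SLD/CCD-like values

# In practice `hologram` is the recorded in-line intensity pattern; the marks
# are brought into focus by back-propagating sqrt(intensity) to the lens plane.
hologram = np.ones((512, 512))                           # placeholder intensity pattern
reconstruction = angular_spectrum_propagate(np.sqrt(hologram), wavelength, pitch, -z)
print(np.abs(reconstruction).max())
```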
IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM
NASA Technical Reports Server (NTRS)
Martin, M. D.
1994-01-01
The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which can not be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
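A minimal sketch of the stretch and subsample operations described in the IMDISP entry above, written as NumPy stand-ins for the interactive commands; 8-bit DN values and the example array are assumptions made for illustration.

```python
import numpy as np

def stretch(image, low, high):
    """Linear contrast stretch: DN <= low -> 0 (black), DN >= high -> 255
    (white), values in between shaded evenly."""
    img = image.astype(float)
    out = (img - low) / (high - low) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

def subsample(image, factor):
    """Keep every `factor`-th pixel of every `factor`-th line,
    starting from the upper-left corner."""
    return image[::factor, ::factor]

rng = np.random.default_rng(5)
dn = rng.integers(40, 180, size=(800, 800), dtype=np.uint8)
print(stretch(dn, 60, 160).min(), stretch(dn, 60, 160).max())   # 0 .. 255
print(subsample(dn, 2).shape)                                    # (400, 400)
```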
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
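A minimal numerical illustration of the smearing effect discussed above (not the paper's analytical model): a Gaussian star spot is integrated along a line segment of motion, noise is added, and the centroid error relative to the mid-exposure position is evaluated. The spot size, smear length, and noise level are arbitrary.

```python
import numpy as np

def smeared_spot(size=32, sigma=1.2, length=6.0, angle=0.3, steps=200):
    """Energy distribution of a star spot that moves along a line segment
    during the exposure: the static Gaussian PSF is integrated over the
    motion (equivalent to convolving it with a line segment spread function)."""
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = (size - 1) / 2.0
    img = np.zeros((size, size))
    for t in np.linspace(-0.5, 0.5, steps):
        ox = cx + t * length * np.cos(angle)
        oy = cy + t * length * np.sin(angle)
        img += np.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * sigma**2))
    return img / steps

def centroid(img):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (x * img).sum() / img.sum(), (y * img).sum() / img.sum()

rng = np.random.default_rng(6)
spot = smeared_spot()
noisy = spot + rng.normal(scale=0.02, size=spot.shape)   # readout noise
true_c = (31 / 2.0, 31 / 2.0)                            # mid-exposure spot position
est_c = centroid(noisy)
print("centroid error (px):", est_c[0] - true_c[0], est_c[1] - true_c[1])
```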
Chang, C F; Williams, R C; Grano, D A; Downing, K H; Glaeser, R M
1983-01-01
This study investigates the causes of the apparent differences between the optical diffraction pattern of a micrograph of a Tobacco Mosaic Virus (TMV) particle, the optical diffraction pattern of a ten-fold photographically averaged image, and the computed diffraction pattern of the original micrograph. Peak intensities along the layer lines in the transform of the averaged image appear to be quite unlike those in the diffraction pattern of the original micrograph, and the diffraction intensities for the averaged image extend to unexpectedly high resolution. A carefully controlled, quantitative comparison reveals, however, that the optical diffraction pattern of the original micrograph and that of the ten-fold averaged image are essentially equivalent. Using computer-based image processing, we discovered that the peak intensities on the 6th layer line have values very similar in magnitude to the neighboring noise, in contrast to what was expected from the optical diffraction pattern of the original micrograph. This discrepancy was resolved by recording a series of optical diffraction patterns when the original micrograph was immersed in oil. These patterns revealed the presence of a substantial phase grating effect, which exaggerated the peak intensities on the 6th layer line, causing an erroneous impression that the high resolution features possessed a good signal-to-noise ratio. This study thus reveals some pitfalls and misleading results that can be encountered when using optical diffraction patterns to evaluate image quality.
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic crystal face plates is still most effective for high fidelity. This method was improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in the memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlacing-to-linewise conversion and the mechanical equipment, and lengthens exposure time while it shortens recording time. The latest image transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The 9-step procedure includes preprocessing the total color television signal with reduction of noise level and time errors, followed by frame frequency conversion and setting the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of R, G, B signals and colorimetric matching of TV camera and film tape, the simultaneous R, G, B signals are converted from interlacing to sequential triads of color-quotient frames with linewise scanning at triple frequency. Color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve the signal quality and simplify the control of processes, not requiring stabilization of circuits, image processing is still analog.
Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi
2014-02-01
We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
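A minimal sketch of the squared-difference angiogram with a brute-force bulk-tissue-motion search is given below. It mimics the idea described above (choose the axial/lateral pixel shift that minimises the summed difference, then keep the squared-difference image), but it is a CPU stand-in rather than the paper's GPU implementation; the shift range and data are assumptions.

    import numpy as np

    def bulk_motion_corrected_angiogram(frame1, frame2, max_shift=5):
        """Squared-difference angiogram of two sequential structural OCT frames.
        Bulk tissue motion is estimated by searching the axial/lateral pixel
        shift of frame2 that minimises the summed squared difference."""
        best = None
        for dz in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(frame2, dz, axis=0), dx, axis=1)
                diff = (frame1 - shifted) ** 2
                score = diff.sum()
                if best is None or score < best[0]:
                    best = (score, diff)
        return best[1]

    f1 = np.random.rand(256, 256)
    f2 = np.roll(f1, 3, axis=0) + 0.05 * np.random.rand(256, 256)  # simulated bulk shift
    angio = bulk_motion_corrected_angiogram(f1, f2)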
NASA Astrophysics Data System (ADS)
Rahman, Mir Mustafizur
In collaboration with The City of Calgary 2011 Sustainability Direction and as part of the HEAT (Heat Energy Assessment Technologies) project, the focus of this research is to develop a semi/automated 'protocol' to post-process large volumes of high-resolution (H-res) airborne thermal infrared (TIR) imagery to enable accurate urban waste heat mapping. HEAT is a free GeoWeb service, designed to help Calgary residents improve their home energy efficiency by visualizing the amount and location of waste heat leaving their homes and communities, as easily as clicking on their house in Google Maps. HEAT metrics are derived from 43 flight lines of TABI-1800 (Thermal Airborne Broadband Imager) data acquired on May 13--14, 2012 at night (11:00 pm--5:00 am) over The City of Calgary, Alberta (~825 km²) at a 50 cm spatial resolution and 0.05°C thermal resolution. At present, the only way to generate a large area, high-spatial resolution TIR scene is to acquire separate airborne flight lines and mosaic them together. However, the ambient sensed temperature within, and between flight lines naturally changes during acquisition (due to varying atmospheric and local micro-climate conditions), resulting in mosaicked images with different temperatures for the same scene components (e.g. roads, buildings), and mosaic join-lines arbitrarily bisect many thousands of homes. In combination these effects result in reduced utility and classification accuracy including poorly defined HEAT Metrics, inaccurate hotspot detection and raw imagery that are difficult to interpret. In an effort to minimize these effects, three new semi/automated post-processing algorithms (the protocol) are described, which are then used to generate a 43 flight line mosaic of TABI-1800 data from which accurate Calgary waste heat maps and HEAT metrics can be generated. These algorithms (presented as four peer-reviewed papers) are: (a) Thermal Urban Road Normalization (TURN)---used to mitigate the microclimatic variability within a thermal flight line based on varying road temperatures; (b) Automated Polynomial Relative Radiometric Normalization (RRN)---which mitigates the between flight line radiometric variability; and (c) Object Based Mosaicking (OBM)---which minimizes the geometric distortion along the mosaic edge between each flight line. A modified Emissivity Modulation technique is also described to correct H-res TIR images for emissivity. This combined radiometric and geometric post-processing protocol (i) increases the visual agreement between TABI-1800 flight lines, (ii) improves radiometric agreement within/between flight lines, (iii) produces a visually seamless mosaic, (iv) improves hot-spot detection and landcover classification accuracy, and (v) provides accurate data for thermal-based HEAT energy models. Keywords: Thermal Infrared, Post-Processing, High Spatial Resolution, Airborne, Thermal Urban Road Normalization (TURN), Relative Radiometric Normalization (RRN), Object Based Mosaicking (OBM), TABI-1800, HEAT, and Automation.
Skeletonization of gray-scale images by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Qian, Kai; Cao, Siqi; Bhattacharya, Prabir
1997-07-01
In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight the significant features of the pattern. There is interest in applying thinning directly to gray-scale images, motivated by the desire to process images characterized by meaningful information distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-white and gray pictures. This algorithm is based on the gray-weighted distance transformation, can be used to process gray-scale pictures whose intensities are not uniformly distributed, and preserves the topology of the original picture. The process includes a preliminary phase of investigating the 'hollows' in the gray-scale image; these hollows are or are not treated as topological constraints for the skeleton structure, depending on whether their depth is statistically significant. The algorithm can also be executed on a parallel machine, since all operations are local. Some examples are discussed to illustrate the algorithm.
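A gray-weighted distance transform, the core quantity the skeletonization above builds on, can be sketched as a Dijkstra-style propagation in which the cost of a path is the sum of the gray values it crosses. The code below is a generic illustration with assumed 4-connectivity and seed definition, not the authors' algorithm.

    import heapq
    import numpy as np

    def gray_weighted_distance_transform(gray, seeds):
        """Gray-weighted distance transform: the cost of a path is the sum of
        the gray values of the pixels it passes through (4-connected Dijkstra).
        `seeds` is a boolean array marking the pixels with zero base distance."""
        dist = np.full(gray.shape, np.inf)
        heap = []
        for (r, c) in zip(*np.nonzero(seeds)):
            dist[r, c] = gray[r, c]
            heapq.heappush(heap, (dist[r, c], r, c))
        h, w = gray.shape
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > dist[r, c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    nd = d + gray[rr, cc]
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        heapq.heappush(heap, (nd, rr, cc))
        return dist

    gray = np.random.randint(1, 255, (64, 64)).astype(float)
    seeds = np.zeros_like(gray, dtype=bool); seeds[0, :] = True
    print(gray_weighted_distance_transform(gray, seeds).max())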
Real time automated inspection
Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.
1985-01-01
A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R
2018-05-10
Water equivalent diameter (Dw) reflects patient's attenuation and is a sound descriptor of patient size, and is used to determine size-specific dose estimator from a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity to calibrate localizer pixel values so as to represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from water equivalent area (Aw) which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process is conducted to establish the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners: a GE CT750HD, a Siemens Definition AS, and a Toshiba Acquilion Prime80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R² all greater than 0.998) on all tested conditions, regardless of the direction and image filters used on the localizer radiographs. When comparing LAT and PA directions with the same image filter and for the same scanner, the slope values were close (maximum difference of 0.02 mm), and the intercept values showed larger deviations (maximum difference of 2.8 mm). Water equivalent diameter estimation on phantoms and patients demonstrated high accuracy of the calibration: percentage difference between Dw from axial images and localizers was below 2%. With five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow for the Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
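The water-equivalent quantities discussed above follow simple closed forms: Aw is the sum of (HU/1000 + 1) over the axial image times the pixel area, and Dw = 2*sqrt(Aw/pi). The sketch below shows these together with a linear Aw-versus-LPV calibration fit; the numbers and function names are illustrative assumptions, not the study's measured values.

    import numpy as np

    def water_equivalent_area(axial_hu, pixel_area_mm2):
        """Water equivalent area Aw of one axial CT image:
        Aw = sum(HU/1000 + 1) * pixel area."""
        return float(np.sum(axial_hu / 1000.0 + 1.0) * pixel_area_mm2)

    def water_equivalent_diameter(aw_mm2):
        return 2.0 * np.sqrt(aw_mm2 / np.pi)

    def calibrate(aw_values, lpv_values):
        """Least-squares fit Aw = slope * LPV + intercept, standing in for the
        phantom-based calibration described above (values are made up)."""
        slope, intercept = np.polyfit(lpv_values, aw_values, 1)
        return slope, intercept

    # After calibration, Dw can be estimated from the localizer alone:
    slope, intercept = calibrate(np.array([3.0e4, 5.0e4, 7.0e4]),
                                 np.array([120.0, 200.0, 280.0]))
    dw = water_equivalent_diameter(slope * 150.0 + intercept)
    print(round(dw, 1), "mm")   # about 218.5 mm for this assumed calibration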
Automated processing for proton spectroscopic imaging using water reference deconvolution.
Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W
1994-06-01
Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
Research on Airborne SAR Imaging Based on Esc Algorithm
NASA Astrophysics Data System (ADS)
Dong, X. T.; Yue, X. J.; Zhao, Y. H.; Han, C. M.
2017-09-01
Due to its ability to obtain abundant information flexibly, accurately, and quickly, airborne SAR is significant in the field of Earth observation and many other applications. Ideally the flight paths are straight lines, but in reality some deviation from the ideal path is impossible to avoid. A small disturbance from the ideal line has a major effect on the signal phase, dramatically deteriorating the quality of SAR images and data. Therefore, to obtain accurate echo information and radar images, it is essential to measure and compensate for the nonlinear motion of the antenna trajectory. By compensating each flight trajectory to its reference track, the MOCO method corrects the linear and quadratic phase errors caused by nonlinear antenna trajectories. Position and Orientation System (POS) data are used to acquire accurate motion attitudes and spatial positions of the antenna phase centre (APC). In this paper, the extended chirp scaling (ECS) algorithm is used to process the echo data of airborne SAR. An experiment is done using VV-polarization raw data from a C-band airborne SAR. Quality evaluations of compensated and uncompensated SAR images are carried out, and the former always perform better than the latter. After MOCO processing, azimuth ambiguity declines, the peak side lobe ratio (PSLR) improves effectively, and the resolution of the images improves markedly. The result shows the validity and practicality of the imaging process for airborne SAR.
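Line-of-sight motion compensation amounts to removing the two-way phase error phi(t) = 4*pi*delta_r(t)/lambda from the raw echo data. The sketch below applies this first-order correction per azimuth line; it is a simplified stand-in for the MOCO processing described above, and the wavelength and deviation profile are assumed values.

    import numpy as np

    def moco_phase_correction(raw, delta_r, wavelength):
        """First-order motion compensation: remove the two-way phase error
        phi(t) = 4*pi*delta_r(t)/lambda caused by the deviation delta_r of the
        antenna phase centre from the reference track (one value per azimuth
        line). `raw` is a complex (azimuth x range) raw-data matrix."""
        phase_error = 4.0 * np.pi * delta_r / wavelength   # radians per azimuth line
        return raw * np.exp(-1j * phase_error)[:, np.newaxis]

    # Illustrative C-band example (wavelength ~0.056 m), sinusoidal track deviation
    raw = (np.random.randn(1024, 2048) + 1j * np.random.randn(1024, 2048)).astype(np.complex64)
    delta_r = 0.02 * np.sin(np.linspace(0, 10 * np.pi, 1024))   # metres
    corrected = moco_phase_correction(raw, delta_r, wavelength=0.056)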
Evaluation of a hyperspectral image database for demosaicking purposes
NASA Astrophysics Data System (ADS)
Larabi, Mohamed-Chaker; Süsstrunk, Sabine
2011-01-01
We present a study on the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipe-line and to compare two different scenarios: evaluate the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluate the performance of demosaicking algorithms applied on the final sRGB color rendered image. The second scenario is the most frequently used one in literature because CFA design and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipe-line with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, differs significantly depending on whether the mosaicking/demosaicking is applied to camera raw values as opposed to already rendered sRGB images. We argue that evaluating the former gives a better indication of how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.
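A toy version of the mosaicking/demosaicking/CPSNR loop used in such evaluations is sketched below: sample an RGB image through an RGGB Bayer pattern, demosaic it with plain bilinear interpolation (one possible linear technique), and score the result with CPSNR. The pattern layout, kernels and peak value are assumptions, not the paper's exact pipeline.

    import numpy as np
    from scipy.ndimage import convolve

    def mosaic_rggb(rgb):
        """Sample an RGB image through an RGGB Bayer CFA (one value per pixel)."""
        h, w, _ = rgb.shape
        cfa = np.zeros((h, w))
        cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R
        cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
        cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
        cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B
        return cfa

    def demosaic_bilinear(cfa):
        """Bilinear demosaicking by convolving each sparse colour plane."""
        h, w = cfa.shape
        r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
        r[0::2, 0::2] = cfa[0::2, 0::2]
        g[0::2, 1::2] = cfa[0::2, 1::2]; g[1::2, 0::2] = cfa[1::2, 0::2]
        b[1::2, 1::2] = cfa[1::2, 1::2]
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

    def cpsnr(ref, test, peak=1.0):
        """Colour peak signal-to-noise ratio over all three channels."""
        mse = np.mean((ref - test) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    rgb = np.random.rand(128, 128, 3)
    print("CPSNR:", round(cpsnr(rgb, demosaic_bilinear(mosaic_rggb(rgb))), 2), "dB")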
Chen, Qing; Xu, Pengfei; Liu, Wenzhong
2016-01-01
Computer vision as a fast, low-cost, noncontact, and online monitoring technology has been an important tool to inspect product quality, particularly on a large-scale assembly production line. However, the current industrial vision system is far from satisfactory in the intelligent perception of complex grain images, comprising a large number of local homogeneous fragmentations or patches without distinct foreground and background. We attempt to solve this problem based on the statistical modeling of spatial structures of grain images. We present a physical explanation in advance to indicate that the spatial structures of the complex grain images are subject to a representative Weibull distribution according to the theory of sequential fragmentation, which is well known in the continued comminution of ore grinding. To delineate the spatial structure of the grain image, we present a method of multiscale and omnidirectional Gaussian derivative filtering. Then, a product quality classifier based on sparse multikernel–least squares support vector machine is proposed to solve the low-confidence classification problem of imbalanced data distribution. The proposed method is applied on the assembly line of a food-processing enterprise to classify (or identify) automatically the production quality of rice. The experiments on the real application case, compared with the commonly used methods, illustrate the validity of our method. PMID:26986726
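The Weibull description of spatial structure mentioned above can be reproduced in miniature by filtering an image with Gaussian derivatives and fitting a two-parameter Weibull distribution to the response magnitudes. The sketch below uses SciPy for both steps; the single scale and feature choice are assumptions, not the paper's multiscale, omnidirectional filter bank.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.stats import weibull_min

    def gradient_magnitude(img, sigma):
        """Gaussian-derivative filtering (one scale, two orthogonal directions)."""
        gx = gaussian_filter(img, sigma, order=(0, 1))
        gy = gaussian_filter(img, sigma, order=(1, 0))
        return np.hypot(gx, gy)

    def fit_weibull(responses):
        """Fit a two-parameter Weibull distribution (location fixed at 0) to the
        filter responses; (shape, scale) can then serve as texture features."""
        vals = responses.ravel()
        vals = vals[vals > 0]
        shape, _, scale = weibull_min.fit(vals, floc=0)
        return shape, scale

    grain = np.random.rand(256, 256)          # stand-in for a grain image
    print(fit_weibull(gradient_magnitude(grain, sigma=2.0)))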
Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.
Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L
2011-01-01
Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and the image-processing algorithms in order to calculate automatically flow velocity on-line. Results of investigations conducted in a CSO are presented. The system was found to measure reliably water velocities, thereby providing the means to understand particular hydraulic behaviors.
On-line determination of pork color and intramuscular fat by computer vision
NASA Astrophysics Data System (ADS)
Liao, Yi-Tao; Fan, Yu-Xia; Wu, Xue-Qian; Xie, Li-juan; Cheng, Fang
2010-04-01
In this study, the application potential of computer vision in on-line determination of CIE L*a*b* and content of intramuscular fat (IMF) of pork was evaluated. Images of pork chop from 211 pig carcasses were captured while samples were on a conveyor belt at the speed of 0.25 m·s-1 to simulate the on-line environment. CIE L*a*b* and IMF content were measured with colorimeter and chemical extractor as reference. The KSW algorithm combined with region selection was employed in eliminating the surrounding fat of longissimus dorsi muscle (MLD). RGB values of the pork were counted and five methods were applied for transforming RGB values to CIE L*a*b* values. The region growing algorithm with multiple seed points was applied to mask out the IMF pixels within the intensity corrected images. The performances of the proposed algorithms were verified by comparing the measured reference values and the quality characteristics obtained by image processing. The MLD region of six samples could not be identified using the KSW algorithm. Intensity nonuniformity of the pork surface in the image can be eliminated efficiently, and the IMF region of three corrected images failed to be extracted. Given the considerable variety of color and the complexity of the pork surface, the CIE L*, a* and b* color of MLD could be predicted with correlation coefficients of 0.84, 0.54 and 0.47 respectively, and IMF content could be determined with a correlation coefficient more than 0.70. The study demonstrated that it is feasible to evaluate CIE L*a*b* values and IMF content on-line using computer vision.
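Transforming camera RGB values to CIE L*a*b* is one of the steps evaluated above. The sketch below shows a standard sRGB (D65) conversion as one possible candidate transform; the study compared several such mappings, and this particular matrix and white point are assumptions about which one might be used.

    import numpy as np

    # Minimal sRGB (D65) -> CIE L*a*b* conversion
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    WHITE = np.array([0.95047, 1.00000, 1.08883])   # D65 reference white

    def rgb_to_lab(rgb):
        """`rgb` is an (..., 3) array of sRGB values in [0, 1]."""
        c = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
        xyz = c @ M.T / WHITE
        eps = (6.0 / 29.0) ** 3
        f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4.0 / 29.0)
        L = 116.0 * f[..., 1] - 16.0
        a = 500.0 * (f[..., 0] - f[..., 1])
        b = 200.0 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)

    print(rgb_to_lab(np.array([0.6, 0.3, 0.3])))    # a reddish, pork-like colour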
Electro-optical imaging systems integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, R.
1987-01-01
Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third generation (TLBR) was designed and two units delivered to rapidly produce high quality wet process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a ''Scan FIX'' capability which corrects for scanner fault errors and ''Scan LOC'' system which provides for complete phase synchronism isolation between scanner and digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for Reconnaissance/Tactical applications.
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Ortiz-Ruiz, Alejandra; Postigo, María; Gil-Casanova, Sara; Cuadrado, Daniel; Bautista, José M; Rubio, José Miguel; Luengo-Oroz, Miguel; Linares, María
2018-01-30
Routine field diagnosis of malaria is a considerable challenge in rural and low resources endemic areas mainly due to lack of personnel, training and sample processing capacity. In addition, differential diagnosis of Plasmodium species has a high level of misdiagnosis. Real time remote microscopical diagnosis through on-line crowdsourcing platforms could be converted into an agile network to support diagnosis-based treatment and malaria control in low resources areas. This study explores whether accurate Plasmodium species identification-a critical step during the diagnosis protocol in order to choose the appropriate medication-is possible through the information provided by non-trained on-line volunteers. 88 volunteers have performed a series of questionnaires over 110 images to differentiate species (Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, Plasmodium malariae, Plasmodium knowlesi) and parasite staging from thin blood smear images digitalized with a smartphone camera adapted to the ocular of a conventional light microscope. Visual cues evaluated in the surveys include texture and colour, parasite shape and red blood cell size. On-line volunteers are able to discriminate Plasmodium species (P. falciparum, P. malariae, P. vivax, P. ovale, P. knowlesi) and stages in thin-blood smears according to visual cues observed on digitalized images of parasitized red blood cells. Friendly textual descriptions of the visual cues and specialized malaria terminology are key for volunteers' learning and efficiency. On-line volunteers with short training are able to differentiate malaria parasite species and parasite stages from digitalized thin smears based on simple visual cues (shape, size, texture and colour). While the accuracy of a single on-line expert is far from perfect, a single parasite classification obtained by combining the opinions of multiple on-line volunteers over the same smear, could improve accuracy and reliability of Plasmodium species identification in remote malaria diagnosis.
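Combining the answers of several volunteers into one call can be as simple as a majority vote with an agreement score, as sketched below. This is only an illustration of the aggregation idea; the study's actual combination rule may differ.

    from collections import Counter

    def combine_votes(votes):
        """Combine independent volunteer answers for one smear into a single
        call, returning the majority species and the fraction of agreement."""
        counts = Counter(votes)
        species, n = counts.most_common(1)[0]
        return species, n / len(votes)

    votes = ["P. falciparum", "P. falciparum", "P. vivax",
             "P. falciparum", "P. ovale", "P. falciparum"]
    print(combine_votes(votes))   # ('P. falciparum', approximately 0.67)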
The Onion Sign in neovascular age-related macular degeneration represents cholesterol crystals
Pang, Claudine E.; Messinger, Jeffrey D.; Zanzottera, Emma C.; Freund, K. Bailey; Curcio, Christine A.
2015-01-01
Purpose To investigate the frequency, natural evolution and histological correlates of layered, hyperreflective, sub-retinal pigment epithelium (sub-RPE) lines, known as the Onion Sign, in neovascular age-related macular degeneration (nvAMD). Design Retrospective observational cohort study; an experimental laboratory study. Participants Two hundred thirty eyes of 150 consecutive patients with nvAMD; 40 human donor eyes with clinical and histopathologic diagnosis of nvAMD. Methods Spectral-domain optical coherence tomography (SD-OCT), near-infrared reflectance (nIR), color fundus images and medical charts were reviewed. Donor eyes underwent multimodal ex vivo imaging including SD-OCT before processing for high-resolution histology. Main Outcome Measures Presence of layered, hyperreflective sub-RPE lines, qualitative analysis of their change in appearance over time with SD-OCT, histological correlates of these lines, and associated findings within surrounding tissues. Results Sixteen of 230 eyes of patients (7.0%) and 2 of 40 donor eyes (5.0%) with nvAMD had layered, hyperreflective sub-RPE lines on SD-OCT imaging. These appeared as refractile, yellow-gray exudates on color imaging and hyperreflective lesions on nIR. In all 16 eyes, the Onion Sign persisted in follow-up for up to 5 years, with fluctuations in the abundance of lines and associated with intraretinal hyperreflective foci. Patients with the Onion Sign were disproportionately taking cholesterol-lowering medications (p = 0.025). Histology of 2 donor eyes revealed that hyperreflective lines correlated with clefts created by extraction of cholesterol crystals during tissue processing. Fluid surrounding crystals contained lipid yet was distinct from oily drusen. Intraretinal hyperreflective foci correlated with intraretinal RPE and lipid-filled cells of probable monocyte origin. Conclusion Persistent and dynamic, the Onion Sign represents sub-RPE cholesterol crystal precipitation in aqueous environment. The frequency of the Onion Sign in nvAMD in a referral practice and a pathology archive is 5–7%. Associations include use of cholesterol-lowering medication and intraretinal hyperreflective foci attributable to RPE cells and lipid-filled cells of monocyte origin. PMID:26298717
Laser marking as a result of applying reverse engineering
NASA Astrophysics Data System (ADS)
Mihalache, Andrei; Nagîţ, Gheorghe; Rîpanu, Marius Ionuţ; Slǎtineanu, Laurenţiu; Dodun, Oana; Coteaţǎ, Margareta
2018-05-01
The elaboration of a modern manufacturing technology needs a certain quantum of information concerning the part to be obtained. When it is necessary to elaborate the technology for an existing object, such information could be ensured by using the principles specific to reverse engineering. Essentially, in the case of this method, the analysis of the surfaces and of other characteristics of the part must offer enough information for the elaboration of the part manufacturing technology. On the other hand, it is known that laser marking is a processing method able to ensure the transfer of various inscriptions or drawings onto a part. Sometimes, the laser marking could be based on the analysis of an existing object, whose image could be used to generate the same object or an improved object. There are many groups of factors able to affect the results of applying the laser marking process. A theoretical analysis was proposed to show that the heights of triangles obtained by means of CNC marking equipment depend on the width of the line generated by the laser spot on the workpiece surface. An experimental study was designed and carried out to highlight the influence exerted by the line width and the angle of line intersections on the accuracy of the marking process. By mathematical processing of the experimental results, empirical mathematical models were determined. The power type model and the graphical representation elaborated on the basis of this model offered a picture of the influences exerted by the considered input factors on the marking process accuracy.
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
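Decoding a weighted binary code from a stack of scan images reduces to thresholding each bit plane and summing the bit weights, as sketched below. The array shapes, the threshold and the MSB-first ordering are assumptions made for illustration, not the calibration software described above.

    import numpy as np

    def decode_binary_scan(bitplane_stack, threshold):
        """Decode a stack of images captured while binary-coded stripe patterns
        are projected at the IOFB input. bitplane_stack has shape (n_bits, H, W),
        most significant bit first; the decoded value at each output pixel is
        the input coordinate it maps to."""
        bits = (bitplane_stack > threshold).astype(np.int64)
        n_bits = bits.shape[0]
        weights = 2 ** np.arange(n_bits - 1, -1, -1)       # MSB first
        return np.tensordot(weights, bits, axes=1)

    # Illustrative use: 8 column-code images and 8 row-code images captured at
    # the fibre-bundle output would give, per output pixel, the input (column,
    # row) needed to build the reconstruction table.
    stack = np.random.rand(8, 64, 64)
    cols = decode_binary_scan(stack, threshold=0.5)
    print(cols.min(), cols.max())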
Use of scatterometry for resist process control
NASA Astrophysics Data System (ADS)
Bishop, Kenneth P.; Milner, Lisa-Michelle; Naqvi, S. Sohail H.; McNeil, John R.; Draper, B. L.
1992-06-01
The formation of resist lines having submicron critical dimensions (CDs) is a complex multistep process, requiring precise control of each processing step. Optimization of parameters for each processing step may be accomplished through theoretical modeling techniques and/or the use of send-ahead wafers followed by scanning electron microscope measurements. Once the optimum parameters for any process have been selected (e.g., time duration and temperature for the post-exposure bake process), no in-situ CD measurements are made. In this paper we describe the use of scatterometry to provide this essential metrology capability. It involves focusing a laser beam on a periodic grating and predicting the shape of the grating lines from a measurement of the scattered power in the diffraction orders. The inverse prediction of lineshape from a measurement of the scatter power is based on a vector diffraction analysis used in conjunction with photolithography simulation tools to provide an accurate scatter model for latent image gratings. This diffraction technique has previously been applied to observing latent image grating formation as exposure takes place. We have broadened the scope of the application and consider the problem of determination of optimal focus.
NASA Astrophysics Data System (ADS)
Yao, Xuan; Wang, Yuanbo; Ravanfar, Mohammadreza; Pfeiffer, Ferris M.; Duan, Dongsheng; Yao, Gang
2016-11-01
Collagen fiber orientation plays an important role in determining the structure and function of the articular cartilage. However, there is currently a lack of nondestructive means to image the fiber orientation from the cartilage surface. The purpose of this study is to investigate whether the newly developed optical polarization tractography (OPT) can image fiber structure in articular cartilage. OPT was applied to obtain the depth-dependent fiber orientation in fresh articular cartilage samples obtained from porcine phalanges. For comparison, we also obtained collagen fiber orientation in the superficial zone of the cartilage using the established split-line method. The direction of each split-line was quantified using image processing. The orientation measured in OPT agreed well with those obtained from the split-line method. The correlation analysis of a total of 112 split-lines showed a greater than 0.9 coefficient of determination (R2) between the split-line results and OPT measurements obtained between 40 and 108 μm in depth. In addition, the thickness of the superficial layer can also be assessed from the birefringence images obtained in OPT. These results support that OPT provides a nondestructive way to image the collagen fiber structure in articular cartilage. This technology may be valuable for both basic cartilage research and clinical orthopedic applications.
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
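The 1-D recursive lineage of these methods can be illustrated with a scalar Kalman-type filter run pixel by pixel along each scan line, as below. This sketch uses fixed noise parameters, so it shows the recursive core rather than the adaptive, locally tuned processing discussed above; q, r and the crude noise estimate are assumptions.

    import numpy as np

    def recursive_line_filter(img, q=1e-4, r=None):
        """Scan-line-ordered recursive (Kalman-type) estimate of a noisy image:
        a random-walk state model processed pixel by pixel along each line.
        q is the process-noise variance; r, the measurement-noise variance,
        defaults to a crude global estimate."""
        if r is None:
            r = np.var(np.diff(img, axis=1)) / 2.0     # rough noise estimate
        out = np.empty(img.shape, dtype=float)
        for i, line in enumerate(img.astype(float)):
            x, p = line[0], r                          # initial state and covariance
            for j, z in enumerate(line):
                p = p + q                              # predict (random walk)
                k = p / (p + r)                        # Kalman gain
                x = x + k * (z - x)                    # update with the new pixel
                p = (1.0 - k) * p
                out[i, j] = x
        return out

    noisy = np.random.normal(100.0, 5.0, (64, 256))
    smoothed = recursive_line_filter(noisy)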
Structural Information Detection Based Filter for GF-3 SAR Images
NASA Astrophysics Data System (ADS)
Sun, Z.; Song, Y.
2018-04-01
The GF-3 satellite, with its high resolution, large swath, multiple imaging modes, long service life and other characteristics, can achieve all-weather, all-day monitoring of global land and ocean. With its C-band multi-polarized synthetic aperture radar (SAR), it has become the highest-resolution satellite system of its kind in the world. However, due to the coherent imaging system, speckle appears in GF-3 SAR images and seriously hinders their understanding and interpretation. Therefore, the processing of SAR images poses significant challenges owing to speckle. The high-resolution SAR images produced by the GF-3 satellite are rich in information and have distinct feature structures such as points, edges and lines. Traditional filters such as the Lee filter and the Gamma MAP filter are not appropriate for GF-3 SAR images since they ignore the structural information of the images. In this paper, a structural information detection based filter is constructed, successively including point target detection in the smallest window, an adaptive windowing method based on regional characteristics, and the selection of the most homogeneous sub-window. Despeckling experiments on GF-3 SAR images demonstrate that, compared with the traditional filters, the proposed structural information detection based filter preserves points, edges and lines well while smoothing the speckle more thoroughly.
ARTIP: Automated Radio Telescope Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh
2018-02-01
The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
An embedded barcode for "connected" malaria rapid diagnostic tests.
Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R
2017-03-29
Many countries are shifting their efforts from malaria control to disease elimination. New technologies will be necessary to meet the more stringent demands of elimination campaigns, including improved quality control of malaria diagnostic tests, as well as an improved means for communicating test results among field healthcare workers, test manufacturers, and national ministries of health. In this report, we describe and evaluate an embedded barcode within standard rapid diagnostic tests as one potential solution. This information-augmented diagnostic test operates on the familiar principles of traditional lateral flow assays and simply replaces the control line with a control grid patterned in the shape of a QR (quick response) code. After the test is processed, the QR code appears on both positive or negative tests. In this report we demonstrate how this multipurpose code can be used not only to fulfill the control line role of test validation, but also to embed test manufacturing details, serve as a trigger for image capture, enable registration for image analysis, and correct for lighting effects. An accompanying mobile phone application automatically captures an image of the test when the QR code is recognized, decodes the QR code, performs image processing to determine the concentration of the malarial biomarker histidine-rich protein 2 at the test line, and transmits the test results and QR code payload to a secure web portal. This approach blends automated, sub-nanomolar biomarker detection, with near real-time reporting to provide quality assurance data that will help to achieve malaria elimination.
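A minimal phone-side reading loop might look like the sketch below: decode the QR control grid with OpenCV's QRCodeDetector and, if the code is readable, measure the darkening at the test line. The file name and the test-line region of interest are hypothetical placeholders; the real application registers that region from the QR geometry, corrects for lighting and reports the result to the web portal.

    import cv2
    import numpy as np

    def read_connected_rdt(image_path, test_roi):
        """Decode the embedded QR control grid and, if the test is valid
        (QR readable), measure the mean darkening in an assumed test-line ROI.
        `test_roi` = (row0, row1, col0, col1) is a pre-registered region."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(image_path)
        payload, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
        if not payload:
            return None                      # control grid unreadable -> invalid test
        r0, r1, c0, c1 = test_roi
        signal = 255.0 - float(np.mean(img[r0:r1, c0:c1]))   # darker line = more signal
        return {"lot_info": payload, "test_line_signal": signal}

    result = read_connected_rdt("rdt_photo.png", test_roi=(200, 220, 150, 300))
    print(result)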
Curvatures Estimation in Orientation Selection
1991-01-31
[Only fragments of this report's abstract are recoverable. They compare the accuracy of the L/L edge operator when processes are run at the same scale, refer to Figure 11 (an artificial, anti-aliased grey-scale test image of lines and curves used to test the image operators), and cite MacKay, "Influence of luminance gradient reversal on simple cells in feline striate cortex," J. Physiology (London), vol. 337, pp. 69-87, 1983.]
Celi, Simona; Berti, Sergio
2014-10-01
Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows a segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT-workstation where measuring is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure to obtain 3D geometrical models that can also be used for external purposes such as for finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method with identification and segmentation manually performed by expert OCT readers. The method was evaluated on ten datasets from clinical routine and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components are possible off-line with a precision that is comparable to manual segmentation for the tissue component and to the proprietary-OCT-console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcome. Copyright © 2014 Elsevier B.V. All rights reserved.
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithm. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in details how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
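List-mode reconstruction of this kind boils down to repeated forward and back line projections per recorded event. The CPU sketch below shows a bare-bones list-mode MLEM loop with a nearest-pixel line sampler standing in for an exact ray tracer, and without the resolution kernels or sensitivity correction of the paper, so it only illustrates the structure that the GPU accelerates.

    import numpy as np

    def line_pixels(p0, p1, n_samples=200):
        """Nearest-pixel samples along the line of response from p0 to p1
        (a crude stand-in for an exact ray tracer such as Siddon's algorithm)."""
        t = np.linspace(0.0, 1.0, n_samples)
        pts = np.round(np.outer(1 - t, p0) + np.outer(t, p1)).astype(int)
        return np.unique(pts, axis=0)

    def list_mode_mlem(events, shape, n_iter=10):
        """Minimal list-mode MLEM: every event contributes one forward and one
        back line projection per iteration. Uniform sensitivity is assumed."""
        img = np.ones(shape)
        lors = [line_pixels(p0, p1) for p0, p1 in events]
        for _ in range(n_iter):
            ratio = np.zeros(shape)
            for pix in lors:
                rows, cols = pix[:, 0], pix[:, 1]
                forward = img[rows, cols].sum()            # forward projection
                if forward > 0:
                    ratio[rows, cols] += 1.0 / forward     # backproject event / forward
            img *= ratio
        return img

    events = [((10, 0), (10, 63)), ((0, 20), (63, 20)), ((0, 0), (63, 63))]
    recon = list_mode_mlem(events, shape=(64, 64))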
Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M
2015-10-01
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
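The gradient-threshold idea can be illustrated with the short pipeline below: compute a gradient magnitude, threshold it, fill holes and drop small objects. The percentile rule used here is only a placeholder and not the empirically learned threshold selection of EGT; the minimum object size and morphology steps are likewise assumptions.

    import numpy as np
    from scipy import ndimage

    def gradient_threshold_segment(img, percentile=90.0, min_size=64):
        """Foreground/background separation by thresholding the gradient
        magnitude image; the percentile rule stands in for EGT's learned
        threshold selection."""
        img = img.astype(float)
        grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
        mask = grad > np.percentile(grad, percentile)
        mask = ndimage.binary_fill_holes(ndimage.binary_closing(mask))
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
        return keep

    cells = np.random.rand(512, 512)          # stand-in for a microscopy image
    foreground = gradient_threshold_segment(cells)
    print(foreground.mean())                  # fraction of foreground pixels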
Estimation of bladder wall location in ultrasound images.
Topper, A K; Jernigan, M E
1991-05-01
A method of automatically estimating the location of the bladder wall in ultrasound images is proposed. Obtaining this estimate is intended to be the first stage in the development of an automatic bladder volume calculation system. The first step in the bladder wall estimation scheme involves globally processing the images using standard image processing techniques to highlight the bladder wall. Separate processing sequences are required to highlight the anterior bladder wall and the posterior bladder wall. The sequence to highlight the anterior bladder wall involves Gaussian smoothing and second differencing followed by zero-crossing detection. Median filtering followed by thresholding and gradient detection is used to highlight as much of the rest of the bladder wall as was visible in the original images. Then a 'bladder wall follower'--a line follower with rules based on the characteristics of ultrasound imaging and the anatomy involved--is applied to the processed images to estimate the bladder wall location by following the portions of the bladder wall which are highlighted and filling in the missing segments. The results achieved using this scheme are presented.
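The anterior-wall sequence described above (Gaussian smoothing, second differencing, zero-crossing detection) can be sketched per image column as below. The parameter values and the choice of the first zero crossing are simplifying assumptions, not the published scheme's exact rules.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def anterior_wall_estimate(img, sigma=3.0):
        """Per-column estimate of the anterior bladder wall: Gaussian smoothing
        and second differencing along each column, then the first zero crossing
        of the second derivative is taken as the wall location."""
        smoothed = gaussian_filter1d(img.astype(float), sigma, axis=0)
        second = np.diff(smoothed, n=2, axis=0)
        wall_rows = np.full(img.shape[1], -1)
        for col in range(img.shape[1]):
            s = second[:, col]
            crossings = np.flatnonzero(np.signbit(s[:-1]) != np.signbit(s[1:]))
            if crossings.size:
                wall_rows[col] = crossings[0] + 1   # offset from the double difference
        return wall_rows

    frame = np.random.rand(300, 400)            # stand-in for an ultrasound frame
    print(anterior_wall_estimate(frame)[:10])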
The imaging node for the Planetary Data System
Eliason, E.M.; LaVoie, S.K.; Soderblom, L.A.
1996-01-01
The Planetary Data System Imaging Node maintains and distributes the archives of planetary image data acquired from NASA's flight projects with the primary goal of enabling the science community to perform image processing and analysis on the data. The Node provides direct and easy access to the digital image archives through wide distribution of the data on CD-ROM media and on-line remote-access tools by way of Internet services. The Node provides digital image processing tools and the expertise and guidance necessary to understand the image collections. The data collections, now approaching one terabyte in volume, provide a foundation for remote sensing studies for virtually all the planetary systems in our solar system (except for Pluto). The Node is responsible for restoring data sets from past missions in danger of being lost. The Node works with active flight projects to assist in the creation of their archive products and to ensure that their products and data catalogs become an integral part of the Node's data collections.
Real-Time Intravascular Ultrasound and Photoacoustic Imaging
VanderLaan, Donald; Karpiouk, Andrei; Yeager, Doug; Emelianov, Stanislav
2018-01-01
Combined intravascular ultrasound and photoacoustic imaging (IVUS/IVPA) is an emerging hybrid modality being explored as a means of improving the characterization of atherosclerotic plaque anatomical and compositional features. While initial demonstrations of the technique have been encouraging, they have been limited by catheter rotation and data acquisition, display and processing rates on the order of several seconds per frame as well as the use of off-line image processing. Herein, we present a complete IVUS/IVPA imaging system and method capable of real-time IVUS/IVPA imaging, with online data acquisition, image processing and display of both IVUS and IVPA images. The integrated IVUS/IVPA catheter is fully contained within a 1 mm outer diameter torque cable coupled on the proximal end to a custom-designed spindle enabling optical and electrical coupling to system hardware, including a nanosecond-pulsed laser with a controllable pulse repetition frequency capable of greater than 10 kHz, motor and servo drive, an ultrasound pulser/receiver, and a 200 MHz digitizer. The system performance is characterized and demonstrated on a vessel-mimicking phantom with an embedded coronary stent intended to provide IVPA contrast within the context of an IVUS image. PMID:28092507
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and test of a multi-spot digital sun sensor for precise sun-line determination is described. The image forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measures. Nevertheless, the sensor operation on a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level are largely variable. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been considered as well as a calibration function exploiting also the knowledge of the sun-spot array size. Main focus of the present paper is the experimental validation of the wide field of view operation of the sensor by using a sensor prototype and a laboratory test facility. Results demonstrate that it is possible to keep high measurement precision also for large off-boresight angles.
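The precision gain of the multi-spot design comes from averaging many simultaneous spot centroids. The sketch below labels the spots, computes intensity-weighted centroids and averages them; it omits the pinhole-grid offsets and calibration function of the real sensor, and the threshold and spot model are assumptions.

    import numpy as np
    from scipy import ndimage

    def sun_spot_centroid(img, threshold):
        """Locate every sun spot above `threshold`, compute its intensity-weighted
        centroid, and average the centroids. In the real sensor the known
        pinhole-grid offsets would be removed before averaging; here they are
        ignored, so the function simply averages the individual centroids."""
        mask = img > threshold
        labels, n_spots = ndimage.label(mask)
        centroids = ndimage.center_of_mass(img, labels, index=range(1, n_spots + 1))
        return np.mean(np.asarray(centroids), axis=0), n_spots

    frame = np.zeros((128, 128))
    y, x = np.mgrid[0:128, 0:128]
    for cy, cx in [(30, 30), (30, 90), (90, 30), (90, 90)]:   # four simulated spots
        frame += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)
    mean_centroid, n = sun_spot_centroid(frame, threshold=0.1)
    print(n, mean_centroid)        # 4 spots, mean centroid near (60, 60)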
Ringkob, T P; Swartz, D R; Greaser, M L
2004-05-01
Image analysis procedures for immunofluorescence microscopy were developed to measure muscle thin filament lengths of beef, rabbit, and chicken myofibrils. Strips of beef cutaneous trunci, rectus abdominis, psoas, and masseter; chicken pectoralis; and rabbit psoas muscles were excised 5 to 30 min postmortem. Fluorescein phalloidin and rhodamine myosin subfragment-1 (S1) were used to probe the myofibril structure. Digital images were recorded with a cooled charge-coupled device controlled with IPLab Spectrum software (Signal Analytics Corp.) on a Macintosh operating system. The camera was attached to an inverted microscope, using both the phase-contrast and fluorescence illumination modes. Unfixed myofibrils incubated with fluorescein phalloidin showed fluorescence primarily at the Z-line and the tips of the thin filaments in the overlap region. Images were processed using IPLab and the National Institutes of Health's Image software. A region of interest was selected and scaled by a factor of 18.18, which enlarged the image from 11 pixels/microm to approximately 200 pixels/microm. An X-Y plot was exported to Spectrum 1.1 (Academic Software Development Group), where the signal was processed with a second derivative routine, so a cursor function could be used to measure length. Fixation before phalloidin incubation resulted in greatest intensity at the Z lines but a more-uniform staining over the remainder of the thin filament zone. High-resolution image capture and processing showed that thin filament lengths were significantly different (P < 0.01) among beef, rabbit, and chicken, with lengths of 1.28 to 1.32 microm, 1.16 microm, and 1.05 microm, respectively. Measurements using the S1 signal confirmed the phalloidin results. Fluorescent probes may be useful to study sarcomere structure and help explain species and muscle differences in meat texture.
Online aptitude automatic surface quality inspection system for hot rolled strips steel
NASA Astrophysics Data System (ADS)
Lin, Jin; Xie, Zhi-jiang; Wang, Xue; Sun, Nan-Nan
2005-12-01
Defects on the surface of hot rolled steel strips are a main factor in evaluating strip quality. An improved image recognition algorithm is used to extract the features of surface defects, and a defect recognition method based on machine vision and artificial neural networks is established to identify defects on the strip surface. Based on this research, a surface inspection system with advanced image processing algorithms for hot rolled strips was developed. Two different lighting arrangements are prepared, and line-scan CCD cameras acquire images of the moving strip surface on-line. The system inspects both the upper and lower surfaces of the strip, analyzing and classifying defects such as iron oxide scale, scratches and stamp marks. The misclassification and missed-detection rates do not exceed 5%, and the system has been applied at several large domestic steel enterprises. Experiments proved that this approach is feasible and effective.
Imaging Ultrasound Guidance and on-line Estimation of Thermal Behavior in HIFU Exposed Targets
NASA Astrophysics Data System (ADS)
Chauhan, Sunita; Haryanto, Amir
2006-05-01
Elevated temperatures have been used for many years to combat several diseases, including the treatment of certain types of cancers and tumors. High Intensity Focused Ultrasound (HIFU) has emerged as a potential non-invasive modality for trackless targeting of deep-seated cancers of the human body. For procedures which require thermal elevation, such as hyperthermia and tissue ablation, temperature becomes a parameter of vital importance for monitoring the treatment on-line. However, embedding invasive temperature probes for this purpose defeats the main advantage of an otherwise non-invasive ablation modality. In this paper, we describe the use of a non-invasive and inexpensive conventional imaging ultrasound modality for lesion positioning and estimation of the thermal behavior of tissue exposed to HIFU. Representative results of our on-line lesion tracking algorithm for discerning lesioning behavior using image capture, processing and phase-shift measurements are presented.
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
1983-10-19
ISG Report 104, Image Understanding Research: Final Technical Report covering research activity during the period beginning October 1983. [Only OCR fragments of the abstract survive in the source; they note that knowledge-based symbolic reasoning nonetheless remains dependent on the lower levels of iconic processing for its raw information, and that there is no a priori knowledge of where any particular line might go.]
Optimization of super-resolution processing using incomplete image sets in PET imaging.
Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2008-12-01
Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N × N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N × N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 × 2) or 16 (4 × 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purposes, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another and they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
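The paper's exact SR reconstruction is not reproduced here; the sketch below only illustrates the general idea of combining POV-shifted low-resolution images, and how the ISR-1/ISR-2 subsets differ from the complete set, using a naive shift-and-add scheme on an assumed 2 × 2 grid of half-pixel offsets. Function and variable names are hypothetical.

```python
import numpy as np

def shift_and_add(lr_images, shifts, factor=2):
    """Naive super-resolution: place each low-resolution image on a fine
    grid at its known sub-pixel offset and average the contributions.
    With only a subset of offsets, some fine-grid cells stay empty (zero);
    a real method would interpolate or regularize them."""
    h, w = lr_images[0].shape
    hi = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(hi)
    for img, (dy, dx) in zip(lr_images, shifts):
        hi[dy::factor, dx::factor] += img
        cnt[dy::factor, dx::factor] += 1
    return hi / np.maximum(cnt, 1)

# A 2x2 grid of POVs, offsets given in fine-grid units (half a LR pixel).
rng = np.random.default_rng(0)
grid = {(0, 0): rng.random((64, 64)), (0, 1): rng.random((64, 64)),
        (1, 0): rng.random((64, 64)), (1, 1): rng.random((64, 64))}
sides = [(0, 0), (0, 1), (1, 0)]     # cf. ISR-1: two sides of the matrix
diagonal = [(0, 0), (1, 1)]          # cf. ISR-2: diagonal of the matrix
csr = shift_and_add(list(grid.values()), list(grid.keys()))
isr1 = shift_and_add([grid[k] for k in sides], sides)
isr2 = shift_and_add([grid[k] for k in diagonal], diagonal)
```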
Airborne multidimensional integrated remote sensing system
NASA Astrophysics Data System (ADS)
Xu, Weiming; Wang, Jianyu; Shu, Rong; He, Zhiping; Ma, Yanhua
2006-12-01
In this paper, we present an airborne multidimensional integrated remote sensing system that consists of an imaging spectrometer, a three-line scanner, a laser ranger, a position and orientation subsystem, and a PAV30 stabilizer. The imaging spectrometer is composed of two identical push-broom hyperspectral imagers, each with a field of view of 22°, providing a combined field of view of 42°. The spectral range of the imaging spectrometer is from 420 nm to 900 nm, and its spectral resolution is 5 nm. The three-line scanner is composed of two panchromatic CCDs and one RGB CCD, with a 20° stereo angle and a 10 cm GSD (Ground Sample Distance) at a 1000 m flying height. The laser ranger provides height data for three points every four scanning lines of the spectral imager, and those three points are calibrated to match the corresponding pixels of the spectral imager. The post-processing attitude accuracy of the POS/AV 510 used as the position and orientation subsystem, an airborne exterior-orientation measurement product of the Canadian Applanix Corporation, is 0.005° when combined with base-station data. The airborne multidimensional integrated remote sensing system was implemented successfully, performed its first flight experiment in April 2005, and obtained satisfying data.
Prostate seed implant quality assessment using MR and CT image fusion.
Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D
1999-01-01
After a seed implant of the prostate, computerized tomography (CT) is ideal for determining seed distribution but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both of these modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three ortho-normal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).
Intelligent form removal with character stroke preservation
NASA Astrophysics Data System (ADS)
Garris, Michael D.
1996-03-01
A new technique for intelligent form removal has been developed along with a new method for evaluating its impact on optical character recognition (OCR). All the dominant lines in the image are automatically detected using the Hough line transform and intelligently erased while simultaneously preserving overlapping character strokes by computing line width statistics and keying off of certain visual cues. This new method of form removal operates on loosely defined zones with no image deskewing. Any field in which the writer is provided a horizontal line to enter a response can be processed by this method. Several examples of processed fields are provided, including a comparison of results between the new method and a commercially available forms removal package. Even if this new form removal method did not improve character recognition accuracy, it is still a significant improvement to the technology because the requirement of a priori knowledge of the form's geometric details has been greatly reduced. This relaxes the recognition system's dependence on rigid form design, printing, and reproduction by automatically detecting and removing some of the physical structures (lines) on the form. Using the National Institute of Standards and Technology (NIST) public domain form-based handprint recognition system, the technique was tested on a large number of fields containing randomly ordered handprinted lowercase alphabets, as these letters (especially those with descenders) frequently touch and extend through the line along which they are written. Preserving character strokes improves overall lowercase recognition performance by 3%, which is a net improvement, but a single performance number like this does not communicate how the recognition process was really influenced. There are expected to be trade-offs with the introduction of any new technique into a complex recognition system. To understand both the improvements and the trade-offs, a new analysis was designed to compare the statistical distributions of individual confusion pairs between two systems. As OCR technology continues to improve, sophisticated analyses like this are necessary to reduce the errors remaining in complex recognition problems.
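The NIST stroke-preservation rules are not spelled out in this abstract; the sketch below shows only the general flavour of the approach, detecting dominant lines with a probabilistic Hough transform (OpenCV) and erasing ink runs no thicker than a nominal form-line width. The thresholds and the preservation rule are placeholders, not the paper's.

```python
import cv2
import numpy as np

def erase_form_lines(binary, max_line_width=3):
    """Detect dominant straight lines with the Hough transform and erase
    pixels along them, but keep columns where the vertical ink run is
    thicker than a typical form line (a crude stand-in for stroke
    preservation). `binary` is a uint8 image with ink = 255."""
    out = binary.copy()
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=200,
                            minLineLength=binary.shape[1] // 2, maxLineGap=5)
    if lines is None:
        return out
    for x1, y1, x2, y2 in lines[:, 0]:
        for x in range(min(x1, x2), max(x1, x2) + 1):
            y = int(round(y1 + (y2 - y1) * (x - x1) / max(x2 - x1, 1)))
            # Measure the vertical ink run passing through (x, y).
            top, bot = y, y
            while top > 0 and out[top - 1, x]:
                top -= 1
            while bot < out.shape[0] - 1 and out[bot + 1, x]:
                bot += 1
            if bot - top + 1 <= max_line_width:   # thin run: pure form line
                out[top:bot + 1, x] = 0           # erase it
            # thicker runs are assumed to be character strokes and kept
    return out
```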
Real time automated inspection
Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.
1985-05-21
A method and apparatus are described relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.
Klein, Thomas; Wieser, Wolfgang; Reznicek, Lukas; Neubauer, Aljoscha; Kampik, Anselm; Huber, Robert
2013-01-01
We analyze the benefits and problems of in vivo optical coherence tomography (OCT) imaging of the human retina at A-scan rates in excess of 1 MHz, using a 1050 nm Fourier-domain mode-locked (FDML) laser. Different scanning strategies enabled by MHz OCT line rates are investigated, and a simple multi-volume data processing approach is presented. In-vivo OCT of the human ocular fundus is performed at different axial scan rates of up to 6.7 MHz. High quality non-mydriatic retinal imaging over an ultra-wide field is achieved by a combination of several key improvements compared to previous setups. For the FDML laser, long coherence lengths and 72 nm wavelength tuning range are achieved using a chirped fiber Bragg grating in a laser cavity at 419.1 kHz fundamental tuning rate. Very large data sets can be acquired with sustained data transfer from the data acquisition card to host computer memory, enabling high-quality averaging of many frames and of multiple aligned data sets. Three imaging modes are investigated: Alignment and averaging of 24 data sets at 1.68 MHz axial line rate, ultra-dense transverse sampling at 3.35 MHz line rate, and dual-beam imaging with two laser spots on the retina at an effective line rate of 6.7 MHz.
NASA Astrophysics Data System (ADS)
Huke, Philipp; Tal-Or, Lev; Sarmiento, Luis Fernando; Reiners, Ansgar
2016-07-01
Hollow cathode discharge lamps (HCLs) have been successfully used in recent years as calibration sources for optical astronomical spectrographs. The numerous narrow metal lines have stable wavelengths, which makes them well suited for m/s calibration accuracy of high-resolution spectrographs, while the buffer-gas lines are less stable and less useful. Accordingly, an important property is the metal-to-gas line-strength ratio (Rmetal/gas). Processes inside the lamp cause the light to be emitted from different regions between the cathode and the anode, leading to the emission of different beams with different values of Rmetal/gas. We used commercially available HCLs to measure and characterize these beams with respect to their spatial distribution, their angle of propagation relative to the optical axis, and their values of Rmetal/gas. We conclude that good imaging of an HCL into a fiber-fed spectrograph would use an aperture close to its front window in order to filter out the parts of the beam with low Rmetal/gas, and a lens to collimate the important central beam. We show that Rmetal/gas can be further improved with only minor adjustments of the imaging parameters, and that the imaging scheme that yields the highest Rmetal/gas does not necessarily provide the highest flux.
Exploration of Mars by Mariner 9 - Television sensors and image processing.
NASA Technical Reports Server (NTRS)
Cutts, J. A.
1973-01-01
Two cameras equipped with selenium-sulfur slow-scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement, and transformation to standard map projections have been routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots and computer mosaics. Information on enhancements as well as important picture geometric information was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.
About Non-Line-Of-Sight Satellite Detection and Exclusion in a 3D Map-Aided Localization Algorithm
Peyraud, Sébastien; Bétaille, David; Renault, Stéphane; Ortiz, Miguel; Mougel, Florian; Meizel, Dominique; Peyret, François
2013-01-01
Reliable GPS positioning in city environments is a key issue: signals are prone to multipath, and satellite geometry is poor in many streets. Using a 3D urban model to forecast satellite visibility in urban contexts in order to improve GPS localization is the main topic of the present article. The core of the method is the processing of a virtual image that detects and eliminates possibly faulty measurements. This image is generated using the position estimated a priori by the navigation process itself, under road constraints. This position is then updated by measurements to line-of-sight satellites only. This closed-loop real-time processing has shown promising first full-scale test results. PMID:23344379
NASA Astrophysics Data System (ADS)
Graves, Mark; Smith, Alexander; Batchelor, Bruce G.; Palmer, Stephen C.
1994-10-01
In the food industry there is an ever increasing need to control and monitor food quality. In recent years, fully automated x-ray inspection systems have been used to inspect food on-line for foreign-body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained from other inspection processes, which makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the best possible image. In this paper we present a method of analyzing the x-ray imaging system in order to assess the contrast obtained when viewing small defects.
Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-11-01
99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise-filtering method based on local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set contained the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences in image quality between images processed with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with the 5- and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the (Jong-Sen) Lee filter was thus 7×7 pixels, which yielded 99mTc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
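The paper evaluates a local-statistics (Lee-type) filter at several mask sizes; a minimal sketch of such a filter follows, assuming additive noise and a crude global noise-variance estimate. It is illustrative only and may differ from the exact variant used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, mask=7, noise_var=None):
    """Local-statistics (Lee-type) smoothing: blend each pixel between the
    local mean and its original value according to the local signal variance
    within a mask x mask window."""
    img = img.astype(float)
    mean = uniform_filter(img, size=mask)
    mean_sq = uniform_filter(img ** 2, size=mask)
    var = np.maximum(mean_sq - mean ** 2, 0)
    if noise_var is None:
        noise_var = np.median(var)          # crude global noise estimate
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

# The study found a 7x7 mask optimal, e.g. smooth = lee_filter(counts, mask=7)
```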
Unbiased roughness measurements: the key to better etch performance
NASA Astrophysics Data System (ADS)
Liang, Andrew; Mack, Chris; Sirard, Stephen; Liang, Chen-wei; Yang, Liu; Jiang, Justin; Shamma, Nader; Wise, Rich; Yu, Jengyi; Hymes, Diane
2018-03-01
Edge placement error (EPE) has become an increasingly critical metric to enable Moore's Law scaling. Stochastic variations, as characterized for lines by line width roughness (LWR) and line edge roughness (LER), are dominant factors in EPE and known to increase with the introduction of EUV lithography. However, despite recommendations from ITRS, NIST, and SEMI standards, the industry has not agreed upon a methodology to quantify these properties. Thus, differing methodologies applied to the same image often result in different roughness measurements and conclusions. To standardize LWR and LER measurements, Fractilia has developed an unbiased measurement that uses a raw unfiltered line scan to subtract out image noise and distortions. By using Fractilia's inverse linescan model (FILM) to guide development, we will highlight the key influences of roughness metrology on plasma-based resist smoothing processes. Test wafers were deposited to represent a 5 nm node EUV logic stack. The patterning stack consists of a core Si target layer with spin-on carbon (SOC) as the hardmask and spin-on glass (SOG) as the cap. Next, these wafers were exposed through an ASML NXE 3350B EUV scanner with an advanced chemically amplified resist (CAR). Afterwards, these wafers were etched through a variety of plasma-based resist smoothing techniques using a Lam Kiyo conductor etch system. Dense line and space patterns on the etched samples were imaged through advanced Hitachi CDSEMs and the LER and LWR were measured through both Fractilia and an industry standard roughness measurement software. By employing Fractilia to guide plasma-based etch development, we demonstrate that Fractilia produces accurate roughness measurements on resist in contrast to an industry standard measurement software. These results highlight the importance of subtracting out SEM image noise to obtain quicker developmental cycle times and lower target layer roughness.
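Fractilia's FILM algorithm is proprietary; purely as a hedged illustration of the underlying idea, that measured roughness variance is the sum of true roughness and SEM noise, and that a white-noise floor can be estimated from the high-frequency end of the power spectral density and subtracted, a sketch might look like this (not the FILM method, and all parameters are illustrative):

```python
import numpy as np

def unbiased_lwr(widths_nm, noise_fraction=0.25):
    """Estimate a 3-sigma LWR with a white-noise floor removed: the floor is
    taken from the highest-frequency part of the periodogram and subtracted
    from the total power before converting back to a sigma."""
    w = np.asarray(widths_nm, dtype=float)
    w = w - w.mean()
    psd = np.abs(np.fft.rfft(w)) ** 2          # unnormalised periodogram
    total_power = psd[1:].sum()                # ignore the DC bin
    n_hi = max(int(len(psd) * noise_fraction), 1)
    noise_power = psd[-n_hi:].mean() * (len(psd) - 1)   # flat floor, all bins
    biased = 3 * w.std()
    unbiased = biased * np.sqrt(max(total_power - noise_power, 0) / total_power)
    return biased, unbiased
```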
NASA Astrophysics Data System (ADS)
Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.
2010-01-01
This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course, while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as way points, with given GPS coordinates and avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities including image processing, sensor interfacing and data processing, path planning and navigation algorithms and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, an NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping and is equipped with a real-time processor, an FPGA and modular input/output. Under the current system, the real-time processor handles the path planning and navigation algorithms, while the FPGA gathers and processes sensor data. This setup leaves the laptop to focus on running the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real-time processor due to the deterministic nature of its operation. The implementation of this architecture required exploration of various inter-system communication techniques. Data transfer between the laptop and the real-time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware-based FPGA to further speed up the operations of the vehicle.
GAP: yet another image processing system for solar observations.
NASA Astrophysics Data System (ADS)
Keller, C. U.
GAP is a versatile, interactive image processing system for analyzing solar observations, in particular extended time sequences, and for preparing publication quality figures. It consists of an interpreter that is based on a language with a control flow similar to PASCAL and C. The interpreter may be accessed from a command line editor and from user-supplied functions, procedures, and command scripts. GAP is easily expandable via external FORTRAN programs that are linked to the GAP interface routines. The current version of GAP runs on VAX, DECstation, Sun, and Apollo computers. Versions for MS-DOS and OS/2 are in preparation.
Ripple-aware optical proximity correction fragmentation for back-end-of-line designs
NASA Astrophysics Data System (ADS)
Wang, Jingyu; Wilkinson, William
2018-01-01
Accurate characterization of image rippling is critical in early detection of back-end-of-line (BEOL) patterning weakpoints, as most defects are strongly associated with excessive rippling that does not get effectively compensated by optical proximity correction (OPC). We correlate image contour with design shapes to account for design geometry-dependent rippling signature, and explore the best practice of OPC fragmentation for BEOL geometries. Specifically, we predict the optimum contour as allowed by the lithographic process and illumination conditions and locate ripple peaks, valleys, and inflection points. This allows us to identify potential process weakpoints and segment the mask accordingly to achieve the best correction results.
3-D interactive visualisation tools for HI spectral line imaging
NASA Astrophysics Data System (ADS)
van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.
2017-06-01
Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.
Backscatter absorption gas imaging system
McRae, Jr., Thomas G.
1985-01-01
A video imaging system for detecting hazardous gas leaks. Visual displays of invisible gas clouds are produced by radiation augmentation of the field of view of an imaging device by radiation corresponding to an absorption line of the gas to be detected. The field of view of an imager is irradiated by a laser. The imager receives both backscattered laser light and background radiation. When a detectable gas is present, the backscattered laser light is highly attenuated, producing a region of contrast or shadow on the image. A flying spot imaging system is utilized to synchronously irradiate and scan the area to lower laser power requirements. The imager signal is processed to produce a video display.
Image classification of human carcinoma cells using complex wavelet-based covariance descriptors.
Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A Enis; Cetin-Atalay, Rengul
2013-01-01
Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-CWT) coefficients and several morphological attributes are computed. Directionally selective DT-CWT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html.
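The DT-CWT features and morphological attributes used in the paper are not reproduced here; the sketch below only illustrates the region covariance descriptor idea with simple stand-in per-pixel features, and notes where an RBF-kernel SVM would be trained. Names and feature choices are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def covariance_descriptor(patch):
    """Region covariance descriptor: build per-pixel feature vectors
    (intensity and finite-difference gradients here, as simple stand-ins
    for DT-CWT coefficients) and summarise the region by the upper
    triangle of their covariance matrix."""
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([patch.ravel(), gx.ravel(), gy.ravel(),
                      np.abs(gx).ravel() + np.abs(gy).ravel()], axis=1)
    cov = np.cov(feats, rowvar=False)
    iu = np.triu_indices_from(cov)
    return cov[iu]

# Descriptors from labelled subwindows would then train an RBF-kernel SVM,
# e.g.: clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
```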
Acoustic Reverse Time Migration of the Cascadia Subduction Zone Dataset
NASA Astrophysics Data System (ADS)
Jia, L.; Mallick, S.
2017-12-01
Reverse time migration (RTM) is a wave-equation-based migration method that provides more accurate images than ray-based migration methods, especially for structures in deep areas, making it an effective tool for imaging the subduction plate boundary. In this work, we extend the work of Fortin (2015) and apply acoustic finite-element RTM to the Cascadia Subduction Zone (CSZ) dataset. The dataset was acquired by the Cascadia Open-Access Seismic Transects (COAST) program, targeting the megathrust in the central Cascadia subduction zone (Figure 1). The data on a 2D seismic reflection line that crosses the Juan de Fuca/North American subduction boundary off Washington (Line 5) were pre-processed and run through Kirchhoff prestack depth migration (PSDM). Figure 2 compares the depth image of Line 5 of the CSZ data using Kirchhoff PSDM (top) and RTM (bottom). In both images, the subducting plate is indicated with yellow arrows. Notice that the RTM image is superior to the PSDM image in several respects. First, the plate boundary appears much more continuous in the RTM image than in the PSDM image. Second, the RTM image indicates the subducting plate is relatively smooth on the seaward (west) side between 0-50 km. Within the deformation front of the accretionary prism (50-80 km), the RTM image shows substantial roughness in the subducting plate. These features are not clear in the PSDM image. Third, the RTM image shows many fine structures below the subducting plate which are almost absent in the PSDM image. Finally, the RTM image indicates that the plate is gently dipping within the undeformed sediment (0-50 km) and becomes steeply dipping beyond 50 km as it enters the deformation front of the accretionary prism. Although the same conclusion could be drawn from the discontinuous plate boundary imaged by PSDM, the RTM results are far more convincing.
3D-profile measurement of advanced semiconductor features by using FIB as reference metrology
NASA Astrophysics Data System (ADS)
Takamasu, Kiyoshi; Iwaki, Yuuki; Takahashi, Satoru; Kawada, Hiroki; Ikota, Masami
2017-03-01
A novel method with sub-nanometer uncertainty for 3D-profile measurement and LWR (Line Width Roughness) measurement, using FIB (Focused Ion Beam) processing together with TEM (Transmission Electron Microscope) and CD-SEM (Critical Dimension Scanning Electron Microscope) imaging, is proposed to standardize 3D-profile measurement through reference metrology. In this article, we apply the methodology to line profile and roughness measurements of advanced FinFET (Fin-shaped Field-Effect Transistor) features. The FinFET features are horizontally sliced into a thin specimen by an FIB micro-sampling system. Horizontal images of the specimens are then obtained by a planar TEM, and LWR is calculated from the edge positions on the TEM images. Moreover, we have already demonstrated a novel on-wafer 3D-profile metrology, the "FIB-to-CDSEM method", based on FIB slope cutting and CD-SEM measurement. In this method, a region a few micrometers wide on the wafer is coated and cut at a 45-degree slope using the FIB tool. The wafer is then transferred to a CD-SEM to measure the cross-section image by top-down CD-SEM measurement. We applied the FIB-to-CDSEM method to a CMOS image sensor feature. The 45-degree slope-cut surface is observed using AFM. The surface profile of the slope-cut surface and the line profiles are analyzed to improve the accuracy of the FIB-to-CDSEM method.
Identification of cortex in magnetic resonance images
NASA Astrophysics Data System (ADS)
VanMeter, John W.; Sandon, Peter A.
1992-06-01
The overall goal of the work described here is to make available to the neurosurgeon in the operating room an on-line, three-dimensional, anatomically labeled model of the patient's brain, based on pre-operative magnetic resonance (MR) images. A stereotactic operating microscope is currently in experimental use, which allows structures that have been manually identified in MR images to be made available on-line. We have been working to enhance this system by combining image processing techniques applied to the MR data with an anatomically labeled 3-D brain model developed from the Talairach and Tournoux atlas. Here we describe the process of identifying cerebral cortex in the patient MR images. MR images of brain tissue are reasonably well described by material mixture models, which identify each pixel as corresponding to one of a small number of materials, or as being a composite of two materials. Our classification algorithm consists of three steps. First, we apply hierarchical, adaptive grayscale adjustments to correct for nonlinearities in the MR sensor. The goal of this preprocessing step, based on the material mixture model, is to make the grayscale distribution of each tissue type constant across the entire image. Next, we perform an initial classification of all tissue types according to gray level, using a sum-of-Gaussians approximation of the histogram. Finally, we identify pixels corresponding to cortex by taking into account the spatial patterns characteristic of this tissue. For this purpose, we use a set of matched filters to identify image locations having the appropriate configuration of gray matter (cortex), cerebrospinal fluid and white matter, as determined by the previous classification step.
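As a hedged sketch of the middle step only, the initial gray-level classification, one can fit a sum-of-Gaussians (Gaussian mixture) model to the intensity distribution and label each pixel by its most probable component; the sensor-nonlinearity correction and matched-filtering stages are omitted, and the implementation below is not the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_by_graylevel(mr_slice, n_tissues=3):
    """Fit a sum-of-Gaussians model to the intensity distribution and label
    each pixel with its most probable component (e.g. CSF, gray matter,
    white matter); a stand-in for the paper's histogram fit."""
    x = mr_slice.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_tissues, random_state=0).fit(x)
    labels = gmm.predict(x).reshape(mr_slice.shape)
    # Reorder class indices by mean intensity so the labels are reproducible.
    order = np.argsort(gmm.means_.ravel())
    return np.argsort(order)[labels]
```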
In-flight edge response measurements for high-spatial-resolution remote sensing systems
NASA Astrophysics Data System (ADS)
Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie
2002-09-01
In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
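A minimal sketch of the tilted-edge idea follows: pixel samples are projected onto the edge normal to build an oversampled edge spread function, which is differentiated into a line spread function whose FWHM is read off. The edge angle is assumed known here, whereas the paper estimates slope and orientation in a single computation; the binning and FWHM estimates below are deliberately crude and purely illustrative.

```python
import numpy as np

def fwhm_from_edge(roi, edge_angle_deg, bin_size=0.1):
    """Oversampled ESF from a tilted edge: project pixel positions onto the
    edge normal, bin, differentiate to get the LSF, and return a crude FWHM
    in pixel units."""
    rows, cols = np.indices(roi.shape)
    tan = np.tan(np.radians(edge_angle_deg))
    dist = cols - tan * rows            # distance from the tilted edge (pixels)
    dist -= dist.mean()
    order = np.argsort(dist.ravel())
    d, v = dist.ravel()[order], roi.ravel()[order].astype(float)
    bins = np.arange(d.min(), d.max() + bin_size, bin_size)
    idx = np.digitize(d, bins)
    esf = np.array([v[idx == i].mean() for i in np.unique(idx)])
    lsf = np.abs(np.diff(esf))          # LSF = derivative of the ESF
    half = lsf.max() / 2
    above = np.where(lsf >= half)[0]
    return (above[-1] - above[0] + 1) * bin_size

# e.g. fwhm_pixels = fwhm_from_edge(edge_roi, edge_angle_deg=5.0)
```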
NASA Technical Reports Server (NTRS)
2003-01-01
With NASA on its side, Positive Systems, Inc., of Whitefish, Montana, is veering away from the industry standards defined for producing and processing remotely sensed images. A top developer of imaging products for geographic information system (GIS) and computer-aided design (CAD) applications, Positive Systems is bucking traditional imaging concepts with a cost-effective and time-saving software tool called Digital Images Made Easy (DIME(trademark)). Like piecing a jigsaw puzzle together, DIME can integrate a series of raw aerial or satellite snapshots into a single, seamless panoramic image, known as a 'mosaic.' The 'mosaicked' images serve as useful backdrops to GIS maps - which typically consist of line drawings called 'vectors' - by allowing users to view a multidimensional map that provides substantially more geographic information.
AMISS - Active and passive MIcrowaves for Security and Subsurface imaging
NASA Astrophysics Data System (ADS)
Soldovieri, Francesco; Slob, Evert; Turk, Ahmet Serdar; Crocco, Lorenzo; Catapano, Ilaria; Di Matteo, Francesca
2013-04-01
The FP7-IRSES project AMISS (Active and passive MIcrowaves for Security and Subsurface imaging) is based on a well-coordinated network of research institutions from the EU, Associated and Third Countries (the National Research Council of Italy, Technische Universiteit Delft in The Netherlands, Yildiz Technical University in Turkey, Bauman Moscow State Technical University in Russia, the Usikov Institute for Radio-physics and Electronics and the State Research Centre of Superconductive Radioelectronics "Iceberg" in Ukraine, and the University of Sao Paulo in Brazil), with the aim of achieving scientific advances in microwave and millimeter-wave imaging systems and techniques for security and safety issues. The partners are leaders in the areas of passive and active imaging and share their complementary knowledge to address two main research lines. The first concerns the design, characterization and performance evaluation of new passive and active microwave devices, sensors and measurement set-ups able to mitigate clutter and increase information content. The second line addresses the requirements for making state-of-the-art processing tools compliant with the instrumentation developed in the first line, suitable for electromagnetically complex scenarios, and able to exploit the unexplored possibilities offered by the new instrumentation. The main goals of the project are: 1) development/improvement and characterization of new sensors and systems for active and passive microwave imaging; 2) set-up, analysis and validation of state-of-the-art and novel data processing approaches for GPR in critical infrastructure and subsurface imaging; 3) integration of state-of-the-art and novel imaging hardware and characterization approaches to tackle realistic situations in security, safety and subsurface prospecting applications; 4) development and feasibility study of bio-radar technology (system and data processing) for vital-sign detection and the detection/characterization of human beings in complex scenarios. These goals are to be reached through a plan of research activities and researcher secondments covering a period of three years. ACKNOWLEDGMENTS This research has been performed in the framework of the "Active and Passive Microwaves for Security and Subsurface imaging (AMISS)" EU 7th Framework Marie Curie Actions IRSES project (PIRSES-GA-2010-269157).
Multispectral and geomorphic studies of processed Voyager 2 images of Europa
NASA Technical Reports Server (NTRS)
Meier, T. A.
1984-01-01
High resolution images of Europa taken by the Voyager 2 spacecraft were used to study a portion of Europa's dark lineations and the major white line feature Agenor Linea. Initial image processing of images 1195J2-001 (violet filter), 1198J2-001 (blue filter), 1201J2-001 (orange filter), and 1204J2-001 (ultraviolet filter) was performed at the U.S.G.S. Branch of Astrogeology in Flagstaff, Arizona. Processing was completed through the stages of image registration and color ratio image construction. Pixel printouts were used in a new technique of linear feature profiling to compensate for image misregistration through the mapping of features on the printouts. In all, 193 dark lineation segments were mapped and profiled. The more accurate multispectral data derived by this method was plotted using a new application of the ternary diagram, with orange, blue, and violet relative spectral reflectances serving as end members. Statistical techniques were then applied to the ternary diagram plots. The image products generated at LPI were used mainly to cross-check and verify the results of the ternary diagram analysis.
Inselect: Automating the Digitization of Natural History Collections
Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.
2015-01-01
The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208
Single-photon imager based on a superconducting nanowire delay line
NASA Astrophysics Data System (ADS)
Zhao, Qing-Yuan; Zhu, Di; Calandri, Niccolò; Dane, Andrew E.; McCaughan, Adam N.; Bellei, Francesco; Wang, Hao-Zhu; Santavicca, Daniel F.; Berggren, Karl K.
2017-03-01
Detecting spatial and temporal information of individual photons is critical to applications in spectroscopy, communication, biological imaging, astronomical observation and quantum-information processing. Here we demonstrate a scalable single-photon imager using a single continuous superconducting nanowire that is not only a single-photon detector but also functions as an efficient microwave delay line. In this context, photon-detection pulses are guided in the nanowire and enable the readout of the position and time of photon-absorption events from the arrival times of the detection pulses at the nanowire's two ends. Experimentally, we slowed down the velocity of pulse propagation to ∼2% of the speed of light in free space. In a 19.7 mm long nanowire that meandered across an area of 286 × 193 μm2, we were able to resolve ∼590 effective pixels with a temporal resolution of 50 ps (full width at half maximum). The nanowire imager presents a scalable approach for high-resolution photon imaging in space and time.
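The readout arithmetic described above can be written down directly: the difference of the two arrival times gives the position along the wire and their sum gives the absolute photon time. The sketch below uses the nanowire length and propagation velocity quoted in the abstract; everything else is illustrative.

```python
# Readout arithmetic for a two-ended delay line (illustrative numbers):
# a pulse created at position x along a wire of length L propagates to both
# ends at velocity v, so t1 = t0 + x/v and t2 = t0 + (L - x)/v.
L = 19.7e-3          # nanowire length (m), from the abstract
v = 0.02 * 3e8       # ~2% of the speed of light, from the abstract

def locate_photon(t1, t2):
    """Recover the absorption position x (m) and time t0 (s) from the two
    pulse arrival times at the nanowire ends."""
    x = (L + v * (t1 - t2)) / 2
    t0 = (t1 + t2 - L / v) / 2
    return x, t0

# Example: a photon absorbed mid-wire at t0 = 0 reaches both ends after
# (L/2)/v ~ 1.64 ns, so locate_photon returns x ~ L/2 and t0 ~ 0.
print(locate_photon(1.64167e-9, 1.64167e-9))
```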
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information of the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation based on a 600 recognition cycles test, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions which must be clinically controlled as in any industrial robotic vision system.
Welding deviation detection algorithm based on extremum of molten pool image contour
NASA Astrophysics Data System (ADS)
Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang
2016-01-01
Welding deviation detection is the basis of robotic seam-tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain a wealth of information that is very important for the control of weld seam tracking. The physical meaning of the curvature extrema of the molten pool contour is revealed by studying the molten pool images: the deviation information points of the welding wire center and the molten tip center are the maximum and the local maximum of the contour curvature, and the horizontal welding deviation is the position difference of these two extremum points. A new method of weld deviation detection is presented, covering the preprocessing of molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and obtaining the contour extremum points is the key. The contour images can be extracted with a discrete dyadic wavelet transform and divided into two sub-contours, corresponding to the welding wire and the molten tip. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for a plane curve, and the two curvature extremum points are the features needed for the welding deviation calculation. The results of the tests and analyses show that the maximum error of the on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the requirements of real-time pipeline welding control at speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
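A hedged sketch of the core step follows: curvature is approximated at each contour point from neighbouring points (a three-point circumscribed-circle formula, standing in for the paper's multi-point formula), and the horizontal deviation is taken as the difference of the x-positions of the two curvature extrema. Contour extraction and preprocessing are not shown, and all names are illustrative.

```python
import numpy as np

def discrete_curvature(contour, k=5):
    """Approximate curvature at each point of a closed (N, 2) contour from
    the points k steps away on either side, using the circumscribed-circle
    relation kappa = 4 * area / (a * b * c)."""
    pts = np.asarray(contour, dtype=float)
    prev, nxt = np.roll(pts, k, axis=0), np.roll(pts, -k, axis=0)
    a = np.linalg.norm(pts - prev, axis=1)
    b = np.linalg.norm(nxt - pts, axis=1)
    c = np.linalg.norm(nxt - prev, axis=1)
    cross = ((pts - prev)[:, 0] * (nxt - pts)[:, 1]
             - (pts - prev)[:, 1] * (nxt - pts)[:, 0])   # 2 * signed area
    return 2 * cross / np.maximum(a * b * c, 1e-12)

def welding_deviation(wire_contour, tip_contour, k=5):
    """Horizontal deviation = x-position of the wire contour's curvature
    extremum minus that of the molten-tip contour's extremum."""
    wire = np.asarray(wire_contour, dtype=float)
    tip = np.asarray(tip_contour, dtype=float)
    x_wire = wire[np.argmax(np.abs(discrete_curvature(wire, k))), 0]
    x_tip = tip[np.argmax(np.abs(discrete_curvature(tip, k))), 0]
    return x_wire - x_tip
```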
NASA Astrophysics Data System (ADS)
Bian, A.; Gantela, C.
2014-12-01
Strong multiples were observed in marine seismic data from the Los Angeles Regional Seismic Experiment (LARSE). In conventional ray-based or one-way wave-equation depth imaging methods it is crucial to eliminate these multiples. Since multiples carry information about the target zone along their travel paths, however, it is possible to use them as signal to improve the illumination coverage and thus enhance the imaging of structural boundaries. Reverse time migration including multiples is a two-way wave-equation-based prestack depth imaging method that uses both primaries and multiples to map structural boundaries. Several factors, including the source wavelet, velocity model, background noise, data acquisition geometry and preprocessing workflow, may influence the quality of the image. The source wavelet is estimated from the direct arrival of the marine seismic data. The migration velocity model is derived from an integrated model-building workflow, and the sharp velocity interfaces near the sea bottom must be preserved in order to generate multiples in the forward and backward propagation steps. The strong-amplitude, low-frequency marine background noise needs to be removed before the final imaging step. High-resolution reverse time image sections of LARSE Lines 1 and 2 show five interfaces: the sea bottom, the base of the sedimentary basins, the top of the Catalina Schist, a deep layer, and a possible pluton boundary. The Catalina Schist shows highs at the San Clemente ridge, Emery Knoll and Catalina Ridge, under Catalina Basin on both lines, and a minor high under Avalon Knoll. The high of the anticlinal fold in Line 1 is under the north edge of Emery Knoll and under the San Clemente fault zone. An area devoid of any reflection features is interpreted as the sides of an igneous plume.
NASA Astrophysics Data System (ADS)
Rust, Thomas Ludwell
Explosive event is the name given to slit spectrograph observations of high spectroscopic velocities in solar transition region spectral lines. Explosive events show much variety that cannot yet be explained by a single theory. It is commonly believed that explosive events are powered by magnetic reconnection. The evolution of the line core appears to be an important indicator of which particular reconnection process is at work. The Multi-Order Solar Extreme Ultraviolet Spectrograph (MOSES) is a novel slitless spectrograph designed for imaging spectroscopy of solar extreme ultraviolet (EUV) spectral lines. The spectrograph design forgoes a slit and images instead at three spectral orders of a concave grating. The images are formed simultaneously, so the resulting spatial and spectral information is co-temporal over the 20' x 10' instrument field of view. This is an advantage over slit spectrographs, which build a field of view one narrow slit at a time. The cost of co-temporal imaging spectroscopy with MOSES is increased data complexity relative to slit spectrograph data: the MOSES data must undergo tomographic inversion to recover line profiles. I use the unique data from MOSES to study transition region explosive events in the He II 304 Å spectral line. I identify 41 examples of explosive events, including 5 blueshifted jets, 2 redshifted jets, and 10 bi-directional jets. Typical Doppler speeds are approximately 100 km s-1. I show the early development of one blue jet and one bi-directional jet and find no acceleration phase at the onset of the event. The bi-directional jets are interesting because they are predicted in models of Petschek reconnection in the transition region. I develop an inversion algorithm for the MOSES data and test it on synthetic observations of a bi-directional jet. The inversion is based on a multiplicative algebraic reconstruction technique (MART) and successfully reproduces synthetic line profiles. I then use the inversion to study the time evolution of a bi-directional jet. The inverted line profiles show fast Doppler-shifted components and no measurable line core emission. The blue and red wings of the jet show increasing spatial separation with time.
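The MOSES-specific inversion is not reproduced here; as a generic illustration of a multiplicative algebraic reconstruction technique for a linear problem y ≈ A x, one damped row-by-row update loop might look like the sketch below (hypothetical function, no instrument-specific constraints).

```python
import numpy as np

def mart(A, y, n_iter=50, relax=1.0):
    """Multiplicative algebraic reconstruction technique for y ~ A @ x with
    non-negative A and y: start from a flat guess and repeatedly rescale x by
    the (measured / predicted) ratio of each projection, row by row."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if pred <= 0:
                continue
            ratio = y[i] / pred
            # Multiplicative update, damped by the relaxation parameter and
            # weighted by how strongly each unknown contributes to row i.
            x *= ratio ** (relax * A[i] / max(A[i].max(), 1e-12))
        # (convergence checks omitted in this sketch)
    return x
```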
Dynamic deformation inspection of a human arm by using a line-scan imaging system
NASA Astrophysics Data System (ADS)
Hu, Eryi
2009-11-01
A line-scan imaging system is used for dynamic deformation measurement of a human arm as the muscle contracts and relaxes. The measurement principle is based on projection grating profilometry, and the measuring system consists of a line-scan CCD camera, a projector, optical lenses and a personal computer. The arm under test is placed on a reference plane, and a sinusoidal grating is projected onto the object surface and the reference plane at an incidence angle. The deformed fringe pattern along the same line of the moving arm is captured by the line-scan CCD camera in free-trigger mode and recorded on the personal computer for processing. A fast Fourier transform, combined with filtering and spectrum shifting, is used to extract the phase information caused by the profile of the detected object. The object surface profile can then be obtained from the geometric relationship between the fringe deformation and the object surface height, and the deformation process can be reconstructed line by line. Some experimental results are presented to prove the feasibility of the inspection system.
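A hedged sketch of the Fourier-transform profilometry step for a single scan line follows: FFT, band-pass the fundamental fringe lobe, shift it to zero frequency, inverse FFT and take the unwrapped angle. The carrier bin and bandwidth are assumed known; the phase-to-height conversion, which depends on the system geometry, is omitted, and all names are illustrative.

```python
import numpy as np

def ftp_phase(line, carrier_freq, half_width):
    """Extract the phase modulation of a sinusoidal fringe pattern along one
    scan line (Fourier-transform profilometry): keep only the fundamental
    lobe around the carrier bin, shift it to zero frequency, and take the
    unwrapped angle. Assumes carrier_freq > half_width (both in FFT bins)."""
    n = len(line)
    spec = np.fft.fft(line - np.mean(line))
    keep = np.zeros(n, dtype=complex)
    lo, hi = carrier_freq - half_width, carrier_freq + half_width
    keep[lo:hi + 1] = spec[lo:hi + 1]          # band-pass the +1 order
    shifted = np.roll(keep, -carrier_freq)     # move the carrier to DC
    phase = np.angle(np.fft.ifft(shifted))
    return np.unwrap(phase)                    # phase map for this line
```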
Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L
2018-07-01
To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms were evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on the VGA score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05), whereas the PIP detector had significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, which is potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting the initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.
Research and application on imaging technology of line structure light based on confocal microscopy
NASA Astrophysics Data System (ADS)
Han, Wenfeng; Xiao, Zexin; Wang, Xiaofen
2009-11-01
In 2005, the theory of line structured light confocal microscopy was first proposed in China by Xingyu Gao and Zexin Xiao at the Institute of Opt-mechatronics of Guilin University of Electronic Technology. Although the lateral resolution of line confocal microscopy can only reach or approach that of traditional point confocal microscopy, it has two advantages over the traditional point approach: first, by substituting line scanning for point scanning, plane imaging requires only one-dimensional scanning, which greatly improves imaging speed and simplifies the scanning mechanism; second, light throughput is greatly improved by substituting a detection slit for the detection pinhole, so a low-illumination CCD can be used directly to collect images instead of a photoelectric intensifier. In order to apply line confocal microscopy in a practical system, and based on further research on its theory, an imaging technique using line structured light is put forward under the condition that confocal microscopy is implemented. Its validity and reliability are verified by experiments.
NASA Technical Reports Server (NTRS)
Masuoka, E.
1985-01-01
Systematic noise is present in Airborne Imaging Spectrometer (AIS) data collected on October 26, 1983 and May 5, 1984 in grating position 0 (1.2 to 1.5 microns). In the October data set the noise occurs as 135 scan lines of low DNs every 270 scan lines. The noise is particularly bad in bands nine through thirty, restricting effective analysis to at best ten of the 32 bands. In the May data the regions of severe noise have been eliminated, but systematic noise is present with three frequencies (3, 106 and 200 scan lines) in all thirty-two bands. The periodic nature of the noise in both data sets suggests that it could be removed as part of routine processing. This is necessary before classification routines or statistical analyses are used with these data.
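Since the noise periods are known, a simple row-direction notch filter is one way such striping could be removed as part of routine processing. The sketch below is an assumption-laden illustration, not the processing actually applied to the AIS data: it treats the stripes as a brightness offset shared across each scan line and notches the known periods out of the row-mean profile.

```python
import numpy as np

def remove_periodic_row_noise(band, periods_in_lines):
    """Suppress periodic scan-line brightness noise in one spectral band (sketch).

    band             : 2-D array (lines x samples) of DN values
    periods_in_lines : iterable of noise periods, e.g. (3, 106, 200) scan lines
    """
    n_lines = band.shape[0]
    # Row means capture the striping signal shared by all samples in a line
    row_profile = band.mean(axis=1)
    spectrum = np.fft.rfft(row_profile - row_profile.mean())
    freqs = np.fft.rfftfreq(n_lines)

    for period in periods_in_lines:
        f0 = 1.0 / period
        # Zero the spectral bins nearest each known noise frequency
        notch = np.abs(freqs - f0) <= (1.0 / n_lines)
        spectrum[notch] = 0.0

    cleaned_profile = np.fft.irfft(spectrum, n_lines) + row_profile.mean()
    correction = row_profile - cleaned_profile
    # Subtract the estimated striping from every sample in the affected line
    return band - correction[:, None]
```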
Far-infrared and 3D imaging for doneness assessment in chicken breast
NASA Astrophysics Data System (ADS)
Tao, Yang; Ibarra, Juan G.
2001-03-01
Sensor fusion of infrared imaging and range imaging was proposed to estimate internal temperature of just-cooked chicken breasts. An infrared camera operating at 8-12 microns registered the surface temperature of cooked meat samples, while a single line structured light system located the thickest region of the meat target. In this region of interest, a combined time series/neural network method is applied to correlate the internal and external temperatures during the cool-down process. Experimental verification in a pilot plant oven is presented. To ensure food safety, a mandatory regulation requires all poultry processors in the U.S.A. to verify that all ready-to-eat products reach a minimum endpoint temperature (71 °C for chicken breast), but no current assay can non-invasively inspect all the samples. The proposed system has the potential for on-line inspection of ready-to-eat meat for food quality and safety.
NASA Astrophysics Data System (ADS)
Kim, Moon Sung; Lee, Kangjin; Chao, Kaunglin; Lefcourt, Alan; Cho, Byung-Kwan; Jun, Won
We developed a push-broom, line-scan imaging system capable of simultaneous measurements of reflectance and fluorescence. The system allows multitasking inspection of quality and safety attributes of apples owing to its ability to capture fluorescence and reflectance simultaneously and its selectivity in multispectral bands. A multitasking image-based inspection system for on-line applications is suggested, in which a single imaging device performs a multitude of both safety and quality inspection tasks. The presented multitask inspection approach may provide an economically viable means for a number of food processing industries to adapt their operations to dynamic and specific inspection and sorting needs.
Live imaging of targeted cell ablation in Xenopus: a new model to study demyelination and repair
Kaya, F.; Mannioui, A.; Chesneau, A.; Sekizar, S.; Maillard, E.; Ballagny, C.; Houel-Renault, L.; Du Pasquier, D.; Bronchain, O.; Holtzmann, I.; Desmazieres, A.; Thomas, J.-L.; Demeneix, B. A.; Brophy, P. J.; Zalc, B.; Mazabraud, A.
2012-01-01
Live imaging studies of the processes of demyelination and remyelination have so far been technically limited in mammals. We have thus generated a Xenopus laevis transgenic line allowing live imaging and conditional ablation of myelinating oligodendrocytes throughout the central nervous system (CNS). In these transgenic pMBP-eGFP-NTR tadpoles the myelin basic protein (MBP) regulatory sequences, specific to mature oligodendrocytes, are used to drive expression of an eGFP (enhanced green fluorescent protein) reporter fused to the E. coli nitroreductase (NTR) selection enzyme. This enzyme converts the innocuous pro-drug metronidazole (MTZ) to a cytotoxin. Using two-photon imaging in vivo, we show that pMBP-eGFP-NTR tadpoles display a graded oligodendrocyte ablation in response to MTZ, which depends on the exposure time to MTZ. MTZ-induced cell death was restricted to oligodendrocytes, without detectable axonal damage. After cessation of MTZ treatment, remyelination proceeded spontaneously, but was strongly accelerated by retinoic acid. Altogether, these features establish the Xenopus pMBP-eGFP-NTR line as a novel in vivo model for the study of demyelination/remyelination processes and for large-scale screens of therapeutic agents promoting myelin repair. PMID:22973012
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Strack, Ruediger
1992-04-01
apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.
NASA Astrophysics Data System (ADS)
Weston, S. D.
2008-04-01
This thesis presents the design and development of a process to model Very Long Baseline Interferometry (VLBI) aperture synthesis antenna arrays. In line with the Auckland University of Technology (AUT) Institute for Radiophysics and Space Research (IRSR) aims to develop the knowledge, skills and experience within New Zealand, extensive use of existing radio astronomical software has been incorporated into the process, namely AIPS (Astronomical Imaging Processing System), MIRIAD (a radio interferometry data reduction package) and DIFMAP (a program for synthesis imaging of visibility data from interferometer arrays of radio telescopes). This process has been used to model various antenna array configurations for two proposed New Zealand antenna sites in a VLBI array configuration with existing Australian facilities and a possible antenna at Scott Base in Antarctica; the results are presented in an attempt to demonstrate the improvement to be gained by joint trans-Tasman VLBI observation. It is hoped these results and the process will assist the planning and placement of proposed New Zealand radio telescopes for cooperation with groups such as the Australian Long Baseline Array (LBA), others in the Pacific Rim and possibly globally, as well as potential future involvement of New Zealand with the SKA. The developed process has also been used to model a phased building schedule for the SKA in Australia and the addition of two antennas in New Zealand. This has been presented to the wider astronomical community via the Royal Astronomical Society of New Zealand Journal, and is summarized in this thesis with some additional material. A new measure of quality ("figure of merit") for comparing the original model image and the final CLEAN images, based on normalized 2-D cross-correlation, is evaluated as an alternative to the subjective visual image comparison undertaken to date by other groups. This new measure is then used in the presentation of the results to provide a quantitative comparison of the different array configurations modelled. Included in the process is the development of a new antenna array visibility program, based on a Perl script written by Prof Steven Tingay to plot antenna visibilities for the Australian Square Kilometre Array (SKA) proposal. This has been expanded and improved, removing the hard-coded assumptions for the SKA configuration and providing a new, useful and flexible program for the wider astronomical community. A prototype user interface using html/cgi/perl was developed for the process so that the underlying software packages can be served over the web to a user via an internet browser. This was used to demonstrate how a friendlier interface can be provided compared to the existing cumbersome and difficult command-line driven interfaces (although the command line can be retained for more experienced users).
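One simple zero-lag form of the normalized 2-D cross-correlation figure of merit could look like the sketch below; the thesis may compute it differently (e.g., over shifted lags or sub-regions), so this is only illustrative.

```python
import numpy as np

def normalized_cross_correlation(model_img, clean_img):
    """Zero-lag normalized 2-D cross-correlation between two images (sketch).

    Returns a value in [-1, 1]; 1 means the CLEAN image reproduces the
    model perfectly up to an affine brightness scaling.
    """
    a = np.asarray(model_img, dtype=float) - np.mean(model_img)
    b = np.asarray(clean_img, dtype=float) - np.mean(clean_img)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

# A figure of merit for an array configuration could then be the correlation of
# its CLEAN image with the input model, compared across candidate configurations.
```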
Use of landsat ETM+ SLC-off segment-based gap-filled imagery for crop type mapping
Maxwell, S.K.; Craig, M.E.
2008-01-01
Failure of the Scan Line Corrector (SLC) on the Landsat ETM+ sensor has had a major impact on many applications that rely on continuous medium resolution imagery to meet their objectives. The United States Department of Agriculture (USDA) Cropland Data Layer (CDL) program uses Landsat imagery as the primary source of data to produce crop-specific maps for 20 states in the USA. A new method has been developed to fill the image gaps resulting from the SLC failure to support the needs of Landsat users who require coincident spectral data, such as for crop type mapping and monitoring. We tested the new gap-fill method for a CDL crop type mapping project in eastern Nebraska. Scan line gaps were simulated on two Landsat 5 images (spring and late summer 2003) and then gap-filled using landscape boundary models, or segment models, derived from 1992 and 2002 Landsat images (used in the gap-fill process). Various date combinations of original and gap-filled images were used to derive crop maps using a supervised classification process. Overall kappa values were slightly higher for crop maps derived from SLC-off gap-filled images compared to crop maps derived from the original imagery (0.3% to 1.3% higher). Although the age of the segment model used to derive the SLC-off gap-filled product did not negatively impact the overall agreement, differences in individual cover type agreement did increase (from −0.8% to 1.6% using the 2002 segment model, and from −5.0% to 5.1% using the 1992 segment model). Classification agreement also decreased for most of the classes as the size of the segment used in the gap-fill process increased.
Precision targeting in guided munition using IR sensor and MmW radar
NASA Astrophysics Data System (ADS)
Sreeja, S.; Hablani, H. B.; Arya, H.
2015-10-01
Conventional munitions are not guided with sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a Precision Guided Munition (PGM) equipped with an infrared sensor and a millimeter wave radar [IR and MmW, for short]. Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight rate to intercept it, an Extended Kalman Filter is constructed whose state vector consists of the cascaded state vectors of the missile dynamics and target dynamics. The line-of-sight angle measurement from the infrared seeker is obtained by centroiding the target image at 40 Hz. The centroid estimate used in the filter is updated at a frequency of 10 Hz: at each 10 Hz update, the centroids of four consecutive images are averaged, yielding a time-averaged centroid and implying some measurement delay. The miss distance achieved when image processing delays are included is 1.45 m.
Application of the hydroxyl tagging velocimetry to direct-connect supersonic combustor experiment
NASA Astrophysics Data System (ADS)
Ye, Jingfeng; Li, Guohua; Shao, Jun; Hu, Zhiyun; Zhao, Xinyan; Song, WenYan
2017-05-01
For the purpose of measuring the flow velocity in a scramjet test model, a specially designed measurement system was established, including strong-vibration suppression, optical transport considerations, a movable device, etc. Interference of the strong vibration with the velocity measurements was avoided by using two ICCD cameras to capture the reference tag-line image and the displaced tag-line image together during an experiment. According to the features of the tag-line images, data processing methods including a correlation algorithm and Gaussian fitting were used to extract the positions of the reference tag lines and the displaced tag lines, respectively. The velocity measurements were carried out at the isolation section and the cavity section. The results showed that a good SNR could be achieved in the H2/air combustion-heated flow, but in the kerosene-fueled combustion flow the measurement images could be disturbed by the strong OH background from the chemical reaction, and the signal intensity could be reduced by attenuation of the tag laser through absorption by kerosene vapor. However, when the combustor model was run at a low equivalence ratio, the interference could be suppressed to an acceptable level.
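The Gaussian-fitting step used to locate a tag line can be illustrated with a short sketch; the profile extraction, initial guesses and delay handling here are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma, offset):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def tag_line_center(profile):
    """Fit a Gaussian to a 1-D intensity profile across a tag line (sketch).

    profile : intensity values sampled perpendicular to the tag line
    Returns the sub-pixel line-center position.
    """
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.max() - profile.min(),      # amplitude guess
          float(np.argmax(profile)),          # center guess at the brightest pixel
          2.0,                                # width guess (pixels)
          float(profile.min())]               # background offset guess
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return popt[1]

# Velocity then follows from the displacement between the reference and delayed
# line centers, divided by the known time delay and scaled by the imaging magnification.
```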
Visual detection of particulates in x-ray images of processed meat products
NASA Astrophysics Data System (ADS)
Schatzki, Thomas F.; Young, Richard; Haff, Ron P.; Eye, J.; Wright, G.
1996-08-01
A study was conducted to test the efficacy of detecting particulate contaminants in processed meat samples by visual observation of line-scanned x-ray images. Six hundred field-collected processed-product samples were scanned at 230 cm2/s using 0.5 x 0.5-mm resolution and 50 kV, 13 mA excitation. The x-ray images were image corrected, digitally stored, and inspected off-line, using interactive image enhancement. Forty percent of the samples were spiked with added contaminants to establish the visual recognition of contaminants as a function of sample thickness (1 to 10 cm), texture of the x-ray image (smooth/textured), spike composition (wood/bone/glass), size (0.1 to 0.4 cm), and shape (splinter/round). The results were analyzed using a maximum likelihood logistic regression method. In packages less than 6 cm thick, 0.2-cm-thick bone chips were easily recognized, 0.1-cm glass splinters were recognized with some difficulty, while 0.4-cm-thick wood was generally missed. Operational feasibility in a time-constrained setting was confirmed. One half percent of the samples arriving from the field contained bone slivers > 1 cm long, 1/2% contained metallic material, while 4% contained particulates exceeding 0.3 cm in size. All of the latter appeared to be bone fragments.
Video-to-film color-image recorder.
NASA Technical Reports Server (NTRS)
Montuori, J. S.; Carnes, W. R.; Shim, I. H.
1973-01-01
A precision video-to-film recorder for use in image data processing systems, being developed for NASA, will convert three video input signals (red, blue, green) into a single full-color light beam for image recording on color film. Argon ion and krypton lasers are used to produce three spectral lines which are independently modulated by the appropriate video signals, combined into a single full-color light beam, and swept over the recording film in a raster format for image recording. A rotating multi-faceted spinner mounted on a translating carriage generates the raster, and an annotation head is used to record up to 512 alphanumeric characters in a designated area outside the image area.
1989-09-01
Guidelines Generation #2 b. Electronic Submission of Commerce Business Daily (CBD) Notices #6 c. On-line Debarred/Suspended List #5 d. On-Line Contract...a number of years. Reality of system differs from manual. One reference - easy to follow, block by block - is needed. -Imaging and CBD electronic...milestones are tracked - and those milestones should be monitored as a natural outcome of the process - e.g. a milestone is noted when the RFP is
Community tools for cartographic and photogrammetric processing of Mars Express HRSC images
Kirk, Randolph L.; Howington-Kraus, Elpitha; Edmundson, Kenneth L.; Redding, Bonnie L.; Galuszka, Donna M.; Hare, Trent M.; Gwinner, K.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.
2017-01-01
The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995) which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs. Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area and feature based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA by using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product.
By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. In early 2018, we are also working with BAE to release the CSM source code under a BSD or MIT open source license.
NASA Astrophysics Data System (ADS)
Ducoté, Julien; Dettoni, Florent; Bouyssou, Régis; Le-Gratiet, Bertrand; Carau, Damien; Dezauzier, Christophe
2015-03-01
Patterning process control of advanced nodes has required major changes over the last few years. The process control needs of critical patterning levels since the 28nm technology node are extremely aggressive, showing that metrology accuracy and sensitivity must be finely tuned. The introduction of pitch splitting (Litho-Etch-Litho-Etch) at the 14nm FDSOI node requires the development of specific metrologies to adopt advanced process control (for CD, overlay and focus corrections). The pitch splitting process leads to final line CD uniformities that are a combination of the CD uniformities of the two exposures, while the space CD uniformities depend on both CD and OVL variability. In this paper, investigations of CD and OVL process control of the 64nm minimum pitch at the Metal1 level of 14nm FDSOI technology, within the double patterning process flow (litho, hard mask etch, line etch), are presented. Various measurements with SEMCD tools (Hitachi) and overlay tools (KT for Image Based Overlay - IBO, and ASML for Diffraction Based Overlay - DBO) are compared. Metrology targets are embedded within a block instanced several times within the field to characterize intra-field process variations. Specific SEMCD targets were designed for independent measurement of both line CD (A and B) and space CD (A to B and B to A) for each exposure within a single measurement during the DP flow. Based on those measurements, the correlation between overlay determined with SEMCD and with standard overlay tools can be evaluated. Such correlation at different steps through the DP flow is investigated with respect to the metrology type. Process correction models are evaluated with respect to the measurement type and the intra-field sampling.
Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley
2011-05-01
Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
An approach for traffic prohibition sign detection
NASA Astrophysics Data System (ADS)
Li, Qingquan; Xu, Dihong; Li, Bijun; Zeng, Zhe
2006-10-01
This paper presents an off-line traffic prohibition sign detection approach whose core is based on a combination of the color feature of traffic prohibition signs, shape features and degree of circularity. The MATLAB Image Processing Toolbox is used for this purpose. In order to reduce the computational cost, a pre-processing step is applied to the image before the core stage. We then employ the distinct redness attribute of prohibition signs to coarsely eliminate non-red images from the input data. Next, an edge-detection operator, the Canny edge detector, is applied to extract potential edges. Finally, degree of circularity is used to confirm the traffic prohibition sign. Experimental results show that our system offers satisfactory performance.
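A hedged sketch of the same three-stage idea (redness pre-filter, Canny edges, circularity test) is shown below in Python/OpenCV rather than MATLAB; the thresholds are placeholders and the original implementation is not reproduced.

```python
import cv2
import numpy as np

def detect_prohibition_sign_candidates(bgr_image):
    """Coarse prohibition-sign detection (sketch): red mask, Canny edges,
    then a circularity test on the resulting closed contours."""
    img = bgr_image.astype(np.int16)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    # Redness attribute: red channel clearly dominates green and blue
    red_mask = ((r - g > 40) & (r - b > 40)).astype(np.uint8) * 255

    # Canny yields thin edges; a small closing joins them into usable contours
    edges = cv2.Canny(red_mask, 100, 200)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area < 200 or perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
        if circularity > 0.7:
            candidates.append(cv2.boundingRect(c))
    return candidates
```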
Liu, Gangjun; Zhang, Jun; Yu, Lingfeng; Xie, Tuqiang; Chen, Zhongping
2010-01-01
With the increase of the A-line speed of optical coherence tomography (OCT) systems, real-time processing of acquired data has become a bottleneck. The shared-memory parallel computing technique is used to process OCT data in real time. The real-time processing power of a quad-core personal computer (PC) is analyzed. It is shown that the quad-core PC could provide real-time OCT data processing ability of more than 80K A-lines per second. A real-time, fiber-based, swept source polarization-sensitive OCT system with 20K A-line speed is demonstrated with this technique. The real-time 2D and 3D polarization-sensitive imaging of chicken muscle and pig tendon is also demonstrated. PMID:19904337
A simple 2D composite image analysis technique for the crystal growth study of L-ascorbic acid.
Kumar, Krishan; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir
2017-06-01
This work was devoted to 2D crystal growth studies of L-ascorbic acid using a composite image analysis technique. Growth experiments on the L-ascorbic acid crystals were carried out with standard (optical) microscopy, laser diffraction analysis, and composite image analysis. For image analysis, the growth of L-ascorbic acid crystals was captured as digital 2D RGB images, which were then processed into composite images. After processing, the crystal boundaries emerged as white lines against the black (cancelled) background. The crystal boundaries were well differentiated by peaks in the intensity graphs generated for the composite images. The lengths of crystal boundaries measured from the intensity graphs of composite images were in good agreement (correlation coefficient "r" = 0.99) with the lengths measured by standard microscopy. In contrast, the lengths measured by laser diffraction were poorly correlated with both techniques. Therefore, composite image analysis can replace the standard microscopy technique for crystal growth studies of L-ascorbic acid. © 2017 Wiley Periodicals, Inc.
Hough transform for human action recognition
NASA Astrophysics Data System (ADS)
Siemon, Mia S. N.
2016-09-01
Nowadays, the demand for computer analysis, especially in team sports, continues to grow drastically. More and more decisions are made by electronic devices to make life 'easier' in a given context. Application areas already exist in sports in which critical situations are handled by means of digital software. This paper aims at evaluating and introducing the necessary foundations for developing a concept similar to 'hawk-eye', a decision-making program that evaluates the impact of a ball with respect to a target line, and applying it to the sport of volleyball. The pattern recognition process is in this case performed by means of the mathematical model of the Hough transform, which is capable of identifying relevant lines and circles in the image in order to use them later in the evaluation of the image and the decision-making process.
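A minimal OpenCV sketch of the Hough line and circle detection underlying such a system is given below; all parameter values are placeholders and the decision logic of the actual study is not reproduced.

```python
import cv2
import numpy as np

def find_court_lines_and_ball(gray_frame):
    """Hough-based detection of candidate court lines and ball circles (sketch)."""
    edges = cv2.Canny(gray_frame, 50, 150)

    # Probabilistic Hough transform for line segments (e.g. the target line)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    # Hough gradient method for circles (e.g. the ball)
    circles = cv2.HoughCircles(gray_frame, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=50, param1=150, param2=30,
                               minRadius=5, maxRadius=40)
    return lines, circles

# An "in/out" decision could then compare the estimated ball-impact position
# against the fitted target line, given a calibrated camera geometry.
```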
Determining fast orientation changes of multi-spectral line cameras from the primary images
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2012-01-01
Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities but the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
NASA Astrophysics Data System (ADS)
Kröhnert, M.; Meichsner, R.
2017-09-01
The relevance of global environmental issues has gained importance in recent years, with trends still rising. Disastrous floods in particular may cause serious damage within very short times. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus only sparsely installed. Smartphones with built-in cameras, powerful processing units and low-cost positioning systems appear to be very suitable, widespread measurement devices that could be used for geo-crowdsourcing purposes. We therefore aim at the development of a versatile mobile water level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running-water shore lines in smartphone images. Flowing water never appears identical in close-range images, even if the extrinsics remain unchanged. Its non-rigid behavior impedes the use of standard practice for image segmentation as a prerequisite for water line detection. Consequently, we use a hand-held time-lapse image sequence instead of a single image, which provides the time component needed to determine a spatio-temporal texture image. Using a region-growing concept, the texture is analyzed for immutable shore and dynamic water areas. Finally, the prevalent shore line is extracted from the resulting shapes. For method validation, various study areas are observed from several distances, covering urban and rural flowing waters with different characteristics. Future work will provide a transformation of the water line into object space by image-to-geometry intersection.
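The spatio-temporal texture idea can be sketched as follows, assuming the hand-held frames have already been co-registered; the paper's region-growing analysis is replaced here by a simple temporal standard deviation and a global threshold, so this is only an illustration of the principle.

```python
import numpy as np

def water_mask_from_sequence(frames, texture_threshold):
    """Separate dynamic water from static shore in a co-registered
    time-lapse sequence (sketch).

    frames            : array of shape (T, H, W), grayscale, already aligned
    texture_threshold : variability level above which a pixel counts as water
    """
    stack = np.asarray(frames, dtype=float)
    # Temporal standard deviation as a simple spatio-temporal texture measure:
    # flowing water varies strongly over time, shore pixels stay stable
    texture = stack.std(axis=0)
    water = texture > texture_threshold
    return water, texture

# The shore line can then be traced along the boundary of the water mask,
# e.g. with a region-growing or contour-following step in place of the
# global threshold used here.
```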
NASA Technical Reports Server (NTRS)
1978-01-01
In public and private archives throughout the world there are many historically important documents that have become illegible with the passage of time. They have faded, been erased, acquired mold, water and dirt stain, suffered blotting or lost readability in other ways. While ultraviolet and infrared photography are widely used to enhance deteriorated legibility, these methods are more limited in their effectiveness than the space-derived image enhancement technique. The aim of the JPL effort with Caltech and others is to better define the requirements for a system to restore illegible information for study at a low page-cost with simple operating procedures. The investigators' principal tools are a vidicon camera and an image processing computer program, the same equipment used to produce sharp space pictures. The camera is the same type as those on NASA's Mariner spacecraft which returned to Earth thousands of images of Mars, Venus and Mercury. Space imagery works something like television. The vidicon camera does not take a photograph in the ordinary sense; rather it "scans" a scene, recording different light and shade values which are reproduced as a pattern of dots, hundreds of dots to a line, hundreds of lines in the total picture. The dots are transmitted to an Earth receiver, where they are assembled line by line to form a picture like that on the home TV screen.
A lane line segmentation algorithm based on adaptive threshold and connected domain theory
NASA Astrophysics Data System (ADS)
Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang
2018-04-01
Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition results in road lane images. Aiming at the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected-domain theory is proposed. First, by analyzing features such as the grey-level distribution and the illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and converts each into a binary image separately. It then uses connected-domain theory to refine the segmentation result, remove noise and fill the interior of the lane lines. Experiments show that this method can eliminate the influence of illumination and lane-line abrasion, removing noise thoroughly while maintaining high segmentation precision.
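A rough sketch of the two ingredients, adaptive thresholding and connected-domain filtering, is given below in Python/OpenCV; it substitutes a local-mean adaptive threshold for the paper's Hough-based section-wise thresholds, and the area and elongation limits are placeholders.

```python
import cv2
import numpy as np

def segment_lane_lines(gray, min_area=200):
    """Lane-line segmentation sketch: adaptive thresholding followed by
    connected-component filtering and morphological filling."""
    # Local-mean adaptive threshold copes with uneven illumination
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, blockSize=31, C=-5)

    # Connected-domain analysis: drop small noise blobs, keep elongated lane marks
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    mask = np.zeros_like(binary)
    for i in range(1, n_labels):                      # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        elongation = max(w, h) / max(1, min(w, h))
        if area >= min_area and elongation > 3:
            mask[labels == i] = 255

    # Morphological closing fills the interior zone of the lane lines
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```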
Image formation simulation for computer-aided inspection planning of machine vision systems
NASA Astrophysics Data System (ADS)
Irgenfried, Stephan; Bergmann, Stephan; Mohammadikaji, Mahsa; Beyerer, Jürgen; Dachsbacher, Carsten; Wörn, Heinz
2017-06-01
In this work, a simulation toolset for Computer Aided Inspection Planning (CAIP) of systems for automated optical inspection (AOI) is presented along with a versatile two-robot-setup for verification of simulation and system planning results. The toolset helps to narrow down the large design space of optical inspection systems in interaction with a system expert. The image formation taking place in optical inspection systems is simulated using GPU-based real time graphics and high quality off-line-rendering. The simulation pipeline allows a stepwise optimization of the system, from fast evaluation of surface patch visibility based on real time graphics up to evaluation of image processing results based on off-line global illumination calculation. A focus of this work is on the dependency of simulation quality on measuring, modeling and parameterizing the optical surface properties of the object to be inspected. The applicability to real world problems is demonstrated by taking the example of planning a 3D laser scanner application. Qualitative and quantitative comparison results of synthetic and real images are presented.
NASA Astrophysics Data System (ADS)
Bonoli, Carlotta; Balestra, Andrea; Bortoletto, Favio; D'Alessandro, Maurizio; Farinelli, Ruben; Medinaceli, Eduardo; Stephen, John; Borsato, Enrico; Dusini, Stefano; Laudisio, Fulvio; Sirignano, Chiara; Ventura, Sandro; Auricchio, Natalia; Corcione, Leonardo; Franceschi, Enrico; Ligori, Sebastiano; Morgante, Gianluca; Patrizii, Laura; Sirri, Gabriele; Trifoglio, Massimo; Valenziano, Luca
2016-07-01
The Near Infrared Spectrograph and Photometer (NISP) is one of the two instruments on board the EUCLID mission, now in the implementation phase; VIS, the Visible Imager, is the second instrument working on the same shared optical beam. The NISP focal plane is based on a detector mosaic of 16 HAWAII-II HgCdTe detectors of 2048x2048 pixels each, now in an advanced delivery phase from Teledyne Imaging Scientific (TIS), and will provide NIR imaging in three bands (Y, J, H) plus slit-less spectroscopy in the range 0.9-2.0 micron. All the NISP observational modes will be supported by different parametrizations of the classic multi-accumulation IR detector readout mode, covering the specific needs of spectroscopic, photometric and calibration exposures. Due to the large number of deployed detectors and to the limited satellite telemetry available to ground, a considerable part of the data processing, conventionally performed off-line, will be accomplished on board, in parallel with the flow of data acquisitions. This has led to the development of a specific on-board HW/SW data processing pipeline, and to the design of computationally capable control electronics, suited to cope with the time constraints of the NISP acquisition sequences during the sky survey. In this paper we present the architecture of the NISP on-board processing system, directly interfaced to the SIDECAR ASICs managing the detector focal plane, and the implementation of the on-board pipeline allowing all the basic operations of input frame averaging, final frame interpolation and data-volume compression before ground down-link.
Preliminary study of rib articulated model based on dynamic fluoroscopy images
NASA Astrophysics Data System (ADS)
Villard, Pierre-Frederic; Escamilla, Pierre; Kerrien, Erwan; Gorges, Sebastien; Trousset, Yves; Berger, Marie-Odile
2014-03-01
We present in this paper a preliminary study of rib motion tracking during Interventional Radiology (IR) fluoroscopy-guided procedures. It consists in providing the physician with moving three-dimensional (3D) rib models projected onto the fluoroscopy plane during a treatment. The aim is to help quickly recognize the target and the no-go areas, i.e. the tumor and the organs to avoid. The method consists in i) elaborating a kinematic model of each rib from a preoperative computerized tomography (CT) scan, ii) processing the on-line fluoroscopy image and iii) optimizing the parameters of the kinematic law such that the transformed 3D rib projected onto the medical image plane fits well with the previously processed image. The results show visually good rib tracking that has been quantitatively validated by a periodic motion as well as good synchronism between ribs.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
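The localized Radon-transform step can be illustrated with a short sketch operating on one binary road-component patch; the peak-picking, width estimation and iteration of the actual method are omitted, so this is only a schematic of the idea.

```python
import numpy as np
from skimage.transform import radon

def strongest_line_in_patch(binary_patch):
    """Detect the dominant line in a small road-component patch via the
    Radon transform (sketch).

    binary_patch : 2-D array, 1 where pixels were classified as road
    Returns (angle_deg, offset_px) of the strongest line response.
    """
    theta = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(binary_patch.astype(float), theta=theta, circle=False)
    # The sinogram peak corresponds to the line with the most supporting pixels
    offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    angle_deg = theta[angle_idx]
    offset_px = offset_idx - sinogram.shape[0] // 2   # distance from the patch centre
    return angle_deg, offset_px

# An iterative variant removes the pixels supporting the detected line and repeats,
# so that short and even curvilinear centrelines emerge as chains of segments.
```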
A REMOTE SENSING AND GIS-ENABLED HIGHWAY ASSET MANAGEMENT SYSTEM PHASE 2
DOT National Transportation Integrated Search
2018-02-02
The objective of this project is to validate the use of commercial remote sensing and spatial information (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile light detection and ranging (LiDAR), image processing algorit...
A remote sensing and GIS-enabled highway asset management system : final report.
DOT National Transportation Integrated Search
2016-04-01
The objective of this project is to validate the use of commercial remote sensing and spatial information : (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile LiDAR, image : processing algorithms, and GPS/GIS technolog...
Application of digital image processing techniques to astronomical imagery, 1979
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1979-01-01
Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
Recent progress in the development of ISO 19751
NASA Astrophysics Data System (ADS)
Farnand, Susan P.; Dalal, Edul N.; Ng, Yee S.
2006-01-01
A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial adjacency or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, widely applicable over the multiple printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality [1,2]. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 [3] presents an overview and an outline of the overall procedure and common methods, is based on a proposal predicated on the idea that image quality could be described by a small set of broad-based attributes [4]. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.
Toward A Fail-Safe Air Force Culture: Creating a Resilient Future While Avoiding Past Mistakes
2012-10-01
process often uses the "Swiss cheese" model to evaluate accidents. The image of holes in the protective cheese layers (proactive and reactive measures...minimize the number and size of holes in each slice of cheese. More importantly, however, a HRO's focus is on "the process of the slices lining up as
Guy, Kristy K.
2015-01-01
This Data Series Report includes several open-ocean shorelines, back-island shorelines, back-island shoreline points, sand area polygons, and sand lines for Assateague Island that were extracted from natural-color orthoimagery (aerial photography) dated from April 12, 1989, to September 5, 2013. The images used were 0.3–2-meter (m)-resolution U.S. Geological Survey Digital Orthophoto Quarter Quads (DOQQ), U.S. Department of Agriculture National Agriculture Imagery Program (NAIP) images, and Virginia Geographic Information Network Virginia Base Map Program (VBMP) images courtesy of the Commonwealth of Virginia. The back-island shorelines were hand-digitized at the intersect of the apparent back-island shoreline and transects spaced at 20-m intervals. The open-ocean shorelines were hand-digitized at the approximate still water level, such as tide level, which was fit through the average position of waves and swash apparent on the beach. Hand-digitizing was done at a scale of approximately 1:2,000. The sand polygons were derived by using an image-processing unsupervised classification technique that separates images into classes. The classes were then visually categorized as either sand or not sand. Also included in this report are 20-m-spaced transect lines and the transect base lines.
A method for smoothing segmented lung boundary in chest CT images
NASA Astrophysics Data System (ADS)
Yim, Yeny; Hong, Helen
2007-03-01
To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that are in contact with the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan-line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan-line search to track the points on the lung contour and find rapidly changing curvature efficiently. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method was evaluated in terms of visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour is considerably increased by compensating for pulmonary vessels and pleural nodules.
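A possible way to flag rapidly changing curvature along a slice contour is sketched below; the window and angle threshold are placeholders and the paper's scan-line search is not reproduced.

```python
import numpy as np

def rapid_curvature_points(contour, window=5, threshold=0.3):
    """Flag contour points where the boundary direction changes rapidly (sketch).

    contour   : (N, 2) array of ordered (x, y) boundary points from one slice
    window    : spacing used for the finite-difference tangent estimate
    threshold : turning angle (radians) above which a point is flagged
    """
    pts = np.asarray(contour, dtype=float)
    fwd = np.roll(pts, -window, axis=0) - pts
    bwd = pts - np.roll(pts, window, axis=0)
    ang_fwd = np.arctan2(fwd[:, 1], fwd[:, 0])
    ang_bwd = np.arctan2(bwd[:, 1], bwd[:, 0])
    # Wrapped difference between forward and backward tangent directions
    turn = np.abs(np.angle(np.exp(1j * (ang_fwd - ang_bwd))))
    return np.where(turn > threshold)[0]

# Flagged indentations (vessels, pleural nodules) could then be bridged, e.g. by
# replacing the flagged span with a straight or spline segment between its ends.
```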
The ALMA Science Pipeline: Current Status
NASA Astrophysics Data System (ADS)
Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy
2016-09-01
The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and end-to-end processing of single-dish data in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g. for higher-frequency and lower signal-to-noise datasets, and for new observing modes. A current focus is the commissioning of science target imaging for interferometric data. For the Single Dish Pipeline, the line-finding algorithm used in baseline subtraction and the baseline-flagging heuristics have been greatly improved since the prototype used for data from the previous cycle. These algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line-finding and flagging algorithm used in the Single Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).
ERIC Educational Resources Information Center
Lewicki, Martin; Hughes, Stephen
2012-01-01
This article describes a method for making a spectroscope from scrap materials, i.e. a fragment of compact disc, a cardboard box, a tube and a digital camera to record the spectrum. An image processing program such as ImageJ can be used to calculate the wavelength of emission and absorption lines from the digital photograph. Multiple images of a…
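A wavelength calibration of the kind performed in ImageJ can also be sketched in a few lines of Python, assuming at least two reference lines of known wavelength; the pixel positions and wavelengths in the comment are illustrative only.

```python
import numpy as np

def pixel_to_wavelength(pixels, ref_pixels, ref_wavelengths_nm):
    """Map pixel positions along the spectrum to wavelengths (sketch).

    A first-order (linear) dispersion model is fitted through two or more
    reference lines of known wavelength, e.g. lines from a fluorescent lamp,
    and then applied to the pixel positions of unknown lines.
    """
    coeffs = np.polyfit(ref_pixels, ref_wavelengths_nm, deg=1)
    return np.polyval(coeffs, pixels)

# Illustrative numbers only: two known lines at pixels 212 and 630 with
# wavelengths 435.8 nm and 611.6 nm calibrate the scale, after which an
# unknown absorption line at pixel 405 can be assigned a wavelength:
# pixel_to_wavelength([405], [212, 630], [435.8, 611.6])
```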
Hall, Elise M; Thurow, Brian S; Guildenbecher, Daniel R
2016-08-10
Digital in-line holography (DIH) and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with DIH. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and DIH successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. In contrast, plenoptic imaging allows for a simpler experimental configuration and, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments.
IfA Catalogs of Solar Data Products
NASA Astrophysics Data System (ADS)
Habbal, Shadia R.; Scholl, I.; Morgan, H.
2009-05-01
This paper presents a new set of online catalogs of solar data products. The IfA Catalogs of Solar Data Products were developed to enhance the scientific output of coronal images acquired from ground and space, starting with the SoHO era. Image processing tools have played a significant role in the production of these catalogs [Morgan et al. 2006, 2008, Scholl and Habbal 2008]. Two catalogs are currently available at http://alshamess.ifa.hawaii.edu/ : 1) Catalog of daily coronal images: One coronal image per day from EIT, MLSO and LASCO/C2 and C3 have been processed using the Normalizing Radial-Graded-Filter (NRGF) image processing tool. These images are available individually or as composite images. 2) Catalog of LASCO data: The whole LASCO dataset has been re-processed using the same method. The user can search files by dates and instruments, and images can be retrieved as JPEG or FITS files. An option to make on-line GIF movies from selected images is also available. In addition, the LASCO data set can be searched from existing CME catalogs (CDAW and Cactus). By browsing one of the two CME catalogs, the user can refine the query and access LASCO data covering the time frame of a CME. The catalogs will be continually updated as more data become publicly available.
Image processing for improved eye-tracking accuracy
NASA Technical Reports Server (NTRS)
Mulligan, J. B.; Watson, A. B. (Principal Investigator)
1997-01-01
Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
Model of the lines of sight for an off-axis optical instrument Pleiades
NASA Astrophysics Data System (ADS)
Sauvage, Dominique; Gaudin-Delrieu, Catherine; Tournier, Thierry
2017-11-01
The future Earth observation missions aim at delivering images with a high resolution and a large field of view. These images have to be processed to obtain a very accurate localisation. To that end, the individual line of sight of each photosensitive element must be evaluated according to the location of the pixels in the focal plane. With an off-axis Korsch telescope (like PLEIADES), however, the classical model has to be adapted. This is possible by using optical ground measurements made after the integration of the instrument. The processing of these results leads to several parameters, which are functions of the offsets of the focal plane and the real focal length. This study, proposed for the PLEIADES mission, leads to a more elaborate model which provides the relation between the lines of sight and the location of the pixels with a very good accuracy, close to the pixel size.
The vision guidance and image processing of AGV
NASA Astrophysics Data System (ADS)
Feng, Tongqing; Jiao, Bin
2017-08-01
Firstly, the principle of AGV vision guidance is introduced, and the lateral deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, the image is first smoothed with a statistical sorting filter. Since the guidance images sampled by the AGV have different optimal threshold segmentation points, a two-dimensional maximum entropy image segmentation method is used to solve this problem. We extract the foreground in the target band by computing contour areas and obtain the centre line with a least-squares fitting algorithm. With the help of the image and physical coordinates, the guidance information can then be obtained.
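As an illustration of the segmentation and centre-line steps described above, the sketch below uses a one-dimensional maximum-entropy threshold (a simplification of the paper's two-dimensional method) followed by a least-squares fit of the guide band's centre line; the function names and the assumption that the guide band is brighter than the background are hypothetical.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Pick the gray level maximizing the sum of foreground and background
    entropies (1-D simplification of the 2-D maximum-entropy method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        q_b = p[:t][p[:t] > 0] / pb
        q_f = p[t:][p[t:] > 0] / pf
        h = -(q_b * np.log(q_b)).sum() - (q_f * np.log(q_f)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

def guide_line(gray):
    """Segment the guide band and fit its centre line x = a*y + b."""
    t = max_entropy_threshold(gray)
    ys, xs = np.nonzero(gray >= t)          # assumes a bright guide band
    a, b = np.polyfit(ys, xs, 1)            # least-squares centre-line fit
    return a, b                             # slope relates to the deflection angle
```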
Quantitative analysis of brain magnetic resonance imaging for hepatic encephalopathy
NASA Astrophysics Data System (ADS)
Syh, Hon-Wei; Chu, Wei-Kom; Ong, Chin-Sing
1992-06-01
High-intensity lesions around the ventricles have recently been observed in T1-weighted brain magnetic resonance images of patients suffering from hepatic encephalopathy. The exact etiology that causes the magnetic resonance imaging (MRI) gray-scale changes is not fully understood. The objective of our study was to investigate, through quantitative means, (1) the amount of change to brain white matter due to the disease process, and (2) the extent and distribution of these high-intensity lesions, since it is believed that the abnormality may not be entirely limited to the white matter. Eleven patients with proven hepatic encephalopathy and three normal subjects without any evidence of liver abnormality constituted our current database. Trans-axial, sagittal, and coronal brain MRI scans were obtained on a 1.5 Tesla scanner. All processing was carried out on a microcomputer-based image analysis system in an off-line manner. Histograms were decomposed into regular brain tissues and lesions. Gray-scale ranges coded as lesion were then mapped back to the original images to identify the distribution of the abnormality. Our results indicated that the disease process involved the pallidus, mesencephalon, and subthalamic regions.
NASA Astrophysics Data System (ADS)
Tournaire, O.; Paparoditis, N.
Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 1970s. Many approaches dealing with various sensor resolutions, the nature of the scene or the desired accuracy of the extracted objects have been presented. The topic remains challenging today as the need for accurate and up-to-date data becomes more and more important. In this context, we study the road network from a particular point of view, focusing on road marks, and in particular dashed lines. They are very useful clues, both as evidence of a road and for higher-level tasks. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different traffic lanes, their width and their function (speed limit, special lanes for buses or bicycles, etc.). In this paper, we propose a new robust and accurate top-down approach for dashed-line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed-line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data-attachment term measuring the consistency of the image with respect to the objects and a regularising term managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented, showing that our approach is relevant and accurate as it can handle the most frequent layouts of dashed lines. Some issues, for instance the relative weighting of the two terms of the energy, are also discussed in the conclusion.
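The optimization described above samples configurations with Green's reversible-jump algorithm; the sketch below shows only a plain simulated-annealing loop over object configurations, with the data-attachment and regularising terms left as caller-supplied placeholders, so the dimension-matching acceptance ratios of the actual method are not reproduced.

```python
import math
import random

def anneal(image_term, prior_term, propose, config,
           t0=1.0, cooling=0.999, n_iter=20000):
    """Generic simulated-annealing loop for an object-configuration model.

    image_term(config) -> data-attachment energy (fit to the image)
    prior_term(config) -> regularising energy (interactions between objects)
    propose(config)    -> a perturbed copy (e.g. birth, death or move of one object)
    """
    energy = image_term(config) + prior_term(config)
    t = t0
    for _ in range(n_iter):
        cand = propose(config)
        e_cand = image_term(cand) + prior_term(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e_cand < energy or random.random() < math.exp((energy - e_cand) / t):
            config, energy = cand, e_cand
        t *= cooling                      # geometric cooling schedule
    return config, energy
```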
Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo
2011-03-01
We demonstrate real-time display of processed OCT images using multi-thread parallel computing on the quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data transfer rate between the CPU cores and the image data stored in RAM. A display rate of 29.9 frames/sec for processed OCT data (4096 FFT size x 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for an OCT image and for a Doppler OCT image with four-fold averaging are 23.8 msec and 91.4 msec, respectively.
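A hedged sketch of the per-A-line processing idea, treating each interferogram row as one vector and distributing whole frames across worker threads; it uses NumPy and concurrent.futures rather than the authors' implementation, and the windowing and dynamic-range choices are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame, fft_size=4096):
    """One OCT frame: each row is an A-line interferogram (spectral samples)."""
    win = np.hanning(frame.shape[1])
    spec = np.fft.fft(frame * win, n=fft_size, axis=1)   # FFT of every A-line at once
    # keep the positive-depth half and convert to a log-intensity image
    return 20 * np.log10(np.abs(spec[:, :fft_size // 2]) + 1e-12)

def process_stream(frames, workers=4):
    """Hand each frame (e.g. 500 A-scans) to a worker thread; a process pool
    or a GPU could be substituted if GIL contention dominates."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))
```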
Machine vision inspection of lace using a neural network
NASA Astrophysics Data System (ADS)
Sanby, Christopher; Norton-Wayne, Leonard
1995-03-01
Lace is particularly difficult to inspect using machine vision since it comprises a fine and complex pattern of threads which must be verified, on line and in real time. Small distortions in the pattern are unavoidable. This paper describes instrumentation for inspecting lace actually on the knitting machine. A CCD linescan camera synchronized to the machine motions grabs an image of the lace. Differences between this lace image and a perfect prototype image are detected by comparison methods, thresholding techniques, and finally a neural network (to distinguish real defects from false alarms). Though produced originally in a laboratory on SUN SPARC workstations, the processing has subsequently been implemented on a 50 MHz 486 PC-compatible machine. Successful operation has been demonstrated in a factory, but over a restricted width. Full-width coverage awaits the provision of faster processing.
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
An Imaging And Graphics Workstation For Image Sequence Analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-01-01
This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculation required for full-FOV iterative reconstruction has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using an object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, and the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstruction to shorten the reconstruction duration.
Iterative CT shading correction with no prior information
NASA Astrophysics Data System (ADS)
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image in which each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image arising from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes the shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
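To make the correction loop concrete, the sketch below implements a two-dimensional parallel-beam analogue of the method, with scikit-image's radon/iradon standing in for the cone-beam forward projection and FDK reconstruction, a two-class mean-fill as the template, and a Gaussian filter as the low-pass step; it assumes a recent scikit-image (filter_name keyword) and a square input image, and is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.transform import radon, iradon

def shading_correct(img, n_iter=5, sigma=20.0, n_angles=180):
    """2-D sketch of iterative template-based shading correction."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    corrected = img.astype(float).copy()
    for _ in range(n_iter):
        # 1) crude two-class template: fill each class with its mean value
        thr = corrected.mean()
        hi_mean = corrected[corrected > thr].mean()
        lo_mean = corrected[corrected <= thr].mean()
        template = np.where(corrected > thr, hi_mean, lo_mean)
        residual = corrected - template
        # 2) forward project the residual and keep only its low frequencies
        sino = radon(residual, theta=theta, circle=False)
        sino_lf = gaussian_filter1d(sino, sigma=sigma, axis=0)
        # 3) reconstruct the smooth error field (the shading) and remove it
        error = iradon(sino_lf, theta=theta, circle=False,
                       filter_name='ramp', output_size=img.shape[0])
        corrected = corrected - error
    return corrected
```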
NASA Astrophysics Data System (ADS)
Lu, Xiaodong; Wu, Tianze; Zhou, Jun; Zhao, Bin; Ma, Xiaoyuan; Tang, Xiucheng
2016-03-01
An electronic image stabilization method incorporating inertial information, which can compensate for the coupling interference caused by the pitch-yaw movement of an optically stabilized platform system, is proposed in this paper. Firstly, the mechanisms of coning rotation and lever-arm translation of the line of sight (LOS) during the stabilization process under moving carriers are analyzed, and the mathematical model describing the relationship between the LOS rotation angle and the platform attitude angle is derived. Then the image spin angle caused by coning rotation is estimated using the inertial information. Furthermore, an adaptive block matching method based on image edges and corner points is proposed to smooth the jitter created by the lever-arm translation. This method optimizes the matching process and strategies. Finally, the results of hardware-in-the-loop simulation verify the effectiveness and real-time performance of the proposed method.
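The jitter-smoothing step relies on block matching; a minimal sum-of-absolute-differences sketch over a single central block is given below. The inertia-based spin compensation and the edge/corner-based block selection of the adaptive method are omitted, and the block and search sizes are illustrative.

```python
import numpy as np

def block_match(prev, curr, block=32, search=8):
    """Estimate the (dy, dx) translation of `curr` relative to `prev` by
    minimizing the sum of absolute differences over one central block."""
    cy, cx = prev.shape[0] // 2, prev.shape[1] // 2
    ref = prev[cy - block // 2:cy + block // 2,
               cx - block // 2:cx + block // 2].astype(float)
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy - block // 2 + dy:cy + block // 2 + dy,
                        cx - block // 2 + dx:cx + block // 2 + dx].astype(float)
            sad = np.abs(cand - ref).sum()
            if sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift   # shift the current frame by the negative of this to stabilize
```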
NASA Astrophysics Data System (ADS)
Kozawa, Takahiro; Oizumi, Hiroaki; Itani, Toshiro; Tagawa, Seiichi
2010-11-01
The development of extreme ultraviolet (EUV) lithography has progressed owing to worldwide effort. As the development status of EUV lithography approaches the requirements for the high-volume production of semiconductor devices with a minimum line width of 22 nm, the extraction of resist parameters becomes increasingly important from the viewpoints of the accurate evaluation of resist materials for resist screening and accurate process simulation for process and mask design. In this study, we demonstrated that resist parameters (namely, quencher concentration, acid diffusion constant, proportionality constant of line edge roughness, and dissolution point) can be extracted from scanning electron microscopy (SEM) images of patterned resists, without knowledge of the details of the resist composition, using two of the latest EUV resists.
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Qianqian
2008-12-01
When a laser rangefinder is transported or used in field operations, the transmitting axis, receiving axis and aiming axis may not remain parallel. The non-parallelism of the three optical axes degrades the range-measuring ability or prevents the rangefinder from being operated accurately, so testing and adjusting the parallelism of the three optical axes during production and maintenance is important to ensure reliable use of the rangefinder. After comparing several common measurement methods for three-axis parallelism, this paper proposes a new measurement method based on digital image processing. A large-aperture off-axis paraboloid reflector is used to acquire images of the laser spot and the white-light cross line, and the images are then processed on the LabVIEW platform. The centre of the white-light cross line is obtained by a matching algorithm in a LabVIEW DLL, and the centre of the laser spot is obtained by grayscale transformation, binarization and area filtering in turn. The software system can configure the CCD, detect the off-axis paraboloid reflector, measure the parallelism of the transmitting and aiming axes, and control the attenuation device. The hardware system uses the SAA7111A programmable video decoding chip to perform A/D conversion; a FIFO (first-in first-out) buffer is used, and a USB bus transmits the data to the PC. The parallelism of the three optical axes is obtained from the position bias between the two centres. A device based on this method is already in use, and its application has demonstrated high precision, speed and a high degree of automation.
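A hedged sketch of the laser-spot centre extraction (binarization, area filtering, centroid) using NumPy and scipy.ndimage; the threshold and minimum-area values are placeholders and the LabVIEW-specific matching of the cross-line centre is not reproduced.

```python
import numpy as np
from scipy import ndimage

def laser_spot_center(gray, thresh=200, min_area=50):
    """Binarize, drop small blobs, return the centroid of the largest blob."""
    binary = gray >= thresh                       # simple grayscale threshold
    labels, n = ndimage.label(binary)             # connected components
    if n == 0:
        return None
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    if sizes[largest - 1] < min_area:             # area filter against noise specks
        return None
    cy, cx = ndimage.center_of_mass(binary, labels, largest)
    return cx, cy   # compare with the cross-line centre to get the axis offset
```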
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doug Blankenship
PDFs of seismic reflection profiles 101, 110, and 111 local to the West Flank FORGE site. The 45 line-kilometers of seismic reflection data were collected in 2001 using vibroseis trucks and subsequently processed. The initial analysis and interpretation of these data was performed by Unruh et al. (2001). Optim processed these data by inverting the P-wave first arrivals to create a 2-D velocity structure. Kirchhoff images were then created for each line using the velocity tomograms (Unruh et al., 2001).
Holographic digital microscopy in on-line process control
NASA Astrophysics Data System (ADS)
Osanlou, Ardeshir
2011-09-01
This article investigates the feasibility of real-time three-dimensional imaging of microscopic objects within various emulsions while being produced in specialized production vessels. The study is particularly relevant to on-line process monitoring and control in chemical, pharmaceutical, food, cleaning, and personal hygiene industries. Such processes are often dynamic and the materials cannot be measured once removed from the production vessel. The technique reported here is applicable to three-dimensional characterization analyses on stirred fluids in small reaction vessels. Relatively expensive pulsed lasers have been avoided through the careful control of the speed of the moving fluid in relation to the speed of the camera exposure and the wavelength of the continuous wave laser used. The ultimate aim of the project is to introduce a fully robust and compact digital holographic microscope as a process control tool in a full size specialized production vessel.
Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images
NASA Astrophysics Data System (ADS)
Liang, Y.; Sheng, Y. H.
2018-04-01
To solve the existing problems of modeling building facades using only point features from close-range images, a new method for modeling building facades under line-feature constraints is proposed in this paper. Firstly, camera parameters and sparse spatial point cloud data were recovered using SfM, and 3D dense point clouds were generated with MVS. Secondly, line features were detected based on the gradient direction, fitted considering their directions and lengths, then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of a building was triangulated from the point cloud and the line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the advantages of point and line features of the close-range image sequence, especially in recovering the contour information of building facades.
NASA Technical Reports Server (NTRS)
Begni, G.; BOISSIN; Desachy, M. J.; PERBOS
1984-01-01
The geometric accuracy of LANDSAT TM raw data of Toulouse (France), raw data of Mississippi, and preprocessed data of Mississippi was examined using a CDC computer. Analog images were restituted on the VIZIR SEP device. The methods used for line-to-line and band-to-band registration are based on automatic correlation techniques and are widely used in automated image-to-image registration at CNES. Causes of intraband and interband misregistration are identified and statistics are given for both line-to-line and band-to-band misregistration.
On-Line GIS Analysis and Image Processing for Geoportal Kielce/poland Development
NASA Astrophysics Data System (ADS)
Hejmanowska, B.; Głowienka, E.; Florek-Paszkowski, R.
2016-06-01
GIS databases are widely available on the Internet, but mainly for visualization with limited functionality; only very simple queries are possible, e.g. attribute queries, coordinate readout, line and area measurements or path finding. Slightly more complex analyses (e.g. buffering or intersection) are rarely offered. This paper addresses the development of Geoportal functionality in the field of GIS analysis. Multi-Criteria Evaluation (MCE) is planned to be implemented in the web application. OGC services are used for data acquisition from the server and for the visualization of results. Advanced GIS analysis is planned in PostGIS and Python programming. In the paper, an example of an MCE analysis based on Geoportal Kielce is presented. Another area in which the Geoportal can be developed is the processing of newly available free satellite images (Sentinel-2, Landsat 8, ASTER, WV-2). We are now witnessing a revolution in free access to satellite imagery. This should result in an increased interest in the use of these data in various fields by a larger number of users, not necessarily specialists in remote sensing. Therefore, it seems reasonable to expand the functionality of Internet tools for data processing by non-specialists, by automating data collection and preparing predefined analyses.
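As a sketch of the kind of MCE such a service could expose, the following weighted linear combination works on criterion rasters already resampled to a common grid; the layer names, weights and normalization are hypothetical and are not part of Geoportal Kielce.

```python
import numpy as np

def weighted_overlay(criteria, weights, higher_is_better=None):
    """Multi-Criteria Evaluation by weighted linear combination.

    criteria : list of 2-D arrays on a common grid
    weights  : list of weights (ideally summing to 1)
    """
    if higher_is_better is None:
        higher_is_better = [True] * len(criteria)
    score = np.zeros_like(criteria[0], dtype=float)
    for layer, w, up in zip(criteria, weights, higher_is_better):
        lo, hi = np.nanmin(layer), np.nanmax(layer)
        norm = (layer - lo) / (hi - lo + 1e-12)      # rescale to 0..1
        score += w * (norm if up else 1.0 - norm)    # invert "cost" criteria
    return score

# hypothetical usage:
# suitability = weighted_overlay([slope, dist_to_roads, land_price],
#                                [0.3, 0.3, 0.4],
#                                higher_is_better=[False, False, False])
```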
A coloured oil level indicator detection method based on simple linear iterative clustering
NASA Astrophysics Data System (ADS)
Liu, Tianli; Li, Dongsong; Jiao, Zhiming; Liang, Tao; Zhou, Hao; Yang, Guoqing
2017-12-01
A detection method for coloured oil level indicators is put forward. The method is applied to an inspection robot in a substation, realizing automatic inspection and recognition of oil level indicators. Firstly, the image of the oil level indicator is collected, and the detected image is clustered and segmented to obtain the label matrix of the image. Secondly, the detected image is processed by colour space transformation, and the feature matrix of the image is obtained. Finally, the label matrix and feature matrix are used to locate and segment the detected image, and the upper edge of the recognized region is obtained. If the upper edge line exceeds the preset oil level threshold, an alarm alerts the station staff. Through the image processing described above, the inspection robot can independently read the oil level of the indicator in place of manual inspection, embodying the automation and intelligence of unattended operation.
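A hedged sketch of the pipeline: SLIC superpixels (the simple linear iterative clustering of the title), an HSV colour test per superpixel, and extraction of the top row of the matched region; the hue/saturation ranges and segment count are placeholders, not the deployed robot's parameters, and a recent scikit-image is assumed.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv

def oil_level_row(rgb, n_segments=300, hue_range=(0.55, 0.70), min_sat=0.4):
    """Return the top image row of the coloured indicator region, or None."""
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=1)
    hsv = rgb2hsv(rgb)
    mask = np.zeros(labels.shape, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        hue = hsv[..., 0][region].mean()
        sat = hsv[..., 1][region].mean()
        if hue_range[0] <= hue <= hue_range[1] and sat > min_sat:
            mask |= region                 # superpixel matches the indicator colour
    rows = np.nonzero(mask.any(axis=1))[0]
    # the returned row index would be compared with the preset alarm threshold row
    return int(rows.min()) if rows.size else None
```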
An open architecture for medical image workstation
NASA Astrophysics Data System (ADS)
Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun
2005-04-01
Dealing with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems and, in the meantime, overcoming the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieval, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department concerned has its own particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
Imaging through water turbulence with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2016-09-01
A plenoptic sensor can be used to improve the image formation process in a conventional camera. Through this process, the conventional image is mapped to an image array that represents the image's photon paths along different angular directions. Therefore, it can be used to resolve imaging problems where severe distortion happens. Especially for objects observed at moderate range (10m to 200m) through turbulent water, the image can be twisted to be entirely unrecognizable and correction algorithms need to be applied. In this paper, we show how to use a plenoptic sensor to recover an unknown object in line of sight through significant water turbulence distortion. In general, our approach can be applied to both atmospheric turbulence and water turbulence conditions.
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
Range resolution and cross-range resolution of range-Doppler imaging radars are related to the effective bandwidth of the transmitted signal and the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated with the purpose of surpassing the limitation imposed by conventional FFT range-Doppler processing and improving the resolution capability of range-Doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating the observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. Live data from a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing 727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT can provide either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object, or equal-quality images from a smaller bandwidth and total angle.
Electrophoresis gel image processing and analysis using the KODAK 1D software.
Pizzonia, J
2001-06-01
The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
Dynamic Evolution in the Symbiotic R Aquarii
NASA Technical Reports Server (NTRS)
DePasquale, J. M.; Nichols, J. S.; Kellogg, E. M.
2007-01-01
We report on multiple Chandra observations spanning a period of 5 years as well as a more recent XMM observation of the nearby symbiotic binary R Aqr. Spectral analysis of these four observations reveals considerable variability in hardness ratios and in the strength and ionization levels of emission lines which provides insight into white dwarf accretion processes as well as continuum and line formation mechanisms. Chandra imaging of the central source also shows the formation and evolution of a new south west jet. This growing body of high-resolution X-ray data of R Aqr provides a unique glimpse into white dwarf wind-accretion processes and jet formation.
Satellite Data Processing System (SDPS) users manual V1.0
NASA Technical Reports Server (NTRS)
Caruso, Michael; Dunn, Chris
1989-01-01
SDPS is a menu driven interactive program designed to facilitate the display and output of image and line-based data sets common to telemetry, modeling and remote sensing. This program can be used to display up to four separate raster images and overlay line-based data such as coastlines, ship tracks and velocity vectors. The program uses multiple windows to communicate information with the user. At any given time, the program may have up to four image display windows as well as auxiliary windows containing information about each image displayed. SDPS is not a commercial program. It does not contain complete type checking or error diagnostics which may allow the program to crash. Known anomalies will be mentioned in the appropriate section as notes or cautions. SDPS was designed to be used on Sun Microsystems Workstations running SunView1 (Sun Visual/Integrated Environment for Workstations). It was primarily designed to be used on workstations equipped with color monitors, but most of the line-based functions and several of the raster-based functions can be used with monochrome monitors. The program currently runs on Sun 3 series workstations running Sun OS 4.0 and should port easily to Sun 4 and Sun 386 series workstations with SunView1. Users should also be familiar with UNIX, Sun workstations and the SunView window system.
Non-rigid ultrasound image registration using generalized relaxation labeling process
NASA Astrophysics Data System (ADS)
Lee, Jong-Ha; Seong, Yeong Kyeong; Park, MoonHo; Woo, Kyoung-Gu; Ku, Jeonghun; Park, Hee-Jun
2013-03-01
This research proposes a novel non-rigid registration method for ultrasound images. The most predominant anatomical features in medical images are tissue boundaries, which appear as edges. In ultrasound images, however, other features can be identified as well due to the specular reflections that appear as bright lines superimposed on the ideal edge location. In this work, an image's local phase information (via the frequency domain) is used to find the ideal edge location. The generalized relaxation labeling process is then formulated to align the feature points extracted from the ideal edge location. In this work, the original relaxation labeling method was generalized by taking n compatibility coefficient values to improve non-rigid registration performance. This contextual information combined with a relaxation labeling process is used to search for a correspondence. Then the transformation is calculated by the thin plate spline (TPS) model. These two processes are iterated until the optimal correspondence and transformation are found. We have tested our proposed method and the state-of-the-art algorithms with synthetic data and bladder ultrasound images of in vivo human subjects. Experiments show that the proposed method improves registration performance significantly, as compared to other state-of-the-art non-rigid registration algorithms.
Quality Assurance By Laser Scanning And Imaging Techniques
NASA Astrophysics Data System (ADS)
Schmalfuß, Harald J.; Schinner, Karl Ludwig
1989-03-01
Laser scanning systems are well established in the world of fast industrial in-process quality inspection. The materials inspected by laser scanning systems are, for example, "endless" sheets of steel, paper, textile, film or foil. The web width varies from 50 mm up to 5000 mm or more. The web speed depends strongly on the production process and can reach several hundred meters per minute. The continuous data flow in any one of the different channels of the optical receiving system exceeds ten megapixels per second. It is therefore clear that the electronic evaluation system has to process these data streams in real time and that no image storage is possible. Sometimes, however (e.g. at the first installation of the system or when the defect classification is changed), it would be very helpful to be able to take a visual look at the original, i.e. unprocessed, sensor data. We first show the basic setup of a standard laser scanning system. We then introduce a large image memory especially designed for the needs of high-speed inspection sensors. This image memory cooperates with the standard on-line evaluation electronics and therefore provides an easy comparison between processed and unprocessed data. We discuss the basic system structure and show the first industrial results.
Image analysis and mathematical modelling for the supervision of the dough fermentation process
NASA Astrophysics Data System (ADS)
Zettel, Viktoria; Paquet-Durand, Olivier; Hecker, Florian; Hitzmann, Bernd
2016-10-01
The fermentation (proof) process of dough is one of the quality-determining steps in the production of baked goods. Besides the fluffiness, whose foundations are laid during fermentation, the flavour of the final product is strongly influenced during this production stage. However, until now no on-line measurement system has been available that can supervise this important process step. In this investigation, the potential of an image analysis system that enables the determination of the volume of fermenting dough pieces is evaluated. The camera moves around the fermenting pieces and collects images of the objects from different angles (360° range). Using image analysis algorithms, the volume increase of individual dough pieces is determined. Based on a detailed mathematical description of the volume increase, which is based on the Bernoulli equation, the carbon dioxide production rate of the yeast cells and the diffusion processes of carbon dioxide, the fermentation process is supervised. Important process parameters, like the carbon dioxide production rate of the yeast cells and the dough viscosity, can be estimated after just 300 s of proofing. The mean percentage error for forecasting the further evolution of the relative volume of the dough pieces is just 2.3%. Therefore, a forecast of the further evolution can be performed and used for fault detection.
NASA Technical Reports Server (NTRS)
1998-01-01
Perceptive Scientific Instruments, Inc., provides the foundation for the Powergene line of chromosome analysis and molecular genetic instrumentation. This product employs image processing technology from NASA's Jet Propulsion Laboratory and image enhancement techniques from Johnson Space Center. Originally developed to send pictures back to earth from space probes, digital imaging techniques have been developed and refined for use in a variety of medical applications, including diagnosis of disease.
Rudiments of curvelet with applications
NASA Astrophysics Data System (ADS)
Zahra, Noor e.
2012-07-01
The curvelet transform is nowadays a favored tool for image processing. Edges are an important part of an image, and usually they are not straight lines. Curvelets prove to be very efficient in representing curve-like edges. In this chapter, applications of curvelets are shown with some examples, such as seismic wave analysis, oil exploration, fingerprint identification and biomedical images like mammography and MRI.
Ultra high speed image processing techniques. [electronic packaging techniques
NASA Technical Reports Server (NTRS)
Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.
1981-01-01
Packaging techniques for ultra-high-speed image processing were developed. These techniques involve the development of a signal feedthrough technique through LSI/VLSI sapphire substrates. This allows the stacking of LSI/VLSI circuit substrates in a three-dimensional package with a greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.
Design and laboratory calibration of the compact pushbroom hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Zhou, Jiankang; Ji, Yiqun; Chen, Yuheng; Chen, Xinhua; Shen, Weimin
2009-11-01
The designed hyperspectral imaging system is composed of three main parts: an optical subsystem, an electronic subsystem and a capturing subsystem. A three-dimensional "image cube" can be obtained through push-broom scanning. The fore-optics is commercial off-the-shelf with high speed and three continuous zoom ratios. Since the dispersive imaging part is based on an Offner relay configuration with an aberration-corrected convex grating, high light-collecting power and a variable field of view are obtained. The holographic recording parameters of the convex grating are optimized, and the aberration of the Offner-configuration dispersive system is balanced. The electronic system adopts a modular design, which minimizes size, mass, and power consumption. A frame-transfer area-array CCD is chosen as the image sensor, and the spectral lines can be binned to achieve better SNR and sensitivity without any deterioration in spatial resolution. The capturing system, based on a computer, can set the capturing parameters, calibrate the spectrometer, and process and display the spectral imaging data. Laboratory calibration is a prerequisite for using precise spectral data. The spatial and spectral calibrations minimize the smile and keystone distortions caused by the optical system, assembly and so on, and fix the positions of the spatial and spectral lines on the frame area-array CCD. A gas excitation lamp is used for the smile calibration, and the keystone calculation is carried out with point sources at different field positions created by a series of narrow slits. The laboratory and field imaging results show that this pushbroom hyperspectral imaging system can acquire high-quality spectral images.
Detection of the power lines in UAV remote sensed images using spectral-spatial methods.
Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham
2018-01-15
In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using the K-means and Expectation Maximization (EM) algorithms to classify the pixels into power-line and non-power-line classes. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote sensed image is clustered into the number of clusters determined by the DBI, and the k-clustered image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power-line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that EM with spatial segmentation (EM-Seg) performed better than K-means with spatial segmentation (Kmeans-Seg) on most of the UAV images.
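A minimal sketch of the spectral-clustering stage, choosing the number of clusters with the Davies-Bouldin index and clustering pixel colours with K-means (the EM variant would substitute a Gaussian mixture); the merge-to-two-classes rule and the subsampling are illustrative, and the morphological spatial segmentation is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def power_line_mask(image, k_range=range(2, 7), sample=20000, seed=0):
    """Cluster pixel colours of an H x W x 3 image, choosing k by the DBI,
    then merge the k clusters into a binary power-line candidate mask."""
    rng = np.random.default_rng(seed)
    X = image.reshape(-1, image.shape[-1]).astype(float)
    idx = rng.choice(len(X), size=min(sample, len(X)), replace=False)
    best_k, best_dbi = None, np.inf
    for k in k_range:                          # pick k minimizing the DBI
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X[idx])
        dbi = davies_bouldin_score(X[idx], labels)
        if dbi < best_dbi:
            best_k, best_dbi = k, dbi
    km = KMeans(n_clusters=best_k, n_init=10, random_state=seed).fit(X)
    labels = km.labels_.reshape(image.shape[:2])
    # illustrative merge rule: treat the brightest cluster centre as power lines
    line_cluster = int(np.argmax(km.cluster_centers_.sum(axis=1)))
    return labels == line_cluster
```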
Syntactic Approach To Geometric Surface Shell Determination
NASA Astrophysics Data System (ADS)
DeGryse, Donald G.; Panton, Dale J.
1980-12-01
Autonomous terminal homing of a smart missile requires a stored reference scene of the target for which the missile is destined. The reference scene is produced from stereo source imagery by deriving a three-dimensional model containing cultural structures such as buildings, towers, bridges, and tanks. This model is obtained by precisely matching cultural features from one image of the stereo pair to the other. In the past, this stereo matching process has relied heavily on local edge operators and a gray-scale matching metric. The processing is performed line by line over the imagery, and the amount of geometric control is minimal. As a result, the gross structure of the scene is determined, but the derived three-dimensional data are noisy, oscillatory, and at times significantly inaccurate. This paper discusses new concepts that are currently being developed to stabilize this geometric reference preparation process. The new concepts involve the use of a structural syntax which is used as a geometric constraint on automatic stereo matching. The syntax arises from the stereo configuration of the imaging platforms at the time of exposure and from knowledge of how various cultural structures are constructed. The syntax is used to parse a scene in terms of its cultural surfaces and to dictate to the matching process the allowable relative positions and orientations of surface edges in the image planes. Using the syntax, extensive searches using a gray-scale matching metric are reduced.
Moore, David Steven
2015-05-10
This second edition of "Infrared and Raman Spectroscopic Imaging" propels practitioners in that wide-ranging field, as well as other readers, to the current state of the art in a well-produced and full-color, completely revised and updated, volume. This new edition chronicles the expanded application of vibrational spectroscopic imaging from yesterday's time-consuming point-by-point buildup of a hyperspectral image cube, through the improvements afforded by the addition of focal plane arrays and line scan imaging, to methods applicable beyond the diffraction limit, instructs the reader on the improved instrumentation and image and data analysis methods, and expounds on their application to fundamentalmore » biomedical knowledge, food and agricultural surveys, materials science, process and quality control, and many others.« less
High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells
NASA Astrophysics Data System (ADS)
Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey
2018-05-01
A video capillaroscopy system with a high image recording rate, able to resolve red blood cells moving with velocities up to 5 mm/s in a capillary, is considered. The proposed procedures for processing the recorded video sequence allow the spatial capillary area, capillary diameter and centre line to be evaluated with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift of neighbouring images in the blood flow area with moving red blood cells and to measure directly the blood flow velocity along the capillary centre line. The developed method opens new opportunities for biomedical diagnostics, in particular through long-term continuous monitoring of red blood cell velocity in capillaries. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity in an individual capillary as well as in a capillary network are presented and discussed.
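A hedged sketch of the inter-frame shift estimation by FFT cross-correlation and its conversion to a velocity using the frame rate and pixel size; the capillary-area and centre-line extraction steps are not shown, and integer-pixel peak location is used rather than the sub-pixel refinement a real system would need.

```python
import numpy as np

def frame_shift(a, b):
    """Estimate the integer-pixel shift between two frames from the peak of
    their FFT-based cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into signed shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shift)                       # (dy, dx) in pixels

def flow_velocity(frames, frame_rate_hz, um_per_px):
    """Mean red-blood-cell speed from consecutive frames of one capillary ROI."""
    shifts = np.array([frame_shift(frames[i], frames[i + 1])
                       for i in range(len(frames) - 1)])
    px_per_frame = np.linalg.norm(shifts, axis=1).mean()
    return px_per_frame * um_per_px * frame_rate_hz   # micrometres per second
```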
Handwritten text line segmentation by spectral clustering
NASA Astrophysics Data System (ADS)
Han, Xuecheng; Yao, Hui; Zhong, Guoqiang
2017-02-01
Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity metric and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
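A simplified sketch using scikit-learn's SpectralClustering on foreground pixel coordinates with a known number of lines; the paper builds its own adjacency matrix and uses an orthogonal k-means step, which this stand-in does not reproduce, and the vertical weighting factor is an illustrative heuristic.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_lines(binary_image, n_lines, y_weight=3.0):
    """Group foreground pixels of a binarized page into text lines.

    For full pages the affinity graph over all pixels is expensive; in
    practice one would subsample pixels or cluster connected-component
    centroids instead."""
    ys, xs = np.nonzero(binary_image)
    # weight the vertical coordinate more heavily so that pixels from
    # adjacent lines are less likely to become graph neighbours
    pts = np.column_stack([ys * y_weight, xs.astype(float)])
    model = SpectralClustering(n_clusters=n_lines,
                               affinity='nearest_neighbors',
                               n_neighbors=20,
                               assign_labels='kmeans',
                               random_state=0)
    labels = model.fit_predict(pts)
    out = np.zeros(binary_image.shape, dtype=int)
    out[ys, xs] = labels + 1                # 0 = background, 1..n = line index
    return out
```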
Sparsity-based multi-height phase recovery in holographic microscopy
NASA Astrophysics Data System (ADS)
Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan
2016-11-01
High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.
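A hedged sketch of the sparsity-enforcing step only: soft-thresholding of 2-D wavelet coefficients with PyWavelets, as might be applied between the propagation and measurement-constraint steps of a multi-height phase-retrieval loop (which is not reproduced here); the wavelet, decomposition level and threshold are placeholders.

```python
import numpy as np
import pywt

def wavelet_soft_threshold(field, wavelet='db4', level=3, frac=0.02):
    """Promote sparsity of a complex field in the wavelet domain by
    soft-thresholding the detail coefficients of its real and imaginary parts."""
    def shrink(channel):
        coeffs = pywt.wavedec2(channel, wavelet, level=level)
        out = [coeffs[0]]                          # keep the approximation band
        for detail in coeffs[1:]:
            out.append(tuple(
                pywt.threshold(d, frac * np.abs(d).max(), mode='soft')
                for d in detail))
        return pywt.waverec2(out, wavelet)
    real = shrink(field.real)
    imag = shrink(field.imag)
    ny, nx = field.shape
    return (real + 1j * imag)[:ny, :nx]            # crop possible padding
```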
Shoulder Arthroplasty Imaging: What’s New
Gregory, T.M
2017-01-01
Background: Shoulder arthroplasty, in its different forms (hemiarthroplasty, total shoulder arthroplasty and reverse total shoulder arthroplasty), has transformed the clinical outcomes of shoulder disorders. The improvement in general clinical outcome is the result of better matching of the treatment to the diagnosis, enhanced surgical techniques, specific implanted materials, and more accurate follow-up. Imaging is an important tool in each step of these processes. Method: This article reviews recent imaging processes for shoulder arthroplasty. Results: Shoulder imaging is important for shoulder arthroplasty pre-operative planning but also for post-operative monitoring of the prosthesis; this article focuses on the validity of plain radiographs for detecting radiolucent lines and on a new computed tomography method established to eliminate the prosthesis metallic artefacts that obscure visualisation of component fixation. Conclusion: The number of shoulder arthroplasties implanted has grown rapidly over the past decade, leading to an increase in the number of complications. In parallel, new imaging systems have been established to monitor these complications, especially component loosening. PMID: 29152007
Magnetosphere - ionosphere coupling process in the auroral region estimated from auroral tomography
NASA Astrophysics Data System (ADS)
Tanaka, Y.; Ogawa, Y.; Kadokura, A.; Gustavsson, B.; Kauristie, K.; Whiter, D. K.; Enell, C. F. T.; Brandstrom, U.; Sergienko, T.; Partamies, N.; Kozlovsky, A.; Miyaoka, H.; Kosch, M. J.
2016-12-01
We have studied the magnetosphere - ionosphere coupling process by using multiple auroral images and ionospheric data obtained during a campaign observation with multi-point imagers and the EISCAT UHF radar in Northern Europe. We observed a wavy structure of discrete arcs around the magnetic zenith at Tromso, Norway, from 22:00 to 23:15 UT on March 14, 2015, followed by an auroral breakup, poleward expansion, and pulsating auroras. During this interval, monochromatic (427.8 nm) images were taken at a sampling interval of 2 seconds by three EMCCD imagers and at an interval of 10 seconds by six imagers in total. The EISCAT UHF radar at Tromso measured the ionospheric parameters along the magnetic field line from 20 to 24 UT. We applied the tomographic inversion technique to this data set to retrieve the 3D distribution of the 427.8 nm emission, which enabled us to obtain the following quantities for the auroras that change from moment to moment: (1) the relation between the 427.8 nm emission and the electron density enhancement along the field line, (2) the horizontal distribution of the energy flux of auroral precipitating electrons, and (3) the horizontal distribution of the height-integrated ionospheric conductivity. By combining these with the ionospheric equivalent current estimated from the ground-based magnetometer network, we discuss the current system of a sequence of the auroral event in terms of magnetosphere-ionosphere coupling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lugaz, N.; Shibata, K.; Downs, C.
We present a numerical investigation of the coronal evolution of a coronal mass ejection (CME) on 2005 August 22 using a three-dimensional thermodynamic magnetohydrodynamic model, the Space Weather Modeling Framework. The source region of the eruption was anemone active region (AR) 10798, which emerged inside a coronal hole. We validate our modeled corona by producing synthetic extreme-ultraviolet (EUV) images, which we compare to EIT images. We initiate the CME with an out-of-equilibrium flux rope with an orientation and chirality chosen in agreement with observations of an Hα filament. During the eruption, one footpoint of the flux rope reconnects with streamer magnetic field lines and with open field lines from the adjacent coronal hole. It yields an eruption which has a mix of closed and open twisted field lines due to interchange reconnection, and only one footpoint line-tied to the source region. Even with the large-scale reconnection, we find no evidence of strong rotation of the CME as it propagates. We study the CME deflection and find that the effect of the Lorentz force is a deflection of the CME by about 3° per solar radius toward the east during the first 30 minutes of propagation. We also produce coronagraphic and EUV images of the CME, which we compare with real images, identifying a dimming region associated with the reconnection process. We discuss the implications of our results for the arrival at Earth of CMEs originating from the limb and for models to explain the presence of open field lines in magnetic clouds.
Microscale Effects from Global Hot Plasma Imagery
NASA Technical Reports Server (NTRS)
Moore, T. E.; Fok, M.-C.; Perez, J. D.; Keady, J. P.
1995-01-01
We have used a three-dimensional model of recovery-phase storm hot plasmas to explore the signatures of pitch angle distributions (PADs) in global fast-atom imagery of the magnetosphere. The model computes mass-, energy-, and position-dependent PADs based on drift effects, charge exchange losses, and Coulomb drag. The hot plasma PAD strongly influences both the storm current system carried by the hot plasma and its time evolution. In turn, the PAD is strongly influenced by plasma waves through pitch angle diffusion, a microscale effect. We report the first simulated neutral atom images that account for anisotropic PADs within the hot plasma. They exhibit spatial distribution features that correspond directly to the PADs along the lines of sight. We investigate the use of image brightness distributions along tangent-shell field lines to infer equatorial PADs. In tangent-shell regions with minimal spatial gradients, reasonably accurate PADs are inferred from simulated images. They demonstrate the importance of modeling PADs for image inversion and show that comparisons of models with real storm plasma images will reveal the global effects of these microscale processes.
TESTS OF LOW-FREQUENCY GEOMETRIC DISTORTIONS IN LANDSAT 4 IMAGES.
Batson, R.M.; Borgeson, W.T.
1985-01-01
Tests were performed to investigate the geometric characteristics of Landsat 4 images. The first set of tests was designed to determine the extent of image distortion caused by the physical process of writing the Landsat 4 images on film. The second was designed to characterize the geometric accuracies inherent in the digital images themselves. Test materials consisted of film images of test targets generated by the Laser Beam Recorders at Sioux Falls and the Optronics Photowrite film writer at Goddard Space Flight Center, and digital image files of a strip 600 lines deep across the full width of band 5 of the Washington, D.C. Thematic Mapper scene. The tests were made by least-squares adjustment of an array of measured image points to a corresponding array of control points.
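A minimal sketch of a least-squares adjustment of measured image points onto control points using an affine model and reporting the RMS residual; the actual tests may have used a different transformation order, so this is illustrative only.

```python
import numpy as np

def fit_affine(measured, control):
    """Least-squares affine mapping of measured (x, y) points onto control points.

    Returns the 2 x 3 transform matrix and the RMS residual in control-point units."""
    measured = np.asarray(measured, dtype=float)
    control = np.asarray(control, dtype=float)
    A = np.hstack([measured, np.ones((len(measured), 1))])   # rows are [x y 1]
    coeffs, *_ = np.linalg.lstsq(A, control, rcond=None)     # solves A @ C ~= control
    residuals = control - A @ coeffs
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return coeffs.T, rms

# hypothetical usage:
# transform, rms = fit_affine(image_points, map_points)
```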
TM digital image products for applications. [computer compatible tapes
NASA Technical Reports Server (NTRS)
Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.
1984-01-01
The image characteristics of digital data generated by the LANDSAT 4 Thematic Mapper (TM) are discussed. Digital data from the TM reside in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of 5 for a computer-compatible tape CCT-BT, or by a factor of 4 for a CCT-AT and CCT-PT; each product has a different file format. Nominal geometric corrections which provide proper geodetic relationships between different parts of the image are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty-bin phenomenon in CCT-AT images results from integer truncation of mixed-mode arithmetic operations.
Manufacturing of ArF chromeless hard shifter for 65-nm technology
NASA Astrophysics Data System (ADS)
Park, Keun-Taek; Dieu, Laurent; Hughes, Greg P.; Green, Kent G.; Croffie, Ebo H.; Taravade, Kunal N.
2003-12-01
For logic design, the chrome-less phase-shift mask is one of the possible solutions for defining small geometries with a low MEF (mask enhancement factor) at the 65nm node. There have been many dedicated studies on PCO (Phase Chrome Off-axis) mask technology, and several design approaches have been proposed, including a grating background and chrome patches (or chrome shields), for applying PCO to line/space and contact patterns. In this paper, we studied the feasibility of the grating design for line and contact patterns. The design of the grating pattern was derived from the EM simulation software (TEMPEST) and aerial image simulation software. AIMS measurements with high-NA annular illumination were performed. Resist images of the designed patterns were taken at different focus settings. Simulations and AIMS measurements are compared to verify the consistency of the process with wafer-printed performance.
Text-image alignment for historical handwritten documents
NASA Astrophysics Data System (ADS)
Zinger, S.; Nerbonne, J.; Schomaker, L.
2009-01-01
We describe our work on text-image alignment in the context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set - images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines with their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting serves as a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take the relative word length into account, we define expressions for the cost function that has to be minimized when aligning text words with their images. We apply right-to-left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
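As an illustration of the relative-length idea described above, the sketch below scores a candidate alignment by comparing each word's share of the transcription (in characters) with its segment's share of the line image (in pixels). This is a minimal sketch under a one-to-one word/segment assumption; the function name, inputs and numbers are illustrative, not the authors' implementation.

```python
# A minimal sketch of a relative-length alignment cost: each transcribed word
# should occupy a fraction of the handwritten line proportional to its
# character count. Segment widths are hypothetical inputs (e.g. candidate
# gaps between ink blobs found by the line segmentation).

def relative_length_cost(words, segment_widths):
    """Cost of aligning transcription words to image segments (1-to-1)."""
    assert len(words) == len(segment_widths)
    total_chars = sum(len(w) for w in words)
    total_width = sum(segment_widths)
    cost = 0.0
    for word, width in zip(words, segment_widths):
        text_frac = len(word) / total_chars      # expected share of the line
        image_frac = width / total_width         # observed share of the line
        cost += (text_frac - image_frac) ** 2    # penalise mismatched proportions
    return cost

# Usage: among candidate segmentations of the line image, keep the one with
# the lowest cost (e.g. by exhaustive search over candidate gap subsets).
print(relative_length_cost(["Anno", "Domini", "1584"], [52, 90, 58]))
```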
NASA Astrophysics Data System (ADS)
Jamlongkul, P.; Wannawichian, S.
2017-12-01
The Earth's aurora in the low-latitude region was studied via time variations of oxygen emission spectra together with simultaneous solar wind data. The behavior of the spectrum intensity, in correspondence with solar wind conditions, could be a trace of the aurora at low latitudes, including some effects of highly energetic auroral particles. Oxygen emission spectral lines were observed with the Medium Resolution Echelle Spectrograph (MRES) on the 2.4-m diameter telescope at the Thai National Observatory, Inthanon Mountain, Chiang Mai, Thailand, during 1-5 LT on 5 and 6 February 2017. The observed spectral lines were calibrated with the Dech95 2D image processing program and the Dech-Fits spectra processing program, for spectrum image processing and spectrum wavelength calibration, respectively. The variations of the observed intensities on each day were compared with solar wind parameters, namely the magnitude of the IMF (|B_IMF|), the IMF in RTN coordinates (B_R, B_T, B_N), the ion density (ρ), the plasma flow pressure (P), and the speed (v). The correlation coefficients between the oxygen spectral emissions and the different solar wind parameters showed both positive and negative behaviors.
NASA Astrophysics Data System (ADS)
Zhang, Jingqiong; Zhang, Wenbiao; He, Yuting; Yan, Yong
2016-11-01
The amount of coke deposition on catalyst pellets is one of the most important indexes of catalytic property and service life. As a result, it is essential to measure this and analyze the active state of the catalysts during a continuous production process. This paper proposes a new method to predict the amount of coke deposition on catalyst pellets based on image analysis and soft computing. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After imaging processing and feature extraction, twelve effective features are selected and two best feature sets are determined by the prediction tests. A neural network optimized by a particle swarm optimization algorithm is used to establish the prediction model of the coke amount based on various datasets. The root mean square error of the prediction values are all below 0.021 and the coefficient of determination R 2, for the model, are all above 78.71%. Therefore, a feasible, effective and precise method is demonstrated, which may be applied to realize the real-time measurement of coke deposition based on on-line sampling and fast image analysis.
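To make the prediction stage concrete, the sketch below fits a small neural-network regressor on image-derived features and reports RMSE and R². It is only a sketch under stated assumptions: scikit-learn's MLPRegressor stands in for the paper's particle-swarm-optimised network, and the feature matrix and target values are synthetic, not the authors' dataset.

```python
# A minimal sketch of coke-amount regression from 12 image features per sample.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.random((60, 12))                                   # 12 image features per pellet sample
y = 0.3 * X[:, 0] + 0.1 * X[:, 5] + 0.02 * rng.standard_normal(60)   # synthetic coke amount

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:40], y[:40])                                  # train on part of the samples
pred = model.predict(X[40:])                               # predict coke amount for the rest

print("RMSE:", mean_squared_error(y[40:], pred) ** 0.5)
print("R^2 :", r2_score(y[40:], pred))
```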
Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition
NASA Astrophysics Data System (ADS)
Meckley, John R.
1995-09-01
The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computer-aided tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform for a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then by convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. The response area is defined to contain 95% of the total wavelet coefficient energy. The detection algorithm provides an estimate of the line offset, orientation, and length that is then used to index the appropriate filter shape. Additional wavelet pyramid decomposition is performed in areas of high energy to refine the line position estimate. After filtering, the new Radon transform is generated by inverting the wavelet pyramid. The Radon transform is then inverted by filtered backprojection to produce the final 2D signal estimate with the enhanced linear features. The wavelet-based method is compared to both the Fourier and the nonlinear filtering with examples of sparse and dense shapes in imaging, acoustics and medical tomography with test images of noisy concentric lines, a real spectrogram of a blow fish (a very nonstationary spectrum), and the Shepp Logan Computer Tomography phantom image. Both qualitative and derived quantitative measures demonstrate the improvement of wavelet-based filtering. Additional research is suggested based on these results. Open questions include what level(s) to use for detection and filtering because multiple-level representations exist. The lower levels are smoother at reduced spatial resolution, while the higher levels provide better response to edges. Several examples are discussed based on analytical and phenomenological arguments.
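The core mechanism in the abstract above, that a line concentrates energy into a localised region of the Radon domain which can be filtered before inversion, can be sketched in a few lines. The example below is a simplified stand-in: it uses a percentile threshold in place of the paper's wavelet-pyramid matched filters, assumes a recent scikit-image (the `radon`/`iradon` functions with `filter_name`), and the image and threshold are illustrative.

```python
# A minimal sketch: enhance a noisy line by keeping only the strongest Radon
# responses and inverting by filtered backprojection.
import numpy as np
from skimage.transform import radon, iradon

image = np.zeros((128, 128))
image[40, 20:110] = 1.0                                     # a faint horizontal line
image += 0.5 * np.random.default_rng(1).standard_normal(image.shape)   # additive noise

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta, circle=False)          # line -> localised peak in (rho, theta)

# Keep only the strongest Radon responses (a crude stand-in for line-shaped
# filters built from Haar/Daubechies wavelet coefficients).
mask = sinogram > np.percentile(sinogram, 99.5)
filtered = np.where(mask, sinogram, 0.0)

enhanced = iradon(filtered, theta=theta, circle=False, filter_name="ramp")
print(enhanced.shape)                                       # reconstruction with the enhanced line
```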
A novel double patterning approach for 30nm dense holes
NASA Astrophysics Data System (ADS)
Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven
2011-04-01
Double Patterning Technology (DPT) was commonly accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. A novel process of double-patterned pillars combined with image reverse is proposed for the realization of low-cost dense holes in the 30nm DRAM node. Pillar-formation lithography provides much better optical contrast than the counterpart hole patterning with similar CD requirements. With a reliable freezing process, double-patterned pillars can be readily implemented. A novel image reverse process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of different image reverse processes are investigated and discussed with their pros and cons. An economic approach with optimized lithography performance is proposed for application to the 30nm DRAM node.
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
Community Tools for Cartographic and Photogrammetric Processing of Mars Express HRSC Images
NASA Astrophysics Data System (ADS)
Kirk, R. L.; Howington-Kraus, E.; Edmundson, K.; Redding, B.; Galuszka, D.; Hare, T.; Gwinner, K.
2017-07-01
The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged 77 % of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e. g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995) which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs. Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area and feature based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA by using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET, is a commercial product. 
By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. We are also working with BAE to release the CSM source code under a BSD or MIT open source license in early 2018. We illustrate current HRSC processing capabilities with three examples, of which the first two come from the DTM comparison of 2007. Candor Chasma (h1235_0001) was a near-periapse observation with constant exposure time that could be processed relatively easily at that time. We show qualitative and quantitative improvements in DTM resolution and precision as well as greatly reduced need for manual editing, and illustrate some of the photometric applications possible in ISIS. At the Nanedi Valles site we are now able to process all 3 long-arc orbits (h0894_0000, h0905_0000 and h0927_0000) without segmenting the images. Finally, processing image set h4235_0001, which covers the landing site of the Mars Science Laboratory (MSL) rover and its rugged science target of Aeolis Mons in Gale crater, provides a rare opportunity to evaluate DTM resolution and precision because extensive High Resolution Imaging Science Experiment (HiRISE) DTMs are available (Golombek et al. 2012). The HiRISE products have 50x smaller pixel scale so that discrepancies can mostly be attributed to HRSC. We use the HiRISE DTMs to compare the resolution and precision of our HRSC DTMs with the (evolving) standard products. We find that the vertical precision of HRSC DTMs is comparable to the pixel scale but the horizontal resolution may be 15-30 image pixels, depending on processing. This is significantly coarser than the lower limit of 3-5 pixels based on the minimum size for image patches to be matched. Stereo DTMs registered to MOLA altimetry by surface fitting typically deviate by 10 m or less in mean elevation. Estimates of the RMS deviation are strongly influenced by the sparse sampling of the altimetry, but range from
Hsieh, K S; Lin, C C; Liu, W S; Chen, F L
1996-01-01
Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Further attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far, only very few studies have displayed the three-dimensional anatomy of the heart through two-dimensional image acquisition because of the complex procedures involved. This study introduces a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in our Echo Laboratory from about 3000 completed Echo examinations. Each image was acquired on-line with a specially designed high-resolution image grabber using EKG and respiratory gating. Off-line image processing using a window-architecture interactive software package includes conversion of 2D echocardiographic pixels to 3D "voxels" with transformation from an orthogonal to a rotatory axial system, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" was recorded on video tape through a video interface. The potential application of 3D display reconstructed from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, and clear-cut display of the various congenital cardiac defects and valvular stenosis could be demonstrated. Reinforcement of current techniques will expand future application of 3D display of conventional 2D images.
Producing a Linear Laser System for 3D Modeling of Small Objects
NASA Astrophysics Data System (ADS)
Amini, A. Sh.; Mozaffar, M. H.
2012-07-01
Today, three-dimensional modeling of objects is used in many applications such as documentation of ancient heritage, quality control, reverse engineering and animation. In this regard, there is a variety of methods for producing three-dimensional models. In this paper, a 3D modeling system based on the photogrammetric method is developed, using image processing and laser line extraction from images. In this method a laser line is projected onto the body of the object; by acquiring video images and extracting the laser line from the frames, the three-dimensional coordinates of the object can be obtained. First, the hardware, including the camera and laser system, was designed and implemented. Afterwards, the system was calibrated. Finally, the system software for three-dimensional data extraction was implemented. The system was tested by modeling a number of objects. The results showed that the system can provide benefits such as low cost, appropriate speed and acceptable accuracy in 3D modeling of objects.
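The laser-line extraction step mentioned above can be illustrated with a very small sketch: for each image column, take the brightest row as the position of the projected stripe. This is a simplified assumption about how the extraction works (the paper does not give its exact algorithm); the function name, threshold and synthetic frame are ours, and the triangulation into 3D coordinates from the calibrated camera/laser geometry is omitted.

```python
# A minimal sketch of per-column laser-line extraction from one video frame.
import numpy as np

def extract_laser_line(frame, min_intensity=50):
    """Return, per column, the row index of the laser stripe (or -1 if absent)."""
    rows = np.argmax(frame, axis=0)                       # brightest row in each column
    peaks = frame[rows, np.arange(frame.shape[1])]
    return np.where(peaks >= min_intensity, rows, -1)     # reject columns with no stripe

# Usage with a synthetic frame containing a bright diagonal stripe.
frame = np.zeros((100, 200), dtype=np.uint8)
cols = np.arange(200)
frame[(cols // 4) + 20, cols] = 255
print(extract_laser_line(frame)[:10])
```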
Autonomous target tracking of UAVs based on low-power neural network hardware
NASA Astrophysics Data System (ADS)
Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe
2014-05-01
Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and its superior performance and power advantages for real-time, autonomous target tracking.
Chosen results of field tests of synthetic aperture radar system installed on board UAV
NASA Astrophysics Data System (ADS)
Kaniewski, Piotr; Komorniczak, Wojciech; Lesnik, Czeslaw; Cyrek, Jacek; Serafin, Piotr; Labowski, Michal; Wajszczyk, Bronislaw
2017-04-01
The paper presents summary information on a UAV-based radar terrain imaging system, its purpose, structure and working principle, as well as terrain images obtained from flight experiments. A SAR technology demonstrator has been built as a result of a research project named WATSAR, conducted by the Military University of Technology and WB Electronics S.A. The developed system allows high-resolution radar images to be obtained, both in on-line and off-line modes, independently of the light conditions over the observed area. The software developed for the system allows the geographic coordinates of the imaged objects to be determined with high accuracy. Four LFM-CW radar sensors were built during the project: two for S band and two for Ku band, working with different signal bandwidths. Acquired signals were processed with the TDC algorithm, which allowed a number of analyses to be carried out in order to evaluate the performance of the system. The impact of the navigational corrections on SAR image quality was assessed as well. The research methodology of the in-flight experiments of the system is presented in the paper. The project results show that the developed system may be implemented as an aid to tactical C4ISR systems.
Khorasani, Milad; Amigo, José M; Sun, Changquan Calvin; Bertelsen, Poul; Rantanen, Jukka
2015-06-01
In the present study, the application of near-infrared chemical imaging (NIR-CI) supported by chemometric modeling as a non-destructive tool for monitoring and assessing the roller compaction and tableting processes was investigated. Based on a preliminary risk assessment, discussions with experts and current work from the literature, the critical process parameters (roll pressure and roll speed) and critical quality attributes (ribbon porosity, granule size, amount of fines, tablet tensile strength) were identified and a design space was established. Five experimental runs with different process settings were carried out, which yielded intermediates (ribbons, granules) and final products (tablets) with different properties. A principal component analysis (PCA) based model of the NIR images was applied to map the ribbon porosity distribution. The ribbon porosity distribution gained from the PCA-based NIR-CI was used to develop predictive models for granule size fractions. Predictive methods with acceptable R² values could be used to predict the granule particle size. A partial least squares regression (PLS-R) based model of the NIR-CI was used to map and predict the chemical distribution and content of the active compound for both roller-compacted ribbons and the corresponding tablets. In order to select the optimal process setting, the standard deviations of tablet tensile strength and tablet weight for each tablet batch were considered. Strong linear correlations were established between tablet tensile strength and the amount of fines and the granule size, respectively. These approaches are considered to have a potentially large impact on quality monitoring and control of continuously operating manufacturing lines, such as roller compaction and tableting processes. Copyright © 2015 Elsevier B.V. All rights reserved.
Confocal non-line-of-sight imaging based on the light-cone transform
NASA Astrophysics Data System (ADS)
O’Toole, Matthew; Lindell, David B.; Wetzstein, Gordon
2018-03-01
How to image objects that are hidden from a camera’s view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
Magnetic Resonance Imaging Studies of Process Rheology
1990-08-14
(Only fragmentary front-matter text of this report is recoverable: NMR flow imaging studies of mixing in a 50.8 mm fully intermeshing, co-rotating twin-screw extruder run at high production rates, theoretical modeling, a remark on the limitations of off-line quality-control techniques, and a Stokesian dynamics simulation of polyether-coated particles.)
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk-tissue-motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
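The speckle-variance calculation itself is simple enough to sketch: the per-pixel variance across a small set of registered B-scans highlights moving scatterers (blood) against static tissue. The sketch below shows only that arithmetic; the GPU pipeline, FFT processing and subpixel registration from the paper are not reproduced, and the array sizes are illustrative.

```python
# A minimal sketch of the speckle-variance (SV) computation over N registered frames.
import numpy as np

def speckle_variance(frames):
    """frames -- array of shape (N, rows, cols): N registered structural B-scans."""
    return np.var(frames.astype(np.float64), axis=0)   # per-pixel interframe variance

# Usage: 4 frames per position (n = 4 in the paper), 512 x 512 pixels.
rng = np.random.default_rng(0)
frames = rng.random((4, 512, 512))
sv_image = speckle_variance(frames)
print(sv_image.shape)
```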
Hall, Elise M.; Thurow, Brian S.; Guildenbecher, Daniel R.
2016-08-08
Digital in-line holography (DIH) and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with DIH. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and DIH successfully quantify the 3D nature of these particle fields. Furthermore, this includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1–2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. In contrast, plenoptic imaging allows for a simpler experimental configuration and, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments.
Pulse-coupled neural network implementation in FPGA
NASA Astrophysics Data System (ADS)
Waldemark, Joakim T. A.; Lindblad, Thomas; Lindsey, Clark S.; Waldemark, Karina E.; Oberg, Johnny; Millberg, Mikael
1998-03-01
Pulse Coupled Neural Networks (PCNN) are biologically inspired neural networks, mainly based on studies of the visual cortex of small mammals. The PCNN is very well suited as a pre-processor for image processing, particularly in connection with object isolation, edge detection and segmentation. Several implementations of PCNN on von Neumann computers, as well as on special parallel processing hardware devices (e.g. SIMD), exist. However, these implementations are not as flexible as required for many applications. Here we present an implementation in Field Programmable Gate Arrays (FPGA) together with a performance analysis. The FPGA hardware implementation may be considered a platform for further, extended implementations and easily expanded into various applications. The latter may include advanced on-line image analysis with close to real-time performance.
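For readers unfamiliar with the PCNN update rule, the sketch below runs a few iterations of a commonly used simplified PCNN in software (not the FPGA implementation described above). The parameter values and the random test image are illustrative only; the feeding, linking, internal-activity and dynamic-threshold updates follow the standard textbook form.

```python
# A minimal software sketch of a simplified PCNN iteration on a grayscale image S.
import numpy as np
from scipy.ndimage import convolve

def pcnn(S, steps=10, beta=0.2, aF=0.1, aL=1.0, aT=0.5, vF=0.5, vL=0.2, vT=20.0):
    S = S / S.max()                                     # normalise the stimulus
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    for _ in range(steps):
        F = np.exp(-aF) * F + vF * convolve(Y, W) + S   # feeding input
        L = np.exp(-aL) * L + vL * convolve(Y, W)       # linking input
        U = F * (1.0 + beta * L)                        # internal activity
        Y = (U > T).astype(float)                       # pulse output (segmentation map)
        T = np.exp(-aT) * T + vT * Y                    # dynamic threshold
    return Y

img = np.random.default_rng(0).random((64, 64))
print(pcnn(img).sum())                                  # number of firing pixels in the last step
```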
NASA Astrophysics Data System (ADS)
Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe
2015-02-01
For image fusion in remote sensing applications the georeferencing accuracy using position, attitude, and camera calibration measurements can be insufficient. Thus, image processing techniques should be employed for precise coregistration of images. In this article a method for multimodal object-based image coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from HSI and DSM, respectively. We refer to urban objects as objects which are typical in urban environments and focus on buildings by describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and for the determination of transformation parameters. Hence, a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution, and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to the reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and the quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for coregistration of HSI and DSM. The extraction of building outlines in comparison to the line cue extraction provides a larger amount of assigned lines between the images and is more robust to outliers, i.e. false matches.
RVC-CAL library for endmember and abundance estimation in hyperspectral image analysis
NASA Astrophysics Data System (ADS)
Lazcano López, R.; Madroñal Quintín, D.; Juárez Martínez, E.; Sanz Álvaro, C.
2015-10-01
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages - such as high spectral resolution - led to its application in other fields, such as cancer detection. However, this new field imposes specific requirements; for instance, strict timing specifications must be met, since all the potential applications - like surgical guidance or in vivo tumor detection - imply real-time requirements. Achieving these time requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, some new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization. Along that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreading compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared to existing hyperspectral image analysis software; specifically, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds in execution time. The results also reveal some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study the system performance.
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
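The factored forward projection described above can be written as a product of three sparse matrices applied in sequence. The sketch below shows this composition with scipy.sparse on a toy problem; the matrices, their contents and the dimensions are tiny illustrative stand-ins, not a real scanner model or the authors' GPU implementation.

```python
# A minimal sketch of y = B_sino @ G @ B_img @ x with sparse factors.
import numpy as np
import scipy.sparse as sp

n_pix, n_lor = 16, 24
rng = np.random.default_rng(0)

G = sp.random(n_lor, n_pix, density=0.1, random_state=0, format="csr")   # geometric (line-integral) projector
B_img = sp.identity(n_pix, format="csr") * 0.9 \
        + sp.random(n_pix, n_pix, density=0.05, random_state=1, format="csr")  # image blurring (LOR compensation)
B_sino = sp.identity(n_lor, format="csr")                                # sinogram blur (detector response; none here)

x = rng.random(n_pix)                          # image vector
y = B_sino @ (G @ (B_img @ x))                 # forward projection through the three factors
backproj = B_img.T @ (G.T @ (B_sino.T @ y))    # matched back projection
print(y.shape, backproj.shape)
```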
Wave Phase-Sensitive Transformation of 3d-Straining of Mechanical Fields
NASA Astrophysics Data System (ADS)
Smirnov, I. N.; Speranskiy, A. A.
2015-11-01
This work belongs to the area of research on oscillatory processes in elastic mechanical systems. The technical result of the innovation is the creation of a spectral set of multidimensional images which reflect time-correlated three-dimensional vector parameters of the measured and/or estimated and/or design parameters of oscillations in mechanical systems. Reconstructed images of different dimensionality, integrated in various combinations depending on their objective function, can be used as a homeostatic profile or cybernetic image of oscillatory processes in mechanical systems for an objective estimation of current operational conditions in real time. The innovation can be widely used to enhance the efficiency of monitoring and research of oscillation processes in mechanical systems (objects) in construction, mechanical engineering, acoustics, etc. The concept and method of vector vibrometry, based on the application of vector 3D phase-sensitive vibro-transducers, permit a unique evaluation of the real stressed-strained states of power aggregates and loaded constructions and open fundamental innovation opportunities: continuous (on-line) reliable monitoring of turbo-aggregates of electrical machines, compressor installations, bases, supports, pipelines and other objects subjected to the damaging effect of vibrations; control of the operational safety of technical systems at all stages of the life cycle, including design, test production, tuning, testing, operational use, repairs and resource extension; and the creation of vibro-diagnostic systems for reliable non-destructive control of the anisotropic material-resistance characteristics of power aggregates and loaded constructions under external effects and operational flaws. The described technology is revolutionary, universal and common for all branches of the engineering industry and construction objects.
NASA Astrophysics Data System (ADS)
Moshavegh, Ramin; Hansen, Kristoffer Lindskov; Møller Sørensen, Hasse; Hemmsen, Martin Christian; Ewertsen, Caroline; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-04-01
This paper presents a novel automatic method for detection of B-lines (comet-tail artifacts) in lung ultrasound scans. B-lines are the artifacts most commonly used for analyzing pulmonary edema. They appear as laser-like vertical beams, which arise from the pleural line and spread down without fading to the edge of the screen. An increase in their number is associated with the presence of edema. All the scans used in this study were acquired using a BK3000 ultrasound scanner (BK Ultrasound, Denmark) driving a 192-element 5.5 MHz wide linear transducer (10L2W, BK Ultrasound). The dynamic received focus technique was employed to generate the sequences. Six subjects, three patients after major surgery and three normal subjects, were each scanned once, and six ultrasound sequences each containing 50 frames were acquired. The proposed algorithm was applied to all 300 in-vivo lung ultrasound images. The pleural line is first segmented in each image, and then the B-line artifacts spreading down from the pleural line are detected and overlaid on the image. The resulting 300 images showed that the mean lateral distance between B-lines detected in images acquired from patients decreased by 20% compared with that of normal subjects. Therefore, the method can serve as the basis of a method for automatically and qualitatively characterizing the distribution of B-lines.
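The two stages described above, locating the pleural line and then flagging laser-like vertical artifacts below it, can be caricatured with a very simple intensity-based sketch. This is not the authors' algorithm: the pleural line is taken as the brightest row, B-line columns as columns whose mean intensity below that row stays high, and the thresholds and synthetic image are illustrative.

```python
# A minimal sketch of pleural-line localisation and B-line column flagging.
import numpy as np

def detect_b_lines(img, column_thresh=0.6):
    row_profile = img.mean(axis=1)
    pleural_row = int(np.argmax(row_profile))            # brightest horizontal band
    below = img[pleural_row + 1:, :]
    col_profile = below.mean(axis=0) / (img.max() + 1e-9)
    b_line_cols = np.where(col_profile > column_thresh)[0]
    return pleural_row, b_line_cols

rng = np.random.default_rng(0)
img = 0.1 * rng.random((200, 160))
img[40, :] += 1.0                     # synthetic pleural line
img[41:, 80] += 0.9                   # one vertical artifact that does not fade
print(detect_b_lines(img))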
Processing the Viking lander camera data
NASA Technical Reports Server (NTRS)
Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.
1977-01-01
Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.
Recombination imaging of III-V solar cells
NASA Technical Reports Server (NTRS)
Virshup, G. F.
1987-01-01
An imaging technique based on the radiative recombination of minority carriers in forward-biased solar cells has been developed for characterization of III-V solar cells. When used in mapping whole wafers, it has helped identify three independent loss mechanisms (broken grid lines, shorting defects, and direct-to-indirect bandgap transitions), all of which resulted in lower efficiencies. The imaging has also led to improvements in processing techniques to reduce the occurrence of broken gridlines as well as surface defects. The ability to visualize current mechanisms in solar cells is an intuitive tool which is powerful in its simplicity.
NASA Astrophysics Data System (ADS)
Gao, Lingyu; Li, Xinghua; Guo, Qianrui; Quan, Jing; Hu, Zhengyue; Su, Zhikun; Zhang, Dong; Liu, Peilu; Li, Haopeng
2018-01-01
The internal structure of an off-axis three-mirror system is typically complex. Mirror installation errors during assembly affect the imaging line-of-sight and further degrade the image quality. Owing to the complexity of the optical path in an off-axis three-mirror optical system, a straightforward theoretical analysis of the variations of the imaging line-of-sight is extremely difficult. In order to simplify the theoretical analysis, an equivalent single-mirror system is proposed and presented in this paper. In addition, a mathematical model of the single-mirror system is established and accurate expressions for the imaging coordinates are derived. Using the simulation software ZEMAX, both an off-axis three-mirror model and a single-mirror model are established. By adjusting the position of the mirror and simulating the line-of-sight rotation of the optical system, the variations of the imaging coordinates are clearly observed. The final simulation results are as follows: in the off-axis three-mirror system, the sensitivity of the imaging coordinate to the rotation of the line-of-sight is approximately 30 µm/arcsec; in the single-mirror system, it is 31.5 µm/arcsec. Compared to the simulation results of the off-axis three-mirror model, the 5% relative error of the single-mirror model analysis satisfies the requirement of an equivalent analysis and verifies its validity. This paper presents a new method to analyze how mirror installation errors in an off-axis three-mirror system influence the imaging line-of-sight. Moreover, the off-axis three-mirror model is fully equivalent to the single-mirror model for theoretical analysis.
Advanced metrology by offline SEM data processing
NASA Astrophysics Data System (ADS)
Lakcher, Amine; Schneider, Loïc.; Le-Gratiet, Bertrand; Ducoté, Julien; Farys, Vincent; Besacier, Maxime
2017-06-01
Today's technology nodes contain more and more complex designs, bringing increasing challenges to chip manufacturing process steps. Efficient metrology is necessary to assess the process variability of these complex patterns and thus extract relevant data to generate process-aware design rules and to improve OPC models. Today, process variability is mostly addressed through the analysis of in-line monitoring features, which are often designed to support robust measurements and as a consequence are not always very representative of critical design rules. CD-SEM is the main CD metrology technique used in the chip manufacturing process, but it is challenged when it comes to measuring metrics like tip-to-tip, tip-to-line, areas or necking in high quantity and with robustness. CD-SEM images contain a lot of information that is not always used in metrology. Suppliers have provided tools that allow engineers to extract the SEM contours of their features and to convert them into a GDS. Contours can be seen as the signature of the shape, as they contain all the dimensional data. Thus the methodology is to use the CD-SEM to take high-quality images, then generate SEM contours and create a database out of them. The contours are used to feed an offline metrology tool that processes them to extract different metrics. It was shown in two previous papers that it is possible to perform complex measurements on hotspots at different process steps (lithography, etch, copper CMP) by using SEM contours with an in-house offline metrology tool. In the current paper, the methodology presented previously is expanded to improve its robustness and combined with the use of phylogeny to classify the SEM images according to their geometric proximity.
Design of a Sensor System for On-Line Monitoring of Contact Pressure in Chalcographic Printing.
Jiménez, José Antonio; Meca, Francisco Javier; Santiso, Enrique; Martín, Pedro
2017-09-05
Chalcographic printer is the name given to a specific type of press which is used to transfer the printing of a metal-based engraved plate onto paper. The printing system consists of two rollers for pressing and carrying a metal plate onto which an engraved inked plate is placed. When the driving mechanism is operated, the pressure exerted by the rollers, also called contact pressure, allows the engraved image to be transferred into paper, thereby obtaining the final image. With the aim of ensuring the quality of the result, in terms of good and even transfer of ink, the contact pressure must be uniform. Nowadays, the strategies utilized to measure the pressure are implemented off-line, i.e., when the press machines are shut down for maintenance, which poses limitations. This paper proposes a novel sensor system aimed at monitoring the pressure exerted by the rollers on the engraved plate while chalcographic printer is operating, i.e., on-line. The purpose is two-fold: firstly, real-time monitoring reduces the number of breakdown repairs required, reduces machine downtime and reduces the number of low-quality engravings, which increases productivity and revenues; and secondly, the on-line monitoring and register of the process parameters allows the printing process to be reproducible even with changes in the environmental conditions or other factors such as the wear of the parts that constitute the mechanical system and a change in the dimensions of the printing materials. The proposed system consists of a strain gauge-based load cell and conditioning electronics to sense and treat the signals.
NASA Technical Reports Server (NTRS)
Russell, C. T.
1978-01-01
Methods of timing magnetic substorms, the rapid fluctuations of aurorae, electromagnetic and electrostatic instabilities observed on the field lines of aurorae, the auroral microstructure, and the relationship of currents, electric field and particle precipitation to auroral form are discussed. Attention is given to such topics as D-perturbations as an indicator of substorm onset, the role of the magnetotail in substorms, spectral information derived from imaging data on aurorae, terrestrial kilometric radiation, and the importance of the mirror force in self-consistent models of particle fluxes, currents and potentials on auroral field lines.
Statistical mechanics of image processing by digital halftoning
NASA Astrophysics Data System (ADS)
Inoue, Jun-Ichi; Norimatsu, Wataru; Saika, Yohei; Okada, Masato
2009-03-01
We consider the problem of digital halftoning (DH). DH is an image-processing technique that represents each grayscale image in terms of black and white dots, and it is achieved by making use of a threshold dither mask: each pixel is set to black if its grayscale value is greater than or equal to the mask value, and to white otherwise. To determine the mask for a given grayscale image, we assume that human eyes recognize the BW dots as the corresponding grayscale through linear filters. The Hamiltonian is then constructed as a distance between the original and recognized images, written in terms of the mask. Finding the ground state of the Hamiltonian via deterministic annealing, we obtain the optimal mask and the BW dots simultaneously. From the spectrum analysis, we find that the BW dots are desirable from the viewpoint of the modulation properties of human vision. We also show that the lower bound of the mean square error for the inverse process of the DH is minimized on the Nishimori line, which is well known in the research field of spin glasses.
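The thresholding rule stated above can be illustrated with ordinary ordered dithering. The sketch below uses a fixed 4x4 Bayer mask purely for illustration, whereas the paper optimises the mask by annealing the Hamiltonian built from the eye-filtered halftone; the convention (black when the grayscale value meets or exceeds the mask value) follows the abstract.

```python
# A minimal sketch of threshold-mask (ordered) dithering with a fixed Bayer mask.
import numpy as np

bayer4 = (1.0 / 17.0) * np.array([[ 1,  9,  3, 11],
                                  [13,  5, 15,  7],
                                  [ 4, 12,  2, 10],
                                  [16,  8, 14,  6]])

def halftone(gray):
    """gray -- 2D array with values in [0, 1]; returns 1 for black, 0 for white."""
    h, w = gray.shape
    mask = np.tile(bayer4, (h // 4 + 1, w // 4 + 1))[:h, :w]   # tile the mask over the image
    return (gray >= mask).astype(np.uint8)                     # black where value >= mask value

gradient = np.tile(np.linspace(0, 1, 64), (16, 1))             # simple test ramp
print(halftone(gradient).mean())                               # fraction of black dots
```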
Poka Yoke system based on image analysis and object recognition
NASA Astrophysics Data System (ADS)
Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.
2015-11-01
Poka Yoke is a quality management method aimed at preventing faults from arising during production processes. It deals with "fail-safing" or "mistake-proofing". The Poka Yoke concept was created and developed by Shigeo Shingo for the Toyota Production System. Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process involves a higher cost than the cost of disposal. Usually, Poka Yoke solutions are based on multiple sensors that identify nonconformities, which means placing additional equipment (mechanical, electronic) on the production line. As a consequence, and because the method itself is invasive and affects the production process, the cost of diagnostics increases, and the machines by means of which a Poka Yoke system can be implemented become bulky and more sophisticated. In this paper we propose a solution for a Poka Yoke system based on image analysis and identification of faults. The solution consists of a module for image acquisition, mid-level processing, and an object recognition module using an associative memory (of the Hopfield network type). All are integrated into an embedded system with an AD (analog-to-digital) converter and a Zynq 7000 device (22 nm technology).
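To make the Hopfield-type associative memory concrete, the sketch below stores a few binary reference patterns with the Hebbian outer-product rule and recalls the closest one from a corrupted observation. It is a generic Hopfield sketch, not the paper's embedded implementation; the pattern size, contents and update schedule are illustrative.

```python
# A minimal sketch of a Hopfield associative memory for pattern recognition.
import numpy as np

def train_hopfield(patterns):
    n = patterns.shape[1]
    W = (patterns.T @ patterns).astype(float) / n   # Hebbian outer-product rule
    np.fill_diagonal(W, 0.0)                        # no self-connections
    return W

def recall(W, x, steps=5):
    for _ in range(steps):
        x = np.sign(W @ x)                          # synchronous update
        x[x == 0] = 1
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))        # three stored reference objects
W = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[:8] *= -1                                     # corrupt part of the observation
recovered = recall(W, noisy)
print(int((recovered == patterns[0]).sum()), "of 64 bits match the stored pattern")
```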
Power spectrum weighted edge analysis for straight edge detection in images
NASA Astrophysics Data System (ADS)
Karvir, Hrishikesh V.; Skipper, Julie A.
2007-04-01
Most man-made objects provide characteristic straight line edges and, therefore, edge extraction is a commonly used target detection tool. However, noisy images often yield broken edges that lead to missed detections, and extraneous edges that may contribute to false target detections. We present a sliding-block approach for target detection using weighted power spectral analysis. In general, straight line edges appearing at a given frequency are represented as a peak in the Fourier domain at a radius corresponding to that frequency, and a direction corresponding to the orientation of the edges in the spatial domain. Knowing the edge width and spacing between the edges, a band-pass filter is designed to extract the Fourier peaks corresponding to the target edges and suppress image noise. These peaks are then detected by amplitude thresholding. The frequency band width and the subsequent spatial filter mask size are variable parameters to facilitate detection of target objects of different sizes under known imaging geometries. Many military objects, such as trucks, tanks and missile launchers, produce definite signatures with parallel lines and the algorithm proves to be ideal for detecting such objects. Moreover, shadow-casting objects generally provide sharp edges and are readily detected. The block operation procedure offers advantages of significant reduction in noise influence, improved edge detection, faster processing speed and versatility to detect diverse objects of different sizes in the image. With Scud missile launcher replicas as target objects, the method has been successfully tested on terrain board test images under different backgrounds, illumination and imaging geometries with cameras of differing spatial resolution and bit-depth.
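The frequency-domain reasoning above (parallel edges of known spacing produce a peak at a predictable radius and orientation in the power spectrum) can be demonstrated with a short numpy sketch. The block size, band limits and synthetic target below are illustrative choices, not the paper's parameters, and the band-pass step is reduced to a simple annulus mask with a maximum-amplitude test.

```python
# A minimal sketch of band-pass power-spectrum scoring for a sliding block.
import numpy as np

def line_energy(block, r_min=4, r_max=12):
    """Peak band-pass power-spectrum magnitude of a square image block."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(block - block.mean())))
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    band = (r >= r_min) & (r <= r_max)               # annulus at the expected edge frequency
    return spec[band].max()

block = np.zeros((64, 64))
block[:, ::8] = 1.0                                  # parallel vertical lines, 8-pixel spacing
noise = np.random.default_rng(0).random((64, 64))
print(line_energy(block), ">", line_energy(noise))   # target block far exceeds noise block
```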
A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging
NASA Astrophysics Data System (ADS)
Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc
2015-06-01
High-speed X-ray imaging applications play a crucial role for non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to a more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real-time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM, based on time and angular multiplexing of the sample's spatial frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolutions in both the transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow dynamics) samples. Validation of the method is reported for both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cell biosample), showing the generation of high synthetic numerical aperture values working without lenses.
More About The Video Event Trigger
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1996-01-01
Report presents additional information about system described in "Video Event Trigger" (LEW-15076). Digital electronic system processes video-image data to generate trigger signal when image shows significant change, such as motion, or appearance, disappearance, change in color, brightness, or dilation of object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when supposed unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
An AK-LDMeans algorithm based on image clustering
NASA Astrophysics Data System (ADS)
Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan
2018-03-01
Clustering is an effective analytical technique for handling unlabeled data for value mining. Its ultimate goal is to label unclassified data quickly and correctly. We use the roadmap for current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the K value by designing the K-cost fold line, and then uses a long-distance, high-density method to select the clustering centers, replacing the traditional initial clustering center selection method; this further improves the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference value in the fields of image processing, machine vision, and data mining.
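A minimal sketch of the "K-cost fold line" idea as described (automatically locking K at the fold, or elbow, of the within-cluster cost curve) is given below; the scikit-learn KMeans call, the normalization, the maximum-distance-to-chord elbow criterion, and the synthetic data are assumptions and not the exact AK-LDMeans procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def lock_k_by_fold_line(X, k_max=10, seed=0):
    """Pick K at the point of the cost-vs-K curve farthest from its end-to-end chord."""
    ks = np.arange(1, k_max + 1)
    costs = np.array([KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_
                      for k in ks])
    # Normalize both axes so the "fold" is measured on comparable scales.
    kx = (ks - ks[0]) / (ks[-1] - ks[0])
    cy = (costs - costs[-1]) / (costs[0] - costs[-1])
    chord = np.array([kx[-1] - kx[0], cy[-1] - cy[0]])
    rel = np.column_stack([kx - kx[0], cy - cy[0]])
    # Perpendicular distance of each point from the chord joining the endpoints.
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / np.linalg.norm(chord)
    return int(ks[np.argmax(dist)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    centers = np.array([[0.0, 0.0], [6.0, 6.0], [0.0, 8.0]])
    X = np.vstack([c + rng.normal(scale=0.7, size=(100, 2)) for c in centers])
    print("locked K:", lock_k_by_fold_line(X))   # expected to be close to 3
```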
Murayama, Kodai; Ishikawa, Daitaro; Genkawa, Takuma; Sugino, Hiroyuki; Komiyama, Makoto; Ozaki, Yukihiro
2015-03-03
In the present study we have developed a new version (ND-NIRs) of a polychromator-type near-infrared (NIR) spectrometer with a high-resolution photodiode array detector, which we built previously (D-NIRs). The new version has four 5 W halogen lamps compared with the three lamps of the older version. The new version also has a condenser lens with a shorter focal length. The increase in the number of lamps and the shortening of the focal length of the condenser lens realize a high signal-to-noise ratio and high-speed NIR imaging measurement. By using the ND-NIRs we carried out in-line monitoring of pharmaceutical blending and determined an end point of the blending process. Moreover, to determine a more accurate end point, an NIR image of the blending sample was acquired by means of a portable NIR imaging device based on the ND-NIRs. The imaging result has demonstrated that a mixing time of 8 min is sufficient for homogeneous mixing. In this way the present study has demonstrated that the ND-NIRs and the imaging system based on it hold considerable promise for process analysis.
0.35-μm excimer DUV photolithography process
NASA Astrophysics Data System (ADS)
Arugu, Donald O.; Green, Kent G.; Nunan, Peter D.; Terbeek, Marcel; Crank, Sue E.; Ta, Lam; Capsuto, Elliott S.; Sethi, Satyendra S.
1993-08-01
It is becoming increasingly clear that DUV excimer laser based imaging will be one of the technologies for printing sub-half-micron devices. This paper reports the investigation of a 0.35 micrometer photolithography process using chemically amplified DUV resists on an organic anti-reflective coating (ARC). Production data from the GCA XLS excimer DUV tools with a nominal gate width of 0.35 micrometer lines and 0.45 micrometer spaces was studied to demonstrate device production worthiness. This data included electrical yield information for device characterization. Exposure overlay was done by mixing and matching DUV and I-line GCA steppers for critical and non-critical levels, respectively. Working isolated transistors down to 0.2 micrometers have been demonstrated.
NASA Astrophysics Data System (ADS)
Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc
2017-01-01
Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies on a collection of palm leaf manuscript images. The image corpus used in this study comes from sample images of palm leaf manuscripts in three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images are tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for the shredding method. Two other methods that can be directly applied to grayscale images are also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by the ICDAR2013 Handwriting Segmentation Contest were used in this experiment.
Dehazed Image Quality Assessment by Haze-Line Theory
NASA Astrophysics Data System (ADS)
Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai
2017-06-01
Images captured in bad weather suffer from low contrast and faint color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or rate them. In this paper, an indicator of contrast enhancement is proposed based on the recently proposed haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. Using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast obtained in a subjective test on various scenes of dehazed images, and performs better than state-of-the-art metrics.
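As a rough, hedged sketch of the cluster-based idea above (not the exact CC definition), the following Python code clusters the RGB values of an image and reports the mean pairwise distance between cluster centers; the scikit-learn KMeans call, the number of clusters, and the synthetic "hazy" example are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist

def color_contrast_index(rgb_image, n_clusters=50, seed=0):
    """Cluster pixel colors in RGB space and return the mean inter-cluster distance.

    A well-dehazed image should show widely separated color clusters, so a larger
    value indicates stronger contrast/color restoration (illustrative index only).
    """
    pixels = rgb_image.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=seed).fit(pixels)
    return float(pdist(km.cluster_centers_).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clear = rng.integers(0, 256, size=(64, 64, 3))
    hazy = 0.4 * clear + 0.6 * 200.0            # haze pulls colors toward the airlight
    print("clear:", round(color_contrast_index(clear), 1),
          "hazy:", round(color_contrast_index(hazy), 1))
```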
NASA Astrophysics Data System (ADS)
Park, Jonghee; Yoon, Kuk-Jin
2015-02-01
We propose a real-time line matching method for stereo systems. To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.
Hyperspectral Fluorescence and Reflectance Imaging Instrument
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey
2008-01-01
The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal plane detector array. The spectral range of the current imaging spectrometer is between 400 to 1,000 nm, and the wavelength resolution is approximately 3 nm. The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.
CRT image recording evaluation
NASA Technical Reports Server (NTRS)
1971-01-01
Performance capabilities and limitations of a fiber optic coupled line scan CRT image recording system were investigated. The test program evaluated the following components: (1). P31 phosphor CRT with EMA faceplate; (2). P31 phosphor CRT with clear clad faceplate; (3). Type 7743 semi-gloss dry process positive print paper; (4). Type 777 flat finish dry process positive print paper; (5). Type 7842 dry process positive film; and (6). Type 1971 semi-gloss wet process positive print paper. Detailed test procedures used in each test are provided along with a description of each test, the test data, and an analysis of the results.
Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen
2014-04-01
In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
Fundamental performance differences of CMOS and CCD imagers: part V
NASA Astrophysics Data System (ADS)
Janesick, James R.; Elliott, Tom; Andrews, James; Tower, John; Pinter, Jeff
2013-02-01
Previous papers delivered over the last decade have documented developmental progress made on large pixel scientific CMOS imagers that match or surpass CCD performance. New data and discussions presented in this paper include: 1) a new buried channel CCD fabricated on a CMOS process line, 2) new data products generated by high performance custom scientific CMOS 4T/5T/6T PPD pixel imagers, 3) ultimate CTE and speed limits for large pixel CMOS imagers, 4) fabrication and test results of a flight 4k x 4k CMOS imager for NRL's SoloHi Solar Orbiter Mission, 5) a progress report on ultra large stitched Mk x Nk CMOS imager, 6) data generated by on-chip sub-electron CDS signal chain circuitry used in our imagers, 7) CMOS and CMOSCCD proton and electron radiation damage data for dose levels up to 10 Mrd, 8) discussions and data for a new class of PMOS pixel CMOS imagers and 9) future CMOS development work planned.
Choosing face: The curse of self in profile image selection.
White, David; Sutherland, Clare A M; Burton, Amy L
2017-01-01
People draw automatic social inferences from photos of unfamiliar faces and these first impressions are associated with important real-world outcomes. Here we examine the effect of selecting online profile images on first impressions. We model the process of profile image selection by asking participants to indicate the likelihood that images of their own face ("self-selection") and of an unfamiliar face ("other-selection") would be used as profile images on key social networking sites. Across two large Internet-based studies (n = 610), in line with predictions, image selections accentuated favorable social impressions and these impressions were aligned to the social context of the networking sites. However, contrary to predictions based on people's general expertise in self-presentation, other-selected images conferred more favorable impressions than self-selected images. We conclude that people make suboptimal choices when selecting their own profile pictures, such that self-perception places important limits on facial first impressions formed by others. These results underscore the dynamic nature of person perception in real-world contexts.
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual Reality Systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real-time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by a camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, moving left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.
NASA Astrophysics Data System (ADS)
Kamangir, H.; Momeni, M.; Satari, M.
2017-09-01
This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. In order to achieve precise road extraction, the method implements three stages: classification of images based on the maximum likelihood algorithm to categorize images into classes of interest; a modification process on the classified images using connected component and morphological operators to extract pixels of desired objects by removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.
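To make the final RANSAC stage concrete, here is a minimal Python sketch of RANSAC line fitting on candidate road pixels; the iteration count, inlier tolerance, and synthetic points are assumptions and do not reproduce the paper's implementation.

```python
import numpy as np

def ransac_line(points, n_iters=500, inlier_tol=2.0, seed=0):
    """Fit a 2-D line to noisy points with basic RANSAC.

    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        u = d / norm
        # Perpendicular distance of every point to the candidate line through p along u.
        rel = points - p
        dist = np.abs(rel[:, 0] * u[1] - rel[:, 1] * u[0])
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares (PCA) fit on the inliers.
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    return centroid, vt[0], best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 100, size=200)
    road = np.column_stack([t, 0.5 * t + 3]) + rng.normal(scale=1.0, size=(200, 2))
    clutter = rng.uniform(0, 100, size=(80, 2))
    pts = np.vstack([road, clutter])
    c, u, mask = ransac_line(pts, inlier_tol=2.5)
    print("inliers:", int(mask.sum()), "direction:", np.round(u, 3))
```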
Technology for Elevated Temperature Tests of Structural Panels
NASA Technical Reports Server (NTRS)
Thornton, E. A.
1999-01-01
A technique for full-field measurement of surface temperature and in-plane strain using a single grid imaging technique was demonstrated on a sample subjected to thermally-induced strain. The technique is based on digital imaging of a sample marked by an alternating line array of La2O2S:Eu3+ thermographic phosphor and chromium, illuminated by a UV lamp. Digital images of this array in unstrained and strained states were processed using a modified spin filter. Normal strain distribution was determined by combining unstrained and strained grid images using a single grid digital moire technique. Temperature distribution was determined by ratioing images of phosphor intensity at two wavelengths. Combined strain and temperature measurements demonstrated on the thermally heated sample were Δε = ±250 με and ΔT = ±5 K, respectively, with a spatial resolution of 0.8 mm.
Visual detection of particulates in processed meat products by x ray
NASA Astrophysics Data System (ADS)
Schatzki, Thomas F.; Young, Richard; Haff, Ron P.; Eye, J.; Wright, G.
1995-01-01
A test has been run to study the efficacy of detecting particulate contaminants in processed meat samples by manual observation of line-scanned x-ray images. Six hundred processed product samples arriving over a 3 month period at a national USDA-FSIS laboratory were scanned at 230 cm²/sec with 0.5 × 0.5 mm resolution, using 50 kV, 13 mA excitation, with digital interfacing and image correction. Images were inspected off-line, using interactive image enhancement. Forty percent of the samples were spiked, blind to the analyst, in order to establish the manual recognition rate as a function of sample thickness [1 - 10 cm] and texture of the x-ray image [smooth/textured], as well as spike composition [wood/bone/glass], size [1 - 4 mm] and shape [splinter/round]. The results have been analyzed using maximum likelihood logistic regression. In meat packages less than 6 cm thick, 2 mm bone chips are easily recognized, 1 mm glass splinters with some difficulty, while wood is generally missed even at 4 mm. Operational feasibility in a time-constrained setting has been confirmed. One half percent of the samples arriving from the field contained bone slivers > 1 cm long, one half percent contained metallic material, while 4% contained particulates exceeding 3.2 mm in size. All of the latter appeared to be bone fragments.
Development of on line automatic separation device for apple and sleeve
NASA Astrophysics Data System (ADS)
Xin, Dengke; Ning, Duo; Wang, Kangle; Han, Yuhang
2018-04-01
An automatic separation device for fruit sleeves was designed based on an STM32F407 single-chip microcomputer as the control core. The design consists of hardware and software. The hardware includes a mechanical tooth separator and a three-degree-of-freedom manipulator, as well as an industrial control computer, an image data acquisition card, an end effector, and other structures. The software system, based on the Visual C++ development environment, achieves localization and recognition of the fruit sleeve using image processing and machine vision technology, and drives the manipulator to grasp the foam net sleeve, transfer it, and place it at the designated position. Tests show that the automatic fruit sleeve separation device has a quick response speed and a high separation success rate, can separate the apple from its plastic foam sleeve, and lays the foundation for further study and application on enterprise production lines.
Artese, Serena; Achilli, Vladimiro; Zinno, Raffaele
2018-01-01
Deck inclination and vertical displacements are among the most important technical parameters for evaluating the health status of a bridge and verifying its bearing capacity. Several methods, both conventional and innovative, are used for monitoring structural rotations and displacements; however, none of these simultaneously allow precision, automation, and static and dynamic monitoring without high-cost instrumentation. The proposed system uses a common laser pointer and image processing. The elastic line inclination is measured by analyzing the single frames of an HD video of the laser beam imprint projected on a flat target. For the image processing, a code was developed in Matlab® that provides the instantaneous rotation and displacement of a bridge loaded by a moving load. An important feature is the synchronization of the load positioning, obtained by a GNSS receiver or by a video. After the calibration procedures, a test was carried out during the movements of a heavy truck maneuvering on a bridge. Data acquisition synchronization allowed us to relate the position of the truck on the deck to inclination and displacements. The inclination of the elastic line at the support was obtained with a precision of 0.01 mrad. The results demonstrate the suitability of the method for dynamic load tests, and the control and monitoring of bridges. PMID:29370082
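The authors' processing is implemented in Matlab®; purely as an illustrative sketch (the function names, intensity threshold, and pixel-to-millimeter scale below are assumptions), the per-frame step can be thought of as thresholding the laser imprint and tracking its intensity-weighted centroid, from which displacement follows after calibration.

```python
import numpy as np

def spot_centroid(frame, thresh=200):
    """Intensity-weighted centroid (row, col) of the bright laser imprint in one frame."""
    mask = frame >= thresh
    if not mask.any():
        return None
    weights = frame * mask
    total = weights.sum()
    rows, cols = np.indices(frame.shape)
    return (float((rows * weights).sum() / total),
            float((cols * weights).sum() / total))

def displacement_series(frames, mm_per_pixel, thresh=200):
    """Vertical displacement (mm) of the spot on the target, relative to the first frame."""
    ref = spot_centroid(frames[0], thresh)
    out = []
    for f in frames[1:]:
        c = spot_centroid(f, thresh)
        out.append((c[0] - ref[0]) * mm_per_pixel if c and ref else np.nan)
    return np.array(out)

if __name__ == "__main__":
    # Two synthetic frames: a bright 3x3 spot that moves down by 2 pixels.
    f0 = np.zeros((100, 100)); f0[50:53, 40:43] = 255
    f1 = np.zeros((100, 100)); f1[52:55, 40:43] = 255
    print(displacement_series([f0, f1], mm_per_pixel=0.1))   # ~0.2 mm
```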
Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope.
Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T C
2015-10-01
Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.
Lithographic performance of recent DUV photoresists
NASA Astrophysics Data System (ADS)
Streefkerk, Bob; van Ingen Schenau, Koen; Buijk, Corine
1998-06-01
Commercially available photoresists from the major photoresist vendors are investigated using a PAS 5500/300 wafer stepper, a 31.1 mm diameter field size high throughput wafer stepper with variable NA capability up to 0.63. The critical dimension (CD) investigated is 0.25 micrometers and lower for dense and isolated lines and 0.25 micrometers for dense contact holes. The photoresist process performance is quantified by measuring exposure-defocus windows for a specific resolution using a CD SEM. Photoresists that are comparable with or better than APEX-E with RTC top coat, which is the current baseline process for lines and spaces imaging performance, are Clariant AZ-DX1300 and Shin Etsu SEPR-4103PB50. Most recent photoresists have much improved delay performance when compared to APEX without a top coat. The improvement when an organic BARC is applied depends on the actual photoresist characteristics. The optimal photoresist found for 0.25 micrometer contact holes is TOK DP015 C. This process operates at optimal conditions.
Simulation of void formation in interconnect lines
NASA Astrophysics Data System (ADS)
Sheikholeslami, Alireza; Heitzinger, Clemens; Puchner, Helmut; Badrieh, Fuad; Selberherr, Siegfried
2003-04-01
The predictive simulation of the formation of voids in interconnect lines is important for improving capacitance and timing in current memory cells. The cells considered are used in wireless applications such as cell phones, pagers, radios, handheld games, and GPS systems. In backend processes for memory cells, ILD (interlayer dielectric) materials and processes result in void formation during gap fill. This approach lowers the overall k-value of a given metal layer and is economically advantageous. The effect of the voids on the overall capacitive load is tremendous. In order to simulate the shape and positions of the voids and thus the overall capacitance, the topography simulator ELSA (Enhanced Level Set Applications) has been developed which consists of three modules, a level set module, a radiosity module, and a surface reaction module. The deposition process considered is deposition of silicon nitride. Test structures of interconnect lines of memory cells were fabricated and several SEM images thereof were used to validate the corresponding simulations.
NASA Astrophysics Data System (ADS)
Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.
2006-09-01
Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
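The paper's improved Hough transform is not reproduced here; as a hedged illustration of the underlying idea, the following Python sketch uses OpenCV's standard gradient Hough circle transform to detect roughly circular bubble/drop contours and report their diameters, with parameter values chosen arbitrarily for a synthetic test image.

```python
import numpy as np
import cv2

def measure_drops(gray, min_r=5, max_r=60):
    """Detect roughly circular bubbles/drops and return their diameters in pixels."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=2 * min_r,
                               param1=120, param2=30, minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return np.array([])
    return 2.0 * circles[0, :, 2]

if __name__ == "__main__":
    # Synthetic image with two dark disks on a bright background.
    img = np.full((240, 320), 220, dtype=np.uint8)
    cv2.circle(img, (80, 120), 25, 60, thickness=-1)
    cv2.circle(img, (220, 100), 40, 60, thickness=-1)
    print("diameters (px):", np.round(measure_drops(img), 1))
```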
Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables
NASA Astrophysics Data System (ADS)
Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.
2015-12-01
We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al. 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth, and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June 2010 to December 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Reference: Ulrich, R. K., Parker, D., Bertello, L., and Boyden, J. 2010, Solar Phys., 261, 11.
An IUE survey of activity in red giants and supergiants
NASA Technical Reports Server (NTRS)
Oznovich, I.; Gibson, D. M.
1987-01-01
Chromospheric and transition region line activity is examined in apparently single red giants and supergiants using the IUE archives. Low-resolution, large-aperture spectra (mostly short-wavelength) were used to search for variations of emission-line fluxes in time. A series of automatic processing procedures were implemented in order to uniformly calibrate a large number of spectra, fit continua to each of them, determine the fluxes of as many as 18 emission lines, and compare them at different epochs. A method is offered to compute the overall error in the integrated flux, a critical measure of activity, independent of the observing and processing details. This processing was applied to over 120 images of 26 stars taken over a period of 7 yr (1978-1984). Four stars showed UV emission-line flux variations. Alpha Aqr, Beta Peg, and Sigma Oph showed a single enhanced-emission event in all detectable emission lines. Gamma Aql exhibited an increase in the flux level of the O I (1641 A) line in mid-1981 with no comparable change in any other lines. These four stars lie in a region of the H-R diagram in which time-dependent circumstellar absorption lines appear.
Li, Ronny X.; Qaqish, William; Konofagou, Elisa E.
2015-01-01
The propagation behavior of the arterial pulse wave may provide valuable diagnostic information for cardiovascular pathology. Pulse Wave Imaging (PWI) is a noninvasive, ultrasound imaging-based technique capable of mapping multiple wall motion waveforms along a short arterial segment over a single cardiac cycle, allowing for the regional pulse wave velocity (PWV) and propagation uniformity to be evaluated. The purpose of this study was to improve the clinical utility of PWI using a conventional ultrasound system. The tradeoff between PWI spatial and temporal resolution was evaluated using an ex vivo canine aorta (n = 2) setup to assess the effects of varying image acquisition and signal processing parameters on the measurement of the PWV and the pulse wave propagation uniformity r2. PWI was also performed on the carotid arteries and abdominal aortas of 10 healthy volunteers (24.8 ± 3.3 y.o.) to determine the waveform tracking feature that would yield the most precise PWV measurements and highest r2 values in vivo. The ex vivo results indicated that the highest precision for measuring PWVs ~ 2.5 – 3.5 m/s was achieved using 24–48 scan lines within a 38 mm image plane width (i.e. 0.63 – 1.26 lines/mm). The in vivo results indicated that tracking the 50% upstroke of the waveform would consistently yield the most precise PWV measurements and minimize the error in the propagation uniformity measurement. Such findings may help establish the optimal image acquisition and signal processing parameters that may improve the reliability of PWI as a clinical measurement tool. PMID:26640603
NASA Astrophysics Data System (ADS)
Berthon, Beatrice; Dansette, Pierre-Marc; Tanter, Mickaël; Pernot, Mathieu; Provost, Jean
2017-07-01
Direct imaging of the electrical activation of the heart is crucial to better understand and diagnose diseases linked to arrhythmias. This work presents an ultrafast acoustoelectric imaging (UAI) system for direct and non-invasive ultrafast mapping of propagating current densities using the acoustoelectric effect. Acoustoelectric imaging is based on the acoustoelectric effect, the modulation of the medium’s electrical impedance by a propagating ultrasonic wave. UAI triggers this effect with plane wave emissions to image current densities. An ultrasound research platform was fitted with electrodes connected to high common-mode rejection ratio amplifiers and sampled by up to 128 independent channels. The sequences developed allow for both real-time display of acoustoelectric maps and long ultrafast acquisition with fast off-line processing. The system was evaluated by injecting controlled currents into a saline pool via copper wire electrodes. Sensitivity to low current and low acoustic pressure were measured independently. Contrast and spatial resolution were measured for varying numbers of plane waves and compared to line per line acoustoelectric imaging with focused beams at equivalent peak pressure. Temporal resolution was assessed by measuring time-varying current densities associated with sinusoidal currents. Complex intensity distributions were also imaged in 3D. Electrical current densities were detected for injected currents as low as 0.56 mA. UAI outperformed conventional focused acoustoelectric imaging in terms of contrast and spatial resolution when using 3 and 13 plane waves or more, respectively. Neighboring sinusoidal currents with opposed phases were accurately imaged and separated. Time-varying currents were mapped and their frequency accurately measured for imaging frame rates up to 500 Hz. Finally, a 3D image of a complex intensity distribution was obtained. The results demonstrated the high sensitivity of the UAI system proposed. The plane wave based approach provides a highly flexible trade-off between frame rate, resolution and contrast. In conclusion, the UAI system shows promise for non-invasive, direct and accurate real-time imaging of electrical activation in vivo.
Distributed data collection for a database of radiological image interpretations
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.
1997-01-01
The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.
Ground-to-air flow visualization using Solar Calcium-K line Background-Oriented Schlieren
NASA Astrophysics Data System (ADS)
Hill, Michael A.; Haering, Edward A.
2017-01-01
The Calcium-K Eclipse Background-Oriented Schlieren experiment was performed as a proof of concept test to evaluate the effectiveness of using the solar disk as a background to perform the Background-Oriented Schlieren (BOS) method of flow visualization. A ground-based imaging system was equipped with a Calcium-K line optical etalon filter to enable the use of the chromosphere of the sun as the irregular background to be used for BOS. A US Air Force T-38 aircraft performed three supersonic runs which eclipsed the sun as viewed from the imaging system. The images were successfully post-processed using optical flow methods to qualitatively reveal the density gradients in the flow around the aircraft.
CHROMOSPHERIC EVAPORATION IN AN X1.0 FLARE ON 2014 MARCH 29 OBSERVED WITH IRIS AND EIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Ding, M. D.; Qiu, J.
Chromospheric evaporation refers to dynamic mass motions in flare loops as a result of rapid energy deposition in the chromosphere. These motions have been observed as blueshifts in X-ray and extreme-ultraviolet (EUV) spectral lines corresponding to upward motions at a few tens to a few hundreds of km s⁻¹. Past spectroscopic observations have also revealed a dominant stationary component, in addition to the blueshifted component, in emission lines formed at high temperatures (∼10 MK). This is contradictory to evaporation models predicting predominant blueshifts in hot lines. The recently launched Interface Region Imaging Spectrograph (IRIS) provides high-resolution imaging and spectroscopic observations that focus on the chromosphere and transition region in the UV passband. Using the new IRIS observations, combined with coordinated observations from the EUV Imaging Spectrometer, we study the chromospheric evaporation process from the upper chromosphere to the corona during an X1.0 flare on 2014 March 29. We find evident evaporation signatures, characterized by Doppler shifts and line broadening, at two flare ribbons that are separating from each other, suggesting that chromospheric evaporation takes place in successively formed flaring loops throughout the flare. More importantly, we detect dominant blueshifts in the high-temperature Fe XXI line (∼10 MK), in agreement with theoretical predictions. We also find that, in this flare, gentle evaporation occurs at some locations in the rise phase of the flare, while explosive evaporation is detected at some other locations near the peak of the flare. There is a conversion from gentle to explosive evaporation as the flare evolves.
Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang
2012-09-01
Pulsed TIG welding is widely used in industry due to its superior properties, and the measurement of arc temperature is important for the analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, and the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature was calculated. An arc image at the 794.8 nm spectral line was captured by a high-speed camera, and both the Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding.
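Abel inversion itself is a standard step; the Python sketch below shows an onion-peeling discretization of the inverse Abel transform, recovering a radial emission profile from a lateral (line-of-sight) intensity profile. The shell discretization, test profile, and shell width are assumptions, and the subsequent emission-coefficient-to-temperature conversion (Fowler-Milne) is not reproduced.

```python
import numpy as np

def chord_matrix(n, dr):
    """Path lengths of annular shells (width dr) along lines of sight at offsets y_i = i*dr."""
    r = np.arange(n + 1) * dr
    y = np.arange(n) * dr
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            outer = np.sqrt(max(r[j + 1] ** 2 - y[i] ** 2, 0.0))
            inner = np.sqrt(max(r[j] ** 2 - y[i] ** 2, 0.0))
            A[i, j] = 2.0 * (outer - inner)
    return A

def onion_peel_abel(intensity, dr=1.0):
    """Recover shell emission coefficients from a half lateral intensity profile
    (onion-peeling discretization of the inverse Abel transform)."""
    A = chord_matrix(len(intensity), dr)
    return np.linalg.solve(A, np.asarray(intensity, dtype=float))

if __name__ == "__main__":
    # Forward-project a known Gaussian emission profile, then invert it.
    n, dr = 50, 0.2
    r_mid = (np.arange(n) + 0.5) * dr
    eps_true = np.exp(-(r_mid / 3.0) ** 2)
    measured = chord_matrix(n, dr) @ eps_true
    print("max error:", float(np.abs(onion_peel_abel(measured, dr) - eps_true).max()))
```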
Investigation of autofocus algorithms for brightfield microscopy of unstained cells
NASA Astrophysics Data System (ADS)
Wu, Shu Yu; Dugan, Nazim; Hennelly, Bryan M.
2014-05-01
In the past decade there has been significant interest in image processing for brightfield cell microscopy. Much of the previous research on image processing for microscopy has focused on fluorescence microscopy, including cell counting, cell tracking, cell segmentation and autofocusing. Fluorescence microscopy provides functional image information that involves the use of labels in the form of chemical stains or dyes. For some applications, where the biochemical integrity of the cell is required to remain unchanged so that sensitive chemical testing can later be applied, it is necessary to avoid staining. For this reason the challenge of processing images of unstained cells has become a topic of increasing attention. These cells are often effectively transparent and appear to have a homogenous intensity profile when they are in focus. Bright field microscopy is the most universally available and most widely used form of optical microscopy and for this reason we are interested in investigating image processing of unstained cells recorded using a standard bright field microscope. In this paper we investigate the application of a range of different autofocus metrics applied to unstained bladder cancer cell lines using a standard inverted bright field microscope with microscope objectives that have high magnification and numerical aperture. We present a number of conclusions on the optimum metrics and the manner in which they should be applied for this application.
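For orientation, two focus metrics commonly used in such comparisons (not necessarily the exact set evaluated in the paper) are sketched below in Python with OpenCV; the synthetic defocus stack is an assumption.

```python
import numpy as np
import cv2

def variance_of_laplacian(gray):
    """Classic sharpness score: variance of the Laplacian response (higher = sharper)."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def tenengrad(gray):
    """Tenengrad score: mean squared Sobel gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx ** 2 + gy ** 2))

def best_focus(frames, metric=variance_of_laplacian):
    """Return the index of the frame (z-position) with the highest focus score."""
    return int(np.argmax([metric(f) for f in frames]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
    stack = [cv2.GaussianBlur(sharp, (0, 0), 5.0),
             cv2.GaussianBlur(sharp, (0, 0), 2.0),
             sharp]                               # last frame is the in-focus one
    print("best-focused frame index:", best_focus(stack))
```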
NASA Astrophysics Data System (ADS)
Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin
2018-02-01
This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.
A technique for transferring a patient's smile line to a cone beam computed tomography (CBCT) image.
Bidra, Avinash S
2014-08-01
Fixed implant-supported prosthodontic treatment for patients requiring a gingival prosthesis often demands that bone and implant levels be apical to the patient's maximum smile line. This is to avoid the display of the prosthesis-tissue junction (the junction between the gingival prosthesis and natural soft tissues) and prevent esthetic failures. Recording a patient's lip position during maximum smile is invaluable for the treatment planning process. This article presents a simple technique for clinically recording and transferring the patient's maximum smile line to cone beam computed tomography (CBCT) images for analysis. The technique can help clinicians accurately determine the need for and amount of bone reduction required with respect to the maximum smile line and place implants in optimal positions.
Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A
2016-07-01
Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
A submersible digital in-line holographic microscope
NASA Astrophysics Data System (ADS)
Jericho, Manfred; Jericho, Stefan; Kreuzer, Hans Juergen; Garcia, Jeorge; Klages, Peter
Few instruments exist that can image microscopic marine organisms in their natural environment so that their locomotion mechanisms, feeding habits and interactions with surfaces, such as bio-fouling, can be investigated in situ. In conventional optical microscopy under conditions of high magnification, only objects confined to the narrow focal plane can be imaged, and processes that involve translation of the object perpendicular to this plane are not accessible. To overcome this severe limitation of optical microscopy, we developed digital in-line holographic microscopy (DIHM) as a high-resolution tool for the tracking of organisms in three dimensions. We describe here the design and performance of a very simple submersible digital in-line holographic microscope (SDIHM) that can image organisms and their motion with micron resolution and that can be deployed from small vessels. Holograms and reconstructed images of several microscopic marine organisms were successfully obtained down to a depth of 20 m. The maximum depth was limited by the length of the data transmission cables available at the time, and operating depths in excess of 100 m are easily possible for the instrument.
Robb, Paul D; Craven, Alan J
2008-12-01
An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated measurement of atomic column intensity ratios in high-resolution HAADF images. It was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.
A fast double shutter for CCD-based metrology
NASA Astrophysics Data System (ADS)
Geisler, R.
2017-02-01
Image-based metrology such as Particle Image Velocimetry (PIV) depends on the comparison of two images of an object taken in fast succession. Cameras for these applications provide the so-called 'double shutter' mode: One frame is captured with a short exposure time and in direct succession a second frame with a long exposure time can be recorded. The difference in the exposure times is typically no problem since illumination is provided by a pulsed light source such as a laser and the measurements are performed in a darkened environment to prevent ambient light from accumulating in the long second exposure time. However, measurements of self-luminous processes (e.g. plasma, combustion ...) as well as experiments in ambient light are difficult to perform and require special equipment (external shutters, high-speed image sensors, multi-sensor systems ...). Unfortunately, all these methods incorporate different drawbacks such as reduced resolution, degraded image quality, decreased light sensitivity or increased susceptibility to decalibration. In the solution presented here, off-the-shelf CCD sensors are used with a special timing to combine neighbouring pixels in a binning-like way. As a result, two frames of short exposure time can be captured in fast succession. They are stored in the on-chip vertical register in a line-interleaved pattern, read out in the common way and separated again by software. The two resultant frames are completely congruent; they exhibit no insensitive lines or line shifts and thus enable sub-pixel accurate measurements. A third frame can be captured at the full resolution, analogous to the double shutter technique. Image-based measurement techniques such as PIV can benefit from this mode when applied in bright environments. The third frame is useful e.g. for acceleration measurements or for particle tracking applications.
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update the database. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of cameras. However, precise orientation parameters of light amateur cameras are not always available, because precision GPS and IMU units are costly and heavy. To automate data updating, the correspondence between object vector data and the image may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image feature lines, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling. The line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy may be reached, which is equivalent to 0.12 m in the object space.
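As a minimal sketch of the back-projection step (the line-based adjustment itself is not reproduced), the following Python code projects 3-D object points into image coordinates with the standard collinearity equations; the omega-phi-kappa rotation convention, focal length, and example coordinates are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Object-to-image rotation built from omega-phi-kappa angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    r_phi = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    r_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return r_kappa @ r_phi @ r_omega

def backproject(points_xyz, camera_xyz, omega, phi, kappa, focal_mm):
    """Collinearity-equation projection of 3-D object points to image coordinates
    (millimetres, relative to the principal point)."""
    m = rotation_matrix(omega, phi, kappa)
    d = (np.asarray(points_xyz, dtype=float) - np.asarray(camera_xyz, dtype=float)) @ m.T
    x = -focal_mm * d[:, 0] / d[:, 2]
    y = -focal_mm * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])

if __name__ == "__main__":
    # Hypothetical example: two endpoints of a building roof edge, nadir-looking camera.
    corners = [[10.0, 20.0, 5.0], [18.0, 20.0, 5.0]]
    cam = [12.0, 22.0, 105.0]                     # camera 100 m above the roof
    print(backproject(corners, cam, 0.0, 0.0, 0.0, focal_mm=24.0))
```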
Using photoshop filters to create anatomic line-art medical images.
Kirsch, Jacobo; Geller, Brian S
2006-08-01
There are multiple ways to obtain anatomic drawings suitable for publication or presentations. This article demonstrates how to use Photoshop to alter digital radiologic images to create line-art illustrations in a quick and easy way. We present two simple-to-use methods; however, not every image can adequately be transformed, and personal preferences and specific changes need to be applied to each image to obtain the desired result. Medical illustrators have always played a major role in the radiology and medical education process. Whether used to teach a complex surgical or radiologic procedure, to define typical or atypical patterns of the spread of disease, or to illustrate normal or aberrant anatomy, medical illustration significantly affects learning. However, if you are not an accomplished illustrator, the alternatives can be expensive (contacting a professional medical illustrator or buying an already existing stock of digital images) or simply not necessarily applicable to what you are trying to communicate. The purpose of this article is to demonstrate that, by using Photoshop (Adobe Systems, San Jose, CA) to alter digital radiologic images, we can create line-art illustrations in a quick, inexpensive, and easy way in preparation for electronic presentations and publication.
MIXI: Mobile Intelligent X-Ray Inspection System
NASA Astrophysics Data System (ADS)
Arodzero, Anatoli; Boucher, Salime; Kutsaev, Sergey V.; Ziskin, Vitaliy
2017-07-01
A novel, low-dose Mobile Intelligent X-ray Inspection (MIXI) concept is being developed at RadiaBeam Technologies. The MIXI concept relies on a linac-based, adaptive, ramped energy source of short X-ray packets of pulses, a new type of fast X-ray detector, rapid processing of detector signals for intelligent control of the linac, and advanced radiography image processing. The key parameters for this system include: better than 3 mm line pair resolution; penetration greater than 320 mm of steel equivalent; scan speed with 100% image sampling rate of up to 15 km/h; and material discrimination over a range of thicknesses up to 200 mm of steel equivalent. Its minimal radiation dose, size and weight allow MIXI to be placed on a lightweight truck chassis.
NASA Astrophysics Data System (ADS)
Migliozzi, D.; Nguyen, H. T.; Gijs, M. A. M.
2018-02-01
Immunohistochemistry (IHC) is one of the main techniques currently used in the clinic for biomarker characterization. It consists of colorimetric labeling with specific antibodies followed by microscopy analysis. The results are then used for diagnosis and therapeutic targeting. Well-known drawbacks of such protocols are their limited accuracy and precision, which prevent clinicians from obtaining quantitative and robust IHC results. In our work, we combined rapid microfluidic immunofluorescent staining with efficient image-based cell segmentation and signal quantification to increase the robustness of both the experimental and analytical protocols. The experimental protocol is very simple and based on fast fluidic exchange in a microfluidic chamber created on top of the formalin-fixed, paraffin-embedded (FFPE) slide by clamping a silicon chip onto it with a polydimethylsiloxane (PDMS) sealing ring. The image-processing protocol is based on enhancement and subsequent thresholding of the local contrast of the obtained fluorescence image. As a case study, given that the human epidermal growth factor receptor 2 (HER2) protein is often used as a biomarker for breast cancer, we applied our method to HER2+ and HER2- cell lines. We report very fast (5 minutes) immunofluorescence staining of both HER2 and cytokeratin (a marker used to define the tumor region) on FFPE slides. The image-processing program can segment cells correctly and give a cell-based quantitative immunofluorescent signal. With this method, we found a reproducible, well-defined separation of the HER2-to-cytokeratin ratio between positive and negative control samples.
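The described image-processing protocol (local contrast enhancement followed by thresholding and per-cell quantification) can be sketched with common scikit-image building blocks. The specific choices below, CLAHE for local contrast and Otsu for the threshold, are assumptions for illustration rather than the authors' exact pipeline:

```python
import numpy as np
from skimage import exposure, filters, measure

def segment_cells(fluo_image):
    """Enhance local contrast, threshold, and label cells in a
    fluorescence image; returns a label image and per-cell mean signal."""
    img = fluo_image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)     # scale to [0, 1]
    enhanced = exposure.equalize_adapthist(img)          # local contrast (CLAHE)
    thresh = filters.threshold_otsu(enhanced)            # global cut on enhanced image
    mask = enhanced > thresh
    labels = measure.label(mask)                         # connected components = cells
    props = measure.regionprops(labels, intensity_image=img)
    signals = {p.label: p.mean_intensity for p in props}
    return labels, signals
```

A per-cell HER2-to-cytokeratin ratio would then be obtained by running the same segmentation on the cytokeratin channel and dividing the two mean-intensity dictionaries cell by cell.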
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
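The speed benefit of splitting the conversion into coarse and fine phases can be seen in a simple behavioural model: with a 4-bit coarse and 8-bit fine split, the counters sweep on the order of 2^4 + 2^8 steps instead of 2^12. The bit allocation and the ideal (uncalibrated) reference levels below are assumptions for illustration and do not reproduce the paper's calibration scheme:

```python
import numpy as np

def two_step_ss_adc(vin, vref=1.0, coarse_bits=4, fine_bits=8):
    """Behavioural model of a two-step single-slope conversion: a coarse
    comparison against equally spaced reference levels picks a segment,
    then a fine ramp resolves the residue within that segment."""
    n_coarse = 2 ** coarse_bits
    n_fine = 2 ** fine_bits
    seg = vref / n_coarse                        # width of one coarse segment
    coarse = min(int(vin / seg), n_coarse - 1)   # coarse counter value
    residue = vin - coarse * seg
    fine = min(int(residue / (seg / n_fine)), n_fine - 1)  # fine counter value
    return (coarse << fine_bits) | fine          # 12-bit output code

print(two_step_ss_adc(0.6137))  # -> 2513, close to round(0.6137 * 4096)
```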
An imaging vector magnetograph for the next solar maximum
NASA Technical Reports Server (NTRS)
Mickey, D. L.; Labonte, B. J.; Canfield, R. C.
1989-01-01
Researchers describe the conceptual design of a new imaging vector magnetograph currently being constructed at the University of Hawaii. The instrument combines a modest solar telescope with a rotating quarter-wave plate, an acousto-optical tunable prefilter as a blocker for a servo-controlled Fabry-Perot etalon, CCD cameras, and on-line digital image processing. Its high spatial resolution (1/2 arcsec pixel size) over a large field of view (5 by 5 arcmin) will be sufficient to significantly measure, for the first time, the magnetic energy dissipated in major solar flares. Its millisecond tunability and wide spectral range (5000 to 7000 A) enable nearly simultaneous vector magnetic field measurements in the gas-pressure-dominated photosphere and magnetically-dominated chromosphere, as well as effective co-alignment with Solar-A's X ray images. Researchers expect to have the instrument in operation at Mees Solar Observatory (Haleakala) in early 1991. They have chosen to use tunable filters as wavelength-selection elements in order to emphasize the spatial relationships between magnetic field elements, and to permit construction of a compact, efficient instrument. This means that spectral information must be obtained from sequences of images, which can cause line profile distortions due to effects of atmospheric seeing.
NASA Astrophysics Data System (ADS)
Zakharov, S. M.; Manykin, Eduard A.
1995-02-01
The principles of optical processing based on the dynamic spatial-temporal properties of two-pulse photon echo signals are considered. The properties of a resonant medium as an on-line filter of temporal and spatial frequencies are discussed. These properties are due to the sensitivity of such a medium to the Fourier spectrum of the second exciting pulse. Degeneracy of quantum resonant systems, demonstrated by the dependence of the coherent response on the square of the amplitude of the second pulse, can be used for 'simultaneous' correlation processing of optical 'signals'. Various methods for the processing of Fourier optical images are discussed.
Franklin, Robert G; Adams, Reginald B; Steiner, Troy G; Zebrowitz, Leslie A
2018-05-14
Through 3 studies, we investigated whether the angularity and roundness present in faces contribute to the perception of angry and joyful expressions, respectively. First, in Study 1 we found that angry expressions naturally contain more inward-pointing lines, whereas joyful expressions contain more outward-pointing lines. Then, using image-processing techniques in Studies 2 and 3, we filtered images to contain only inward-pointing or outward-pointing lines as a way to approximate angularity and roundness. We found that filtering images to be more angular increased how threatening and angry a neutral face was rated, increased how intense angry expressions were rated, and enhanced the recognition of anger. Conversely, filtering images to be rounder increased how warm and joyful a neutral face was rated, increased the rated intensity of joyful expressions, and enhanced recognition of joy. Together these findings show that angularity and roundness play a direct role in the recognition of angry and joyful expressions. Given evidence that angularity and roundness may play a biological role in indicating threat and safety in the environment, this suggests that angularity and roundness represent primitive facial cues used to signal threat-anger and warmth-joy pairings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Fensch, J.; Duc, P.-A.; Weilbacher, P. M.; Boquien, M.; Zackrisson, E.
2016-01-01
Context. We present Integral Field Unit (IFU) observations with MUSE and deep imaging with FORS of a dwarf galaxy recently formed within the giant collisional HI ring surrounding NGC 5291. This Tidal Dwarf Galaxy (TDG)-like object has the characteristics of typical z = 1-2 gas-rich spiral galaxies: a high gas fraction, a rather turbulent clumpy interstellar medium, the absence of an old stellar population, and a moderate metallicity and star formation efficiency. Aims: The MUSE spectra allow us to determine the physical conditions within the various complex substructures revealed by the deep optical images and to scrutinize the ionization processes at play in this specific medium at unprecedented spatial resolution. Methods: Starburst age, extinction, and metallicity maps of the TDG and the surrounding regions were determined using the strong emission lines Hβ, [OIII], [OI], [NII], Hα, and [SII] combined with empirical diagnostics. Different ionization mechanisms were distinguished using BPT-like diagrams and shock plus photoionization models. Results: In general, the physical conditions within the star-forming regions are homogeneous, in particular with a uniform half-solar oxygen abundance. On small scales, the derived extinction map shows narrow dust lanes. Regions with an atypically strong [OI] emission line immediately surround the TDG. The [OI]/Hα ratio cannot be easily accounted for by photoionization by young stars or by shock models. At greater distances from the main star-forming clumps, a faint diffuse blue continuum emission is observed, both in the deep FORS images and in the MUSE data. It does not have a clear counterpart in the UV regime probed by GALEX. A stacked spectrum towards this region does not exhibit any emission line, excluding faint levels of star formation, or stellar absorption lines that might have revealed the presence of old stars. Several hypotheses are discussed for the origin of these intriguing features. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile: ESO MUSE programme 60.A-9320(A) and FORS programme 382.B-0213(A).
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
Watermarking and copyright labeling of printed images
NASA Astrophysics Data System (ADS)
Hel-Or, Hagit Z.
2001-07-01
Digital watermarking is a labeling technique for digital images which embeds a code into the digital data so the data are marked. Watermarking techniques previously developed deal with on-line digital data. These techniques have been developed to withstand digital attacks such as image processing, image compression and geometric transformations. However, one must also consider the readily available attack of printing and scanning. The available watermarking techniques are not reliable under printing and scanning. In fact, one must consider the availability of watermarks for printed images as well as for digital images. An important issue is to intercept and prevent forgery in printed material such as currency notes, bank checks, etc., and to track and validate sensitive and secret printed material. Watermarking in such printed material can be used not only for verification of ownership but as an indicator of the date and type of transaction or the date and source of the printed data. In this work we propose a method of embedding watermarks in printed images by inherently taking advantage of the printing process. The method is visually unobtrusive in the printed image, and the watermark is easily extracted and is robust under reconstruction errors. The decoding algorithm is automatic given the watermarked image.
2015-09-10
This image of Pluto from NASA's New Horizons spacecraft, processed in two different ways, shows how Pluto's bright, high-altitude atmospheric haze produces a twilight that softly illuminates the surface before sunrise and after sunset, allowing the sensitive cameras on New Horizons to see details in nighttime regions that would otherwise be invisible. The right-hand version of the image has been greatly brightened to bring out faint details of rugged haze-lit topography beyond Pluto's terminator, which is the line separating day and night. The image was taken as New Horizons flew past Pluto on July 14, 2015, from a distance of 50,000 miles (80,000 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA19931
Dubinsky, Theodore J; Shah, Hardik; Sonneborn, Rachelle; Hippe, Daniel S
2017-11-01
We prospectively identified B-lines in patients undergoing ultrasonographic (US) examinations following liver transplantation who also had chest radiography (CXR) or chest CT imaging, or both, on the same day, to determine whether the presence of B-lines from the thorax on US images correlates with the presence of lung abnormalities on CXR. Following institutional review board (IRB) approval, patients who received liver transplants and underwent routine US examinations and chest radiography or CT imaging, or both, on the same day between January 1, 2015 and July 1, 2016 were prospectively identified. Two readers who were blinded to the chest films, CT images, and reports independently reviewed the US examinations for the presence or absence of B-lines; interreader agreement was assessed, and the presence or absence of diffuse parenchymal lung disease (DPLD) was evaluated on chest films and CT images as well as from clinical evaluation. Receiver operating characteristic (ROC) curves were constructed. There was good agreement between the two readers on the presence or absence of B-lines (kappa = 0.94). The area under the ROC curve for discriminating between positive DPLD and negative DPLD for both readers was 0.79 (95% CI, 0.71-0.87). There is an association between the presence of extensive B-lines, to the point of confluence and "dirty shadowing", on US examinations of the chest and findings of DPLD on chest radiographs and CT scans. Conversely, isolated B-lines do not always correlate with abnormalities on chest films and in fact sometimes appear to be a normal variant. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
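The core processing step, locating the dip (negative peak) in the line-scan intensity profile, can be sketched in a few lines. This is an illustrative software model of the idea, not the camera's analog/digital circuit; the smoothing width and the synthetic test profile are assumptions:

```python
import numpy as np

def find_shock_pixel(profile, smooth=5):
    """Locate the dip (negative peak) in a 1-D line-scan intensity
    profile; returns the pixel index of the deepest minimum after a
    simple moving-average smoothing to suppress shot noise."""
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(profile, kernel, mode="same")
    return int(np.argmin(smoothed))

# Synthetic laser-sheet profile with a dark shadowgraph spot near pixel 300
x = np.arange(1024)
profile = 200.0 - 80.0 * np.exp(-((x - 300) ** 2) / (2 * 8.0 ** 2))
profile += np.random.normal(0, 2, x.size)
print(find_shock_pixel(profile))  # approximately 300
```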
Practical Considerations for Optic Nerve Estimation in Telemedicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Aykac, Deniz; Chaum, Edward
The projected increase in diabetes in the United States and worldwide has created a need for broad-based, inexpensive screening for diabetic retinopathy (DR), an eye disease which can lead to vision impairment. A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening. In this work we report on the effect of quality estimation on an optic nerve (ON) detection method with a confidence metric. We report on an improvement of the fusion technique using a data set from an ophthalmologist's practice, then show the results of the method as a function of image quality on a set of images from an on-line telemedicine network collected in Spring 2009 and another broad-based screening program. We show that the fusion method, combined with quality estimation processing, can improve detection performance and also provide a method for utilizing a physician-in-the-loop for images that may exceed the capabilities of automated processing.
PCA based clustering for brain tumor segmentation of T1w MRI images.
Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay
2017-03-01
Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two clustering methods. The five most common PCA algorithms, namely the conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all others. Furthermore, the EM-PCA and the PPCA assisted the K-Means algorithm to achieve the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
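A minimal sketch of the general pipeline, PCA dimension reduction followed by K-Means clustering, is shown below using scikit-learn. The patch size, component count, and cluster count are illustrative assumptions and do not reproduce the paper's specific PPCA/EM-PCA/GHA/APEX variants:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pca_kmeans_segment(image, n_components=10, n_clusters=4, patch=8):
    """Reduce image patches with PCA, then cluster the reduced vectors
    with K-Means; returns a per-patch cluster map."""
    h, w = image.shape
    h, w = h - h % patch, w - w % patch            # crop to a multiple of the patch size
    patches = (image[:h, :w]
               .reshape(h // patch, patch, w // patch, patch)
               .swapaxes(1, 2)
               .reshape(-1, patch * patch))        # one row per patch
    reduced = PCA(n_components=n_components).fit_transform(patches)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    return labels.reshape(h // patch, w // patch)
```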
Platform for Postprocessing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don
2008-01-01
Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple- signal and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW, and its associated Advanced Signal Processing and Vision Toolkits. The software is useable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial grade interface in which two main windows, Waveform Window and Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency- domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, or series of images, or a simple set of X-Y paired data set in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from scan, or waves after deconvolution if system wave response is provided. Two types of deconvolution, time-based subtraction or inverse-filter, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially-available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, Bscan information, region-of-interest analysis, line profiles, and precision feature measurements).
USDA-ARS?s Scientific Manuscript database
This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...
LWIR hyperspectral micro-imager for detection of trace explosive particles
NASA Astrophysics Data System (ADS)
Bingham, Adam L.; Lucey, Paul G.; Akagi, Jason T.; Hinrichs, John L.; Knobbe, Edward T.
2014-05-01
Chemical micro-imaging is a powerful tool for the detection and identification of analytes of interest against a cluttered background (e.g. trace explosive particles left behind in a fingerprint). While a variety of groups have demonstrated the efficacy of Raman instruments for these applications, point-by-point or line-by-line acquisition of a targeted field of view (FOV) is a time-consuming process if it is to be accomplished with useful spatial resolutions. Spectrum Photonics has developed and demonstrated a prototype system utilizing long wave infrared hyperspectral microscopy, which enables the simultaneous collection of LWIR reflectance spectra from 8-14 μm in a 30 x 7 mm FOV with 30 μm spatial resolution in 30 s. An overview of the uncooled Sagnac-based LWIR HSM system will be given, emphasizing the benefits of this approach. Laboratory hyperspectral data collected from custom mixtures and fingerprint residues are shown, focusing on the ability of the LWIR chemical micro-imager to detect chemicals of interest out of a cluttered background.
A Design Verification of the Parallel Pipelined Image Processings
NASA Astrophysics Data System (ADS)
Wasaki, Katsumi; Harai, Toshiaki
2008-11-01
This paper presents a case study of the design and verification of a parallel and pipelined image processing unit based on an extended Petri net, called a Logical Colored Petri Net (LCPN). This net is suitable for Flexible Manufacturing System (FMS) modeling and for discussion of structural properties. LCPN is another family of colored place/transition nets (CPNs) with the addition of the following features: integer value assignment of marks, representation of firing conditions as formulae based on mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.
Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.
1995-01-01
Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DNs measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used with a 100-line moving average to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
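Estimating the channel noise levels and the noise covariance matrix from such a dark-current cube is a straightforward computation. A minimal sketch, assuming the cube has already been read into a NumPy array ordered (lines, samples, channels), e.g. (100, 614, 224) for the 'postcal' product described above:

```python
import numpy as np

def dark_current_noise_stats(dark_cube):
    """Estimate per-channel noise and the channel covariance matrix from
    a closed-shutter dark-current cube shaped (lines, samples, channels)."""
    spectra = dark_cube.reshape(-1, dark_cube.shape[-1]).astype(float)
    spectra -= spectra.mean(axis=0)              # remove the DC offset per channel
    channel_sigma = spectra.std(axis=0, ddof=1)  # noise level in each channel
    noise_cov = np.cov(spectra, rowvar=False)    # (channels, channels) covariance
    return channel_sigma, noise_cov
```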
In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging
Ibrahim, Mohd Firdaus; Ahmad Sa’ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon
2016-01-01
The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system was able to identify the irregularity of mango shape and estimate its mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and perimeter were extracted from each image. Fourier descriptors and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mangoes. The volume from the water displacement method was compared with the volume estimated by image processing using a paired t-test and the Bland-Altman method. The results of the two measurements were not significantly different (P > 0.05). The average correct classification for shape was 98% for a training set composed of 180 mangoes. The data were validated with another testing set consisting of 140 mangoes, which had a success rate of 92%. The same set was used for evaluating the performance of mass estimation. The average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass. PMID:27801799
Line edge roughness (LER) mitigation studies specific to interference-like lithography
NASA Astrophysics Data System (ADS)
Baylav, Burak; Estroff, Andrew; Xie, Peng; Smith, Bruce W.
2013-04-01
Line edge roughness (LER) is a common problem for most lithography approaches and is seen as the main resolution limiter for advanced technology nodes. There are several contributors to LER, such as chemical/optical shot noise, the random nature of acid diffusion, the development process, and the concentration of acid generator/base quencher. Since interference-like lithography (IL) is used to define one-directional gridded patterns, some LER mitigation approaches specific to IL-like imaging can be explored. Two methods investigated in this work for this goal are (i) translational image averaging along the line direction and (ii) pupil plane filtering. Experiments regarding the former were performed on both interferometric and projection lithography systems. Projection lithography experiments showed a small reduction in low/mid-frequency LER for image-averaged cases at a pitch of 150 nm (193 nm illumination, 0.93 NA), with less change for smaller pitches. Aerial image smearing did not significantly increase LER since it was directional. Simulation showed less than 1% reduction in NILS (compared to a static, smooth-mask equivalent) with ideal alignment. In addition, a description of the effect of pupil plane filtering on the transfer of mask roughness is given. When astigmatism-like aberrations were introduced in the pupil, the transfer of mask roughness decreased at best focus. It is important to exclude the main diffraction orders from the filtering to prevent contrast and NILS loss. These ideas can be valuable as projection lithography approaches conditions similar to IL (e.g. strong RET methods).
Wide-Field Imaging Interferometry Spatial-Spectral Image Synthesis Algorithms
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Leisawitz, David T.; Rinehart, Stephen A.; Memarsadeghi, Nargess; Sinukoff, Evan J.
2012-01-01
Developed is an algorithmic approach for wide field of view interferometric spatial-spectral image synthesis. The data collected from the interferometer consist of a set of double-Fourier image data cubes, one cube per baseline. These cubes are each three-dimensional, consisting of arrays of two-dimensional detector counts versus delay line position. For each baseline a moving delay line allows collection of a large set of interferograms over the 2D wide field detector grid; one sampled interferogram per detector pixel per baseline. This aggregate set of interferograms is algorithmically processed to construct a single spatial-spectral cube with angular resolution approaching the ratio of the wavelength to the longest baseline. The wide field imaging is accomplished by ensuring that the range of motion of the delay line encompasses the zero optical path difference fringe for each detector pixel in the desired field-of-view. Each baseline cube is incoherent relative to all other baseline cubes and thus has only phase information relative to itself. This lost phase information is recovered by having point, or otherwise known, sources within the field-of-view. The reference source phase is known and utilized as a constraint to recover the coherent phase relation between the baseline cubes and is key to the image synthesis. Described will be the mathematical formalism with phase referencing, and results will be shown using data collected from the NASA/GSFC Wide-Field Imaging Interferometry Testbed (WIIT).
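One ingredient of this processing, recovering a spectrum from the interferogram recorded at a single detector pixel as the delay line scans, is essentially a Fourier transform along the delay axis. The sketch below shows only that step, assuming uniform delay sampling; the cross-baseline spatial synthesis and phase referencing described above are not reproduced here:

```python
import numpy as np

def spectrum_from_interferogram(interferogram, delay_step_m):
    """Recover a one-sided spectrum from a single pixel's interferogram
    sampled at uniform delay-line steps (the spectral half of the
    double-Fourier processing). Returns wavenumbers (1/m) and power."""
    ifm = np.asarray(interferogram, dtype=float)
    ifm = ifm - ifm.mean()                           # drop the DC term
    spectrum = np.fft.rfft(ifm)                      # Fourier transform along delay
    wavenumber = np.fft.rfftfreq(ifm.size, d=delay_step_m)
    return wavenumber, np.abs(spectrum)
```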
NASA Astrophysics Data System (ADS)
Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin
2017-03-01
Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques by providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed to be adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
Revisiting Abell 2744: a powerful synergy of GLASS spectroscopy and HFF photometry
NASA Astrophysics Data System (ADS)
Wang, Xin; Wang
We present new emission line identifications and improve the lensing reconstruction of the mass distribution of galaxy cluster Abell 2744 using the Grism Lens-Amplified Survey from Space (GLASS) spectroscopy and the Hubble Frontier Fields (HFF) imaging. We performed blind and targeted searches for faint line emitters on all objects, including the arc sample, within the field of view (FoV) of GLASS prime pointings. We report 55 high quality spectroscopic redshifts, 5 of which are for arc images. We also present an extensive analysis based on the HFF photometry, measuring the colors and photometric redshifts of all objects within the FoV, and comparing the spectroscopic and photometric redshift estimates. In order to improve the lens model of Abell 2744, we develop a rigorous algorithm to screen arc images, based on their colors and morphology, and selecting the most reliable ones to use. As a result, 25 systems (corresponding to 72 images) pass the screening process and are used to reconstruct the gravitational potential of the cluster pixellated on an adaptive mesh. The resulting total mass distribution is compared with a stellar mass map obtained from the Spitzer Frontier Fields data in order to study the relative distribution of stars and dark matter in the cluster.
Harvesting geographic features from heterogeneous raster maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi
2010-11-01
Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-the-art commercial products, with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
Image processing for stripper-harvested cotton trash content measurement: a progress report
USDA-ARS?s Scientific Manuscript database
This study was initiated to provide the basis for obtaining on-line information as to the levels of the various types of gin trash. The objective is to provide the ginner with knowledge of the quantity of the various trash components in the raw uncleaned seed cotton. This information is currently no...
Monitoring Coating Thickness During Plasma Spraying
NASA Technical Reports Server (NTRS)
Miller, Robert A.
1990-01-01
High-resolution video measures thickness accurately without interfering with process. Camera views cylindrical part through filter during plasma spraying. Lamp backlights part, creating high-contrast silhouette on video monitor. Width analyzer counts number of lines in image of part after each pass of spray gun. Layer-by-layer measurements ensure adequate coat built up without danger of exceeding required thickness.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian
2017-01-01
In this paper, we first propose several novel concepts for object-based image analysis, which include line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HR images with good performance.
Marra, Molly H; Tobias, Zachary J C; Cohen, Hannah R; Glover, Greta; Weissman, Tamily A
2015-01-01
The lateral line sensory system in fish detects movements in the water and allows fish to respond to predators, prey, and other stimuli. As the lateral line forms in the first two days of zebrafish development, axons extend caudally along the lateral surface of the fish, eventually forming synapses with hair cells of neuromasts. Growing lateral line axons are located superficially under the skin and can be labeled in living zebrafish using fluorescent protein expression. This system provides a relatively straightforward approach for in vivo time-lapse imaging of neuronal development in an undergraduate setting. Here we describe an upper-level neurobiology laboratory module in which students investigate aspects of axonal development in the zebrafish lateral line system. Students learn to handle and image living fish, collect time-lapse videos of moving mitochondria, and quantitatively measure mitochondrial dynamics by generating and analyzing kymographs of their movements. Energy demands may differ between axons with extending growth cones versus axons that have already reached their targets and are forming synapses. Since relatively little is known about this process in developing lateral line axons, students generate and test their own hypotheses regarding how mitochondrial dynamics may differ at two different time points in axonal development. Students also learn to incorporate into their analysis a powerful yet accessible quantitative tool, the kymograph, which is used to graph movement over time. After students measure and quantify dynamics in living fish at 1 and 2 days post fertilization, this module extends into independent projects, in which students can expand their studies in a number of different, inquiry-driven directions. The project can also be pared down for courses that wish to focus solely on the quantitative analysis (without fish handling), or vice versa. This research module provides a useful approach for the design of open-ended laboratory research projects that integrate the scientific process into undergraduate Biology courses, as encouraged by the AAAS and NSF Vision and Change Initiative.
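Building a kymograph from a time-lapse stack is a compact operation: the same line of pixels is sampled in every frame and the samples are stacked so that one axis is position and the other is time. A minimal sketch, assuming the stack is a NumPy array ordered (frames, height, width) and the axon segment of interest lies along a single image row; the function and parameter names are illustrative:

```python
import numpy as np

def make_kymograph(stack, row, col_start, col_stop):
    """Build a kymograph (time x position) from a time-lapse stack by
    sampling the same horizontal line segment in every frame."""
    return np.stack([frame[row, col_start:col_stop] for frame in stack])

# The slope of a track on the kymograph gives speed:
# speed = (pixels moved * microns_per_pixel) / (frames spanned * seconds_per_frame)
```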
A novel line segment detection algorithm based on graph search
NASA Astrophysics Data System (ADS)
Zhao, Hong-dan; Liu, Guo-ying; Song, Xu
2018-02-01
To overcome the problem of extracting line segments from an image, a method of line segment detection based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. For the candidate straight line segments, their adjacency relationships are depicted by a graph model, based on which a depth-first search algorithm is employed to determine how many adjacent line segments need to be merged. Finally, we use the least squares method to fit the detected straight lines. The comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
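The merging stage can be sketched as a depth-first search over the segment adjacency graph followed by a least-squares line fit for each connected group. The code below is a simplified stand-in for the paper's algorithm: it assumes the adjacency graph has already been built and fits lines as y = m*x + b, which breaks down for near-vertical groups:

```python
import numpy as np

def merge_segments(segments, adjacency):
    """Group candidate segments linked by the adjacency graph (iterative
    depth-first search over connected components) and fit one straight
    line to each group by least squares.

    segments  : list of ((x1, y1), (x2, y2)) endpoint pairs
    adjacency : dict mapping a segment index to a list of adjacent indices
    """
    visited, lines = set(), []
    for start in range(len(segments)):
        if start in visited:
            continue
        stack, group = [start], []
        while stack:                          # depth-first traversal
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            group.append(node)
            stack.extend(adjacency.get(node, []))
        pts = np.array([p for i in group for p in segments[i]], dtype=float)
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)  # y = slope*x + intercept
        lines.append((slope, intercept))
    return lines
```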
NASA Astrophysics Data System (ADS)
Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.
2011-03-01
The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular diseases. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. The IMT measurement bias was 0.032 +/- 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. Our novel CARES approach processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensured complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies involving atherosclerosis.
Robust feature matching via support-line voting and affine-invariant ratios
NASA Astrophysics Data System (ADS)
Li, Jiayuan; Hu, Qingwu; Ai, Mingyao; Zhong, Ruofei
2017-10-01
Robust image matching is crucial for many applications of remote sensing and photogrammetry, such as image fusion, image registration, and change detection. In this paper, we propose a robust feature matching method based on support-line voting and affine-invariant ratios. We first use popular feature matching algorithms, such as SIFT, to obtain a set of initial matches. A support-line descriptor based on multiple adaptive binning gradient histograms is subsequently applied in the support-line voting stage to filter outliers. In addition, we use affine-invariant ratios computed by a two-line structure to refine the matching results and estimate the local affine transformation. The local affine model is more robust to distortions caused by elevation differences than the global affine transformation, especially for high-resolution remote sensing images and UAV images. Thus, the proposed method is suitable for both rigid and non-rigid image matching problems. Finally, we extract as many high-precision correspondences as possible based on the local affine extension and build a grid-wise affine model for remote sensing image registration. We compare the proposed method with six state-of-the-art algorithms on several data sets and show that our method significantly outperforms the other methods. The proposed method achieves 94.46% average precision on 15 challenging remote sensing image pairs, while the second-best method, RANSAC, only achieves 70.3%. In addition, the number of detected correct matches of the proposed method is approximately four times the number of initial SIFT matches.
Single step high-speed printing of continuous silver lines by laser-induced forward transfer
NASA Astrophysics Data System (ADS)
Puerto, D.; Biver, E.; Alloncle, A.-P.; Delaporte, Ph.
2016-06-01
The development of a high-speed ink printing process by Laser-Induced Forward Transfer (LIFT) is of great interest to the printing community. To address the problems and limitations of this process that have been previously identified, we have performed an experimental study on laser micro-printing of silver nanoparticle inks by LIFT and demonstrated for the first time the printing of continuous conductive lines in a single pass at velocities of 17 m/s using a 1 MHz repetition rate laser. We investigated the printing process by means of a time-resolved imaging technique to visualize the ejection dynamics of single and adjacent jets. The control of the donor film properties is of prime importance to achieve single-step printing of continuous lines at high velocities. We use a 30 ps pulse duration laser with a wavelength of 343 nm and a repetition rate from 0.2 to 1 MHz. A galvanometric mirror head controls the distance between two consecutive jets by scanning the focused beam along an ink-coated donor substrate at different velocities. Droplets and lines of silver inks are laser-printed on glass and PET flexible substrates, and we characterized their morphological quality by atomic force microscopy (AFM) and optical microscopy.
Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L
2001-03-01
A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.
Language-motor interference reflected in MEG beta oscillations.
Klepp, Anne; Niccolai, Valentina; Buccino, Giovanni; Schnitzler, Alfons; Biermann-Ruben, Katja
2015-04-01
The involvement of the brain's motor system in action-related language processing can lead to overt interference with simultaneous action execution. The aim of the current study was to find evidence for this behavioural interference effect and to investigate its neurophysiological correlates using oscillatory MEG analysis. Subjects performed a semantic decision task on single action verbs, describing actions executed with the hands or the feet, and abstract verbs. Right hand button press responses were given for concrete verbs only. Therefore, longer response latencies for hand compared to foot verbs should reflect interference. We found interference effects to depend on verb imageability: overall response latencies for hand verbs did not differ significantly from foot verbs. However, imageability interacted with effector: while response latencies to hand and foot verbs with low imageability were equally fast, those for highly imageable hand verbs were longer than for highly imageable foot verbs. The difference is reflected in motor-related MEG beta band power suppression, which was weaker for highly imageable hand verbs compared with highly imageable foot verbs. This provides a putative neuronal mechanism for language-motor interference where the involvement of cortical hand motor areas in hand verb processing interacts with the typical beta suppression seen before movements. We found that the facilitatory effect of higher imageability on action verb processing time is perturbed when verb and motor response relate to the same body part. Importantly, this effect is accompanied by neurophysiological effects in beta band oscillations. The attenuated power suppression around the time of movement, reflecting decreased cortical excitability, seems to result from motor simulation during action-related language processing. This is in line with embodied cognition theories. Copyright © 2015. Published by Elsevier Inc.
Interactive digital image manipulation system
NASA Technical Reports Server (NTRS)
Henze, J.; Dezur, R.
1975-01-01
The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true color display monitor, with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible so that there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data are fully supported, and there is no restriction on the number of dimensions. In this way, multispectral data collected at more than one point in time may simply be treated as data collected with twice (three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.
3D Imaging of Density Gradients Using Plenoptic BOS
NASA Astrophysics Data System (ADS)
Klemkowsky, Jenna; Clifford, Chris; Fahringer, Timothy; Thurow, Brian
2016-11-01
The combination of background oriented schlieren (BOS) and a plenoptic camera, termed Plenoptic BOS, is explored through two proof-of-concept experiments. The motivation of this work is to provide a 3D technique capable of observing density disturbances. BOS uses the relationship between density and refractive index gradients to observe an apparent shift in a patterned background through image comparison. Conventional BOS systems acquire a single line-of-sight measurement and require complex configurations to obtain 3D measurements, which are not always compatible with experimental facilities. Plenoptic BOS exploits the plenoptic camera's ability to generate multiple perspective views and refocused images from a single raw plenoptic image during post-processing. Using such capabilities, with regard to BOS, provides multiple line-of-sight measurements of density disturbances, which can be used collectively to generate refocused BOS images. Such refocused images allow the position of density disturbances to be determined qualitatively and quantitatively. The image that provides the sharpest density gradient signature corresponds to a specific depth. These results offer motivation to advance Plenoptic BOS with the ultimate goal of reconstructing a 3D density field.
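The BOS image comparison itself, measuring the apparent background shift between a reference and a distorted interrogation window, is commonly done by cross-correlation. A minimal sketch using phase correlation from scikit-image; the window pairing and the subpixel upsampling factor are illustrative assumptions, not the authors' processing chain:

```python
from skimage.registration import phase_cross_correlation

def bos_shift(reference_window, distorted_window):
    """Estimate the apparent background shift between a reference and a
    distorted interrogation window via phase correlation; the upsampling
    factor of 10 gives roughly 0.1-pixel resolution."""
    shift, error, _ = phase_cross_correlation(reference_window,
                                              distorted_window,
                                              upsample_factor=10)
    return shift  # (row_shift, col_shift) in pixels
```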
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s0 and a0 Lamb waves is vividly apparent in the images.
NASA Astrophysics Data System (ADS)
Prijono, Agus; Darmawan Hangkawidjaja, Aan; Ratnadewi; Saleh Ahmar, Ansari
2018-01-01
Personal verification methods in use today, such as fingerprints, signatures, and personal identification numbers (PINs) for banking systems, identity cards, and attendance, are easily copied and forged. This leaves such systems insecure and vulnerable to access by unauthorized persons. In this research, a verification system is implemented using images of the blood vessels on the back of the hand; this trait is more difficult to imitate because the vessels lie inside the human body, so it is safer to use. The blood vessel pattern on the back of the human hand is unique; even twins have different vessel images. Moreover, the vessel image does not depend on a person's age, so it can be used over the long term, except in cases of accident or disease. Because the vein pattern is unique, it can be used to recognize a person. In this paper, we use a modified method, the Modified Local Line Binary Pattern (MLLBP), to recognize a person from the blood vessel image. Features extracted from the vessel images are matched using the Hamming distance. One verification test calculates the acceptance rate for the same person; a rejection error occurs when a person is not matched by the system to his or her own data. For 10 persons, comparing 15 images against 5 enrolled vein images per person yielded an 80.67% success rate. Another test verifies two images from different persons, i.e. a forgery attempt; the verification is correct if the system rejects the forged image. The ten different persons were not verified, yielding a result of 94%.
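The matching idea, a binary local-pattern descriptor compared with the Hamming distance, can be sketched with a plain local binary pattern as a stand-in, since the abstract does not spell out the MLLBP encoding. The descriptor, the acceptance threshold, and the function names below are illustrative assumptions:

```python
import numpy as np

def lbp_bits(image):
    """Plain 8-neighbour local binary pattern, returned as a flat bit
    vector, used here as a stand-in for the MLLBP descriptor."""
    img = image.astype(float)
    c = img[1:-1, 1:-1]                       # centre pixels
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    bits = np.stack([(n >= c) for n in neighbours], axis=-1)
    return bits.ravel().astype(np.uint8)

def hamming_distance(a, b):
    """Fraction of differing bits between two binary feature vectors."""
    return np.count_nonzero(a != b) / a.size

def verify(template_bits, probe_bits, threshold=0.25):
    """Accept if the probe is close enough to the enrolled template;
    the threshold here is purely illustrative."""
    return hamming_distance(template_bits, probe_bits) < threshold
```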
Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.
Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo
2018-04-16
Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images so that they can be transformed into the same coordinate system. Then, the gray-scale difference, the gradient minimum, and the optical flow value of pixels within a neighborhood of the overlapping area of adjacent images are calculated, and these are used to build an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines and complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which finds a better splicing path between adjacent UAV images and avoids ground objects more effectively. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.
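A simplified sketch of the seam-line search idea, not the authors' exact energy aggregation or traversal strategy: the per-pixel energy mixes gray-level difference, gradient magnitude, and a precomputed optical-flow magnitude (assumed given), and a standard dynamic-programming pass extracts a low-energy seam through the overlap.

```python
import numpy as np

def seam_energy(img1, img2, flow_mag, w=(1.0, 1.0, 1.0)):
    """Per-pixel energy over the overlap of two registered grayscale images."""
    diff = np.abs(img1.astype(float) - img2.astype(float))
    gy, gx = np.gradient(img1.astype(float))
    grad = np.hypot(gx, gy)
    return w[0] * diff + w[1] * grad + w[2] * flow_mag

def find_seam(energy):
    """Classic top-to-bottom dynamic programming: one column index per row."""
    h, w = energy.shape
    cost = energy.copy()
    back = np.zeros((h, w), dtype=int)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            j = int(np.argmin(cost[y - 1, lo:hi])) + lo
            back[y, x] = j
            cost[y, x] += cost[y - 1, j]
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        seam[y] = back[y + 1, seam[y + 1]]
    return seam
```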
Danovi, Davide; Folarin, Amos A; Baranowski, Bart; Pollard, Steven M
2012-01-01
Small molecules with potent biological effects on the fate of normal and cancer-derived stem cells represent both useful research tools and new drug leads for regenerative medicine and oncology. Long-term expansion of mouse and human neural stem cells is possible using adherent monolayer culture. These cultures represent a useful cellular resource for carrying out image-based high content screening of small chemical libraries. Improvements in automated microscopy, desktop computational power, and freely available image processing tools now mean that such chemical screens are realistic to undertake in individual academic laboratories. Here we outline a cost-effective and versatile time-lapse imaging strategy suitable for chemical screening. Protocols are described for the handling and screening of human fetal neural stem (NS) cell lines and their malignant counterparts, glioblastoma-derived neural stem cells (GNS). We focus on identification of cytostatic and cytotoxic "hits" and discuss future possibilities and challenges for extending this approach to assay lineage commitment and differentiation. Copyright © 2012 Elsevier Inc. All rights reserved.
Nakada, Tsutomu; Matsuzawa, Hitoshi; Fujii, Yukihiko; Takahashi, Hitoshi; Nishizawa, Masatoyo; Kwee, Ingrid L
2006-07-01
Clinical magnetic resonance imaging (MRI) has recently entered the "high-field" era, and systems equipped with 3.0-4.0T superconductive magnets are becoming the gold standard for diagnostic imaging. While higher signal-to-noise ratio (S/N) is a definite advantage of higher-field systems, greater susceptibility effect remains a significant trade-off. To take advantage of a higher-field system in performing routine clinical imaging at higher anatomical resolution, we implemented a vector contrast technique, three-dimensional anisotropy contrast (3DAC), on a 3.0T system with a PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) sequence, a method capable of effectively eliminating undesired artifacts in rapid diffusion imaging sequences. One hundred subjects (20 normal volunteers and 80 volunteers with various central nervous system diseases) participated in the study. Anisotropic diffusion-weighted PROPELLER images were obtained on a General Electric (Waukesha, WI, USA) Signa 3.0T for each axis, with a b-value of 1100 sec/mm(2). Subsequently, 3DAC images were constructed using in-house software written in MATLAB (MathWorks, Natick, MA, USA). The vector contrast provides exquisite anatomical detail, illustrated by clear identification of all major tracts through the entire brain. 3DAC images provide better anatomical resolution for brainstem glioma than higher-resolution T2 reversed images. Degenerative processes of disease-specific tracts were clearly identified, as illustrated in cases of multiple system atrophy and Machado-Joseph disease. Anatomical images of significantly higher resolution than the best current standard, T2 reversed images, were successfully obtained. As a technique readily applicable in a routine clinical setting, 3DAC PROPELLER on a 3.0T system will be a powerful addition to diagnostic imaging.
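A rough simplification of the vector-contrast idea, not the authors' actual 3DAC reconstruction (which they implemented in MATLAB): the three axis-wise diffusion-weighted images are mapped to RGB channels so that tract orientation is rendered as color.

```python
import numpy as np

def anisotropy_contrast_rgb(dwi_x, dwi_y, dwi_z):
    """Combine three axis-wise diffusion-weighted images into one RGB image."""
    rgb = np.stack([dwi_x, dwi_y, dwi_z], axis=-1).astype(float)
    rgb -= rgb.min()
    return rgb / (rgb.max() + 1e-9)   # scaled to [0, 1] for display
```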
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rathore, Kavita, E-mail: kavira@iitk.ac.in, E-mail: pmunshi@iitk.ac.in, E-mail: sudeepb@iitk.ac.in; Munshi, Prabhat, E-mail: kavira@iitk.ac.in, E-mail: pmunshi@iitk.ac.in, E-mail: sudeepb@iitk.ac.in; Bhattacharjee, Sudeep, E-mail: kavira@iitk.ac.in, E-mail: pmunshi@iitk.ac.in, E-mail: sudeepb@iitk.ac.in
A new non-invasive diagnostic system is developed for Microwave Induced Plasma (MIP) to reconstruct tomographic images of a 2D emission profile. A compact MIP system has wide application in industry as well as in research, such as thrusters for space propulsion, high-current ion beams, and creation of negative ions for heating of fusion plasma. The emission profile depends on two crucial parameters, namely, the electron temperature and density (over the entire spatial extent) of the plasma system. Emission tomography provides basic understanding of plasmas, and it is very useful for monitoring the internal structure of plasma phenomena without disturbing the actual processes. This paper presents the development of a compact, modular, and versatile Optical Emission Tomography (OET) tool for a cylindrical, magnetically confined MIP system. It has eight slit-hole cameras, each consisting of a complementary metal-oxide-semiconductor linear image sensor for light detection. The optical noise is reduced by using an aspheric lens and interference band-pass filters in each camera. The entire cylindrical plasma can be scanned with an automated sliding-ring mechanism arranged in fan-beam data collection geometry. The design of the camera includes a unique possibility to incorporate different filters to select particular wavelengths of light from the plasma. This OET system includes band-pass filters selected for the argon emission lines at 750 nm, 772 nm, and 811 nm and the hydrogen emission lines Hα (656 nm) and Hβ (486 nm). A convolution back projection algorithm is used to obtain the tomographic images of the plasma emission lines. The paper mainly focuses on (a) the design of the OET system in detail and (b) a study of the emission profile for the 750 nm argon emission line to validate the system design.
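A minimal sketch of the reconstruction step, using filtered back projection from scikit-image as a stand-in for the convolution back projection algorithm; re-binning the fan-beam camera data into parallel projections is assumed to have been done beforehand.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_emission_profile(sinogram, angles_deg):
    """sinogram: (detector samples x projection angles) array of line-integrated
    emission; returns a 2D emission image on the reconstruction grid."""
    return iradon(sinogram, theta=angles_deg, filter_name='ramp', circle=True)
```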
Integration of medical imaging into a multi-institutional hospital information system structure.
Dayhoff, R E
1995-01-01
The Department of Veterans Affairs (VA) is providing integrated text and image data to its clinical users at its Washington and Baltimore medical centers and, soon, at nine other medical centers. The DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including cardiology, gastroenterology, pathology, dermatology, surgery, radiology, podiatry, dentistry, and emergency medicine. These images, which include color and gray scale images, and electrocardiogram waveforms, are displayed on workstations located throughout the medical centers. Integration of clinical images with the VA's electronic mail system allows transfer of data from one medical center to another. The ability to incorporate transmitted text and image data into on-line patient records at the collaborating sites is an important aspect of professional consultation. In order to achieve the maximum benefits from an integrated patient record system, a critical mass of information must be available for clinicians. When there is also seamless support for administration, it becomes possible to re-engineer the processes involved in providing medical care.
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-03-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
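A hedged sketch of the local contrast thresholding idea underlying the approach (the published PHANTAST toolbox also applies a post hoc halo correction, omitted here; the window size and threshold are assumptions).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_mask(pcm_image, window=15, threshold=0.05):
    """Flag pixels whose local coefficient of variation exceeds a threshold."""
    img = pcm_image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    contrast = std / (mean + 1e-9)
    return contrast > threshold        # boolean mask of cellular regions
```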
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adel, G.T.; Luttrell, G.H.
Automatic control of fine coal cleaning circuits has traditionally been limited by the lack of sensors for on-line ash analysis. Although several nuclear-based analyzers are available, none have seen widespread acceptance. This is largely because nuclear sensors are expensive and tend to be influenced by changes in seam type and pyrite content. Recently, researchers at VPI&SU have developed an optical sensor for phosphate analysis. The sensor uses image processing technology to analyze video images of phosphate ore. It is currently being used by PCS Phosphate for off-line analysis of dry flotation concentrate. The primary advantages of optical sensors over nuclear sensors are that they are significantly cheaper, are not subject to measurement variations due to changes in high atomic number materials, are inherently safer, and require no special radiation permitting. The purpose of this work is to apply the knowledge gained in the development of an optical phosphate analyzer to the development of an on-line ash analyzer for fine coal slurries. During the past quarter, the current prototype of the on-line optical ash analyzer was subjected to extensive testing at the Middlefork coal preparation plant. Initial work focused on obtaining correlations between ash content and mean gray level, while developmental work on the more comprehensive neural network calibration approach continued. Test work to date shows a promising trend in the correlation between ash content and mean gray level. Unfortunately, data scatter remains significant. Recent tests seem to eliminate variations in percent solids, particle size distribution, measurement angle, and light setting as causes for the data scatter; however, equipment warm-up time and the number of images taken per measurement appear to have a significant impact on the gray-level values obtained. 8 figs., 8 tabs.
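A toy sketch of the simpler of the two calibration approaches mentioned (mean gray level versus laboratory ash content via a linear fit); the data layout and variable names are assumptions, and the report's more comprehensive neural network calibration is not shown.

```python
import numpy as np

def calibrate_ash_from_gray(images, lab_ash_percent):
    """Fit ash% ~ a * mean_gray + b from slurry images with known lab ash values."""
    mean_gray = np.array([img.mean() for img in images])
    a, b = np.polyfit(mean_gray, np.asarray(lab_ash_percent, dtype=float), 1)
    return a, b

def predict_ash(image, a, b):
    """Apply the linear calibration to a new slurry image."""
    return a * image.mean() + b
```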
Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.
Gutschwager, Berndt; Hollandt, Jörg
2015-12-20
We present a novel method of nonuniformity correction of imaging systems over a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution and only requires sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance distribution and a thermal imager of a predefined nonuniform focal-plane-array responsivity is presented.
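One plausible reading of the principle, written out for the simple case of a pure row shift by Δx (a sketch, not the full procedure in the paper): with a temporally stable source, ratios of the shifted recordings cancel the unknown radiance and leave only relative pixel responsivities.

```latex
% Measured signals: responsivity R(x,y), unknown source radiance L(x,y).
% M_1 is the primary image, M_2 the image recorded after a row shift by \Delta x.
M_1(x,y) = R(x,y)\, L(x,y), \qquad
M_2(x,y) = R(x,y)\, L(x-\Delta x,\, y)

% Dividing pixel (x,y) of M_1 by pixel (x+\Delta x, y) of M_2 cancels L:
\frac{M_1(x,y)}{M_2(x+\Delta x,\, y)}
  = \frac{R(x,y)\, L(x,y)}{R(x+\Delta x,\, y)\, L(x,y)}
  = \frac{R(x,y)}{R(x+\Delta x,\, y)}
```

Chaining these ratios along rows, and analogously along columns using the line shift, yields a relative responsivity map that can be used for the correction.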
The Characteristic Dimension of Lyman-Alpha Forest Clouds Toward Q0957+561
NASA Technical Reports Server (NTRS)
Dolan, J. F.; Michalitsianos, A. G.; Hill, R. J.; Nguyen, Q. T.; Fisher, Richard (Technical Monitor)
2000-01-01
Far-ultraviolet spectra of the gravitational lens components Q0957+561 A and B were obtained with the Hubble Space Telescope Faint Object Spectrograph to investigate the characteristic dimension of Lyman-alpha forest clouds in the direction of the quasar. If one makes the usual assumption that the absorbing structures are spherical clouds with a single radius, that radius can be found analytically from the ratio of the number of Lyman-alpha lines seen in only one line of sight to the number seen in both. A simple power series approximation to this solution, accurate everywhere to better than 1%, will be presented. Absorption lines in Q0957+561 having equivalent width greater than 0.3 A in the observer's frame and not previously identified as interstellar lines, metal lines, or higher-order Lyman lines were taken to be Ly-alpha forest lines. The existence of each line in this consistently selected set was then verified by its presence in two archival FOS spectra with approximately 1.5 times higher signal to noise than our spectra. Ly-alpha forest lines appear at 41 distinct wavelengths in the spectra of the two images. One absorption line in the spectrum of image A has no counterpart in the spectrum of image B, and one line in image B has no counterpart in image A. Based on the separation of the lines of sight over the redshift range searched for Ly-alpha forest lines, the density of the absorbing clouds in the direction of Q0957+561 must change significantly over a radius R = 160 (+120, -70) h_50^-1 kpc (H_0 = 50 h_50 km s^-1 Mpc^-1, q_0 = 1/2). The 95% confidence interval on R extends from 50 to 950 h_50^-1 kpc.
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
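A heavily simplified sketch of the general idea (nearest-neighbour association plus a straight-line consistency check), not the actual MILD algorithm used by A-Track; the tolerances are assumptions.

```python
import numpy as np

def moving_candidates(detections, tol=1.5, min_motion=5.0):
    """detections: list of (N_i, 2) arrays of source positions, one per frame.
    Returns tracks whose positions are roughly collinear and show net motion."""
    tracks = [[tuple(p)] for p in detections[0]]
    for frame in detections[1:]:
        for track in tracks:
            d = np.linalg.norm(frame - np.array(track[-1]), axis=1)
            track.append(tuple(frame[np.argmin(d)]))   # nearest-neighbour link
    movers = []
    for track in tracks:
        pts = np.array(track)
        start, end = pts[0], pts[-1]
        if np.linalg.norm(end - start) < min_motion:
            continue                                   # stationary star, skip
        chord = (end - start) / np.linalg.norm(end - start)
        rel = pts - start
        perp = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
        if perp.max() < tol:
            movers.append(pts)                         # steady, linear motion
    return movers
```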
Kweon, Tae Dong; Kim, Ji Young; Lee, Hye Yeon; Kim, Myung Hwa; Lee, Youn-Woo
2014-01-01
Cervical medial branch blocks are used to treat patients with chronic neck pain. The aim of this study was to clarify the anatomical aspects of the cervical medial branches to improve the accuracy and safety of radiofrequency denervation. Twenty cervical specimens were harvested from 20 adult cadavers. The anatomical parameters of the C4-C7 cervical medial branches were measured. The 3-dimensional computed tomography reconstruction images of the bone were also analyzed. Based on cadaveric analysis, most of the cervical dorsal rami gave off 1 medial branch; however, the cervical dorsal rami gave off 2 medial branches in 27%, 15%, 2%, and 0% at the vertebral level C4, C5, C6, and C7, respectively. The diameters of the medial branches varied from 1.0 to 1.2 mm, and the average distance from the notch of inferior articular process to the medial branches was about 2 mm. Most of the bifurcation sites were located at the medial side of the posterior tubercle of the transverse process. On the analysis of 3-dimensional computed tomography reconstruction images, cervical medial branches (C4 to C6) passed through the upper 49% to 53% of a line between the tips of 2 consecutive superior articular processes (anterior line). Also, cervical medial branches passed through the upper 28% to 35% of a line between the midpoints of 2 consecutive facet joints (midline). The present anatomical study may help improve accuracy and safety during radiofrequency denervation of the cervical medial branches.
Classification of Palmprint Using Principal Line
NASA Astrophysics Data System (ADS)
Prasad, Munaga V. N. K.; Kumar, M. K. Pramod; Sharma, Kuldeep
In this paper, a new classification scheme for palmprints is proposed. The palmprint is one of the reliable physiological characteristics that can be used to authenticate an individual. Palmprint classification provides an important indexing mechanism for a very large palmprint database. Here, the palmprint database is initially categorized into two groups, a right-hand group and a left-hand group. Then, each group is further classified based on the distance traveled by the principal line, i.e., the heart line. During pre-processing, a rectangular Region of Interest (ROI) in which only the heart line is present is extracted. The ROI is then divided into 6 regions, and the palmprint is classified according to the regions through which the heart line traverses. Consequently, our scheme allows 64 categories for each group, for a total of 128 possible categories. The technique proposed in this paper populates only 15 such categories and classifies no more than 20.96% of the images into any single category.
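A toy sketch of the indexing idea (the region layout and bit order are assumptions): the regions traversed by the extracted heart line form a 6-bit code, giving 2^6 = 64 categories per hand group and 128 overall.

```python
import numpy as np

def palmprint_category(heart_line_mask, is_right_hand):
    """heart_line_mask: boolean ROI image containing only the extracted heart line."""
    strips = np.array_split(heart_line_mask, 6, axis=1)   # 6 regions of the ROI
    bits = [int(strip.any()) for strip in strips]          # 1 if the line enters the region
    code = sum(bit << i for i, bit in enumerate(bits))     # 0..63
    return code + (64 if is_right_hand else 0)             # 0..127 overall
```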
High-throughput Raman chemical imaging for evaluating food safety and quality
NASA Astrophysics Data System (ADS)
Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.
2014-05-01
A line-scan hyperspectral system was developed to enable Raman chemical imaging for large sample areas. A custom-designed 785 nm line laser based on a scanning mirror serves as an excitation source. A 45° dichroic beamsplitter reflects the laser light to form a 24 cm × 1 mm excitation line normally incident on the sample surface. Raman signals along the laser line are collected by a detection module consisting of a dispersive imaging spectrograph and a CCD camera. A hypercube is accumulated line by line as a motorized table moves the samples transversely through the laser line. The system covers a Raman shift range of -648.7 to 2889.0 cm^-1 and a 23 cm wide area. An example application, authenticating milk powder, was presented to demonstrate the system performance. In four minutes, the system acquired a 512 × 110 × 1024 hypercube (56,320 spectra) from four 47-mm-diameter Petri dishes containing four powder samples. Chemical images were created for detecting two adulterants (melamine and dicyandiamide) that had been mixed into the milk powder.
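An illustrative sketch of turning the hypercube into a single-band chemical image; the peak position, integration window, and crude end-point baseline are assumptions rather than the published procedure.

```python
import numpy as np

def chemical_image(hypercube, shift_axis_cm1, peak_cm1, half_width=10.0):
    """hypercube: (rows, cols, bands); shift_axis_cm1: Raman shift per band."""
    band = (shift_axis_cm1 >= peak_cm1 - half_width) & \
           (shift_axis_cm1 <= peak_cm1 + half_width)
    window = hypercube[..., band].astype(float)
    baseline = 0.5 * (window[..., 0] + window[..., -1])     # crude linear baseline
    return (window - baseline[..., None]).sum(axis=-1)      # adulterant abundance map
```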
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation with the necessary rate and precision requires considerable effort unless extensive camera stabilization is used. But stabilization, too, entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be reliably determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
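A sketch of the homologous-point step only, assuming pre-corrected grayscale image strips; SURF lives in the opencv-contrib xfeatures2d module, so ORB is used here as a free fallback when it is unavailable.

```python
import cv2

def match_homologous_points(strip_a, strip_b, max_matches=500):
    """Detect and match features between two pre-corrected line-image strips."""
    try:
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except AttributeError:
        detector = cv2.ORB_create(nfeatures=2000)      # fallback without contrib
    kp1, des1 = detector.detectAndCompute(strip_a, None)
    kp2, des2 = detector.detectAndCompute(strip_b, None)
    norm = cv2.NORM_HAMMING if des1.dtype == 'uint8' else cv2.NORM_L2
    matcher = cv2.BFMatcher(norm, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_matches]]
```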
Predicting cotton yield of small field plots in a cotton breeding program using UAV imagery data
NASA Astrophysics Data System (ADS)
Maja, Joe Mari J.; Campbell, Todd; Camargo Neto, Joao; Astillo, Philip
2016-05-01
One of the major criteria used for advancing experimental lines in a breeding program is yield performance. Obtaining yield performance data requires machine picking each plot with a cotton picker modified to weigh individual plots. Harvesting thousands of small field plots requires a great deal of time and resources. The efficiency of cotton breeding could be increased significantly, and its cost decreased, if accurate methods to predict yield performance were available. This work investigates the feasibility of using an image processing technique with a commercial off-the-shelf (COTS) camera mounted on a small Unmanned Aerial Vehicle (sUAV), collecting normal RGB images, to predict cotton yield on small plots. An orthomosaic image was generated from multiple images and used to process multiple segmented plots. A Gaussian blur was used to remove the high-frequency component of the images, which corresponds to the cotton pixels, and an image subtraction technique was used to recover high-frequency pixel images. The cotton pixels were then separated using k-means clustering with 5 classes. Based on the current work, the percentage cotton area was computed as the generated high-frequency image (cotton pixels) divided by the total area of the plot. Preliminary results (five flights, 3 altitudes) showed that cotton cover on multiple pre-selected 227 sq. m plots averaged 8%, which translates to approximately 22.3 kg of cotton. The yield prediction equation generated from the test site was then used on a separate validation site and produced a prediction error of less than 10%. In summary, the results indicate that a COTS camera with an appropriate image processing technique can produce results that are comparable to those of expensive sensors.
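An illustrative OpenCV sketch of the described processing chain (the blur kernel, cluster choice, and "brightest cluster is cotton" rule are assumptions): subtract the low-frequency background, cluster the residue with k-means (k = 5), and report the percentage of the plot covered by cotton pixels.

```python
import cv2
import numpy as np

def percent_cotton_cover(plot_bgr):
    """plot_bgr: BGR image of one segmented plot; returns percent cotton cover."""
    gray = cv2.cvtColor(plot_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    low_freq = cv2.GaussianBlur(gray, (31, 31), 0)       # low-frequency background
    high_freq = cv2.subtract(gray, low_freq)             # cotton shows as bright residue
    samples = high_freq.reshape(-1, 1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, 5, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    cotton_label = int(np.argmax(centers))                # brightest cluster ~ cotton
    cotton_pixels = np.count_nonzero(labels == cotton_label)
    return 100.0 * cotton_pixels / samples.shape[0]
```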
Efficient and automatic image reduction framework for space debris detection based on GPU technology
NASA Astrophysics Data System (ADS)
Diprima, Francesco; Santoni, Fabio; Piergentili, Fabrizio; Fortunato, Vito; Abbattista, Cristoforo; Amoruso, Leonardo
2018-04-01
In recent years, the increasing number of space debris objects has created the need for a distributed monitoring system for the prevention of possible space collisions. Space surveillance based on ground telescopes allows the monitoring of the traffic of Resident Space Objects (RSOs) in Earth orbit. This space debris surveillance has several applications, such as orbit prediction and conjunction assessment. This paper proposes an optimized, performance-oriented pipeline for source extraction intended for the automatic detection of space debris in optical data. The detection method is based on morphological operations and the Hough transform for lines. Near real-time detection is obtained using General Purpose computing on Graphics Processing Units (GPGPU). The high degree of processing parallelism provided by GPGPU allows the data analysis to be split over thousands of threads in order to process big datasets within a limited computational time. The implementation has been tested on a large and heterogeneous image data set, containing satellites imaged from different orbit ranges and with multiple observation modes (i.e., sidereal and object tracking). These images were taken during an observation campaign performed from the EQUO (EQUatorial Observatory) observatory located at the Broglio Space Center (BSC) in Kenya, which is part of the ASI-Sapienza Agreement.
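A CPU-only OpenCV sketch of the stated detection recipe (morphological operations followed by a Hough transform for lines); the kernel size and Hough parameters are assumptions, and the paper's pipeline runs the equivalent steps on the GPU for near real-time throughput.

```python
import cv2
import numpy as np

def detect_streaks(frame_8bit):
    """frame_8bit: single-channel uint8 image (FITS data already scaled)."""
    _, binary = cv2.threshold(frame_8bit, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # keep elongated blobs
    lines = cv2.HoughLinesP(opened, 1, np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)     # candidate streaks
    return np.empty((0, 4), dtype=int) if lines is None else lines.reshape(-1, 4)
```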
NASA Technical Reports Server (NTRS)
1988-01-01
The Charters of Freedom Monitoring System will periodically assess the physical condition of the U.S. Constitution, Declaration of Independence, and Bill of Rights. Although protected in helium-filled glass cases, the documents are subject to damage from light, vibration, and humidity. The photometer is a CCD detector used as the electronic film for the system's scanning camera, which mechanically scans each document line by line and acquires a series of images, each representing a one-square-inch portion of the document. Perkin-Elmer Corporation's photometer is capable of detecting changes in contrast, shape, or other indicators of degradation with 5 to 10 times the sensitivity of the human eye. A Vicom image processing computer receives the data from the photometer, stores it, and manipulates it, allowing comparison of electronic images over time to detect changes.
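A toy sketch of the comparison idea (registration between scans is assumed to have been done already; the tolerance is an assumption): flag pixels in a current tile whose gray level departs from the baseline tile by more than a small fraction of full scale.

```python
import numpy as np

def changed_pixels(baseline_tile, current_tile, tol=0.02):
    """Return a boolean mask of pixels that changed between two aligned scans."""
    base = baseline_tile.astype(float)
    curr = current_tile.astype(float)
    return np.abs(curr - base) > tol * (base.max() + 1e-9)
```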