Goal-oriented evaluation of binarization algorithms for historical document images
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady
2013-01-01
Binarization is of significant importance in document analysis systems. It is an essential first step, prior to further stages such as Optical Character Recognition (OCR), document segmentation, or enhancement of readability of the document after some restoration stages. Hence, proper evaluation of binarization methods to verify their effectiveness is of great value to the document analysis community. In this work, we perform a detailed goal-oriented evaluation of image quality assessment of the 18 binarization methods that participated in the DIBCO 2011 competition using the 16 historical document test images used in the contest. We are interested in the image quality assessment of the outputs generated by the different binarization algorithms as well as the OCR performance, where possible. We compare our evaluation of the algorithms based on human perception of quality to the DIBCO evaluation metrics. The results obtained provide an insight into the effectiveness of these methods with respect to human perception of image quality as well as OCR performance.
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
Document image cleanup and binarization
NASA Astrophysics Data System (ADS)
Wu, Victor; Manmatha, Raghaven
1998-04-01
Image binarization is a difficult task for documents with text over textured or shaded backgrounds, poor contrast, and/or considerable noise. Current optical character recognition (OCR) and document analysis technology does not handle such documents well. We have developed a simple yet effective algorithm for document image clean-up and binarization. The algorithm consists of two basic steps. In the first step, the input image is smoothed using a low-pass filter. The smoothing operation enhances the text relative to any background texture, because background texture normally has higher frequency than text does. The smoothing operation also removes speckle noise. In the second step, the intensity histogram of the smoothed image is computed and a threshold is automatically selected as follows. For black text, the first peak of the histogram corresponds to text. Thresholding the image at the value of the valley between the first and second peaks of the histogram binarizes the image well. In order to reliably identify the valley, the histogram is smoothed by a low-pass filter before the threshold is computed. The algorithm has been applied to some 50 images from a wide variety of sources: digitized video frames, photos, newspapers, advertisements in magazines or sales flyers, personal checks, etc. There are 21820 characters and 4406 words in these images. 91 percent of the characters and 86 percent of the words were successfully cleaned up and binarized. A commercial OCR was applied to the binarized text when it consisted of fonts which were OCR recognizable. The recognition rate was 84 percent for the characters and 77 percent for the words.
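The valley-seeking threshold described here is simple enough to sketch. The following Python fragment (NumPy/SciPy) smooths the image and its histogram with low-pass filters, then thresholds at the valley between the first two histogram peaks; the smoothing sigmas are assumed values, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def valley_threshold(gray, smooth_sigma=1.5, hist_sigma=2.0):
    """Binarize at the valley between the first two histogram peaks,
    after low-pass smoothing (a sketch of the idea, not the paper's code)."""
    smoothed = gaussian_filter(gray.astype(float), smooth_sigma)
    hist, _ = np.histogram(smoothed, bins=256, range=(0, 255))
    hist = gaussian_filter1d(hist.astype(float), hist_sigma)  # smooth the histogram
    # local maxima of the smoothed histogram
    peaks = [i for i in range(1, 255)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    if len(peaks) < 2:
        return smoothed > smoothed.mean()  # degenerate fallback
    p1, p2 = peaks[0], peaks[1]
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))  # valley between the peaks
    return smoothed > valley  # True = background, assuming black text
```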
Writer identification on historical Glagolitic documents
NASA Astrophysics Data System (ADS)
Fiel, Stefan; Hollaus, Fabian; Gau, Melanie; Sablatnig, Robert
2013-12-01
This work aims at automatically identifying the scribes of historical Slavonic manuscripts. The quality of the ancient documents is partially degraded by faded-out ink or varying background. The writer identification method used is based on local image features described with the Scale Invariant Feature Transform (SIFT). A visual vocabulary is used for the description of handwriting characteristics, whereby the features are clustered using a Gaussian mixture model and encoded with the Fisher kernel. The writer identification approach was originally designed for grayscale images of modern handwriting. However, contrary to modern documents, the historical manuscripts are partially corrupted by background clutter and water stains. As a result, SIFT features are also found on the background. Since the method also shows good results on binarized images of modern handwriting, the approach was additionally applied to binarized images of the ancient writings. Experiments show that this preprocessing step leads to a significant performance increase: the identification rate on binarized images is 98.9%, compared to an identification rate of 87.6% gained on grayscale images.
Document image binarization using "multi-scale" predefined filters
NASA Astrophysics Data System (ADS)
Saabni, Raid M.
2018-04-01
Reading text or searching for keywords within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground such as text, figures, and drawings from the background. Successful results of this important step can in many cases determine whether the next steps succeed or fail, so it is vital to the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which mostly cause varying contrast, stains, dirt, and ink seeping from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Using the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks with various scales, weights, and orientations to seek a compact set of filters and weights that generates different layers of foreground and background. The results of convolving these filters locally on the gray-level image are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image, and a learning process, we present an improved binarization algorithm to separate the background from the layers of foreground. Layers of foreground that may be caused by seeping ink, degradation, or other factors are also separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013, and H-DIBCO2016 data sets and a collection of images taken from real historical documents.
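As a rough illustration of the filter-bank idea, the sketch below substitutes standard Gabor kernels for the paper's predefined anisotropic filters and accumulates the strongest local response across scales and orientations; all parameter values are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def filter_bank_enhance(gray, scales=(7, 15, 31), n_orient=8):
    """Enhance strokes with a bank of oriented filters at several scales:
    convolve locally and keep the strongest response per pixel."""
    g = gray.astype(np.float32) / 255.0
    acc = np.zeros_like(g)
    for ksize in scales:
        for i in range(n_orient):
            theta = np.pi * i / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma=0.3 * ksize,
                                      theta=theta, lambd=0.8 * ksize,
                                      gamma=0.4, psi=0)
            resp = cv2.filter2D(g, cv2.CV_32F, kern)
            acc = np.maximum(acc, resp)  # strongest response across the bank
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```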
Robust binarization of degraded document images using heuristics
NASA Astrophysics Data System (ADS)
Parker, Jon; Frieder, Ophir; Frieder, Gideon
2013-12-01
Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated image enhancement method that is independent of the input page and requires no training data. The approach applies to color or greyscale images with handwritten script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure - doing so with a significantly lower variance than the top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at the Yad Vashem Holocaust Memorial in Israel.
Combining multiple thresholding binarization values to improve OCR output
NASA Astrophysics Data System (ADS)
Lund, William B.; Kennard, Douglas J.; Ringger, Eric K.
2013-01-01
For noisy, historical documents, a high optical character recognition (OCR) word error rate (WER) can render the OCR text unusable. Since image binarization is often the method used to identify foreground pixels, a body of research seeks to improve image-wide binarization directly. Instead of relying on any one imperfect binarization technique, our method incorporates information from multiple simple thresholding binarizations of the same image to improve text output. Using a new corpus of 19th century newspaper grayscale images for which the text transcription is known, we observe WERs of 13.8% and higher using current binarization techniques and a state-of-the-art OCR engine. Our novel approach combines the OCR outputs from multiple thresholded images by aligning the text output and producing a lattice of word alternatives from which a lattice word error rate (LWER) is calculated. Our results show an LWER of 7.6% when aligning two thresholded images and an LWER of 6.8% when aligning five. From the word lattice we commit to one hypothesis by applying the methods of Lund et al. (2011), achieving an improvement over the original OCR output and an 8.41% WER result on this data set.
Real-time text extraction based on the page layout analysis system
NASA Astrophysics Data System (ADS)
Soua, M.; Benchekroun, A.; Kachouri, R.; Akil, M.
2017-05-01
Several approaches have been proposed to extract text from scanned documents. However, text extraction in heterogeneous documents remains a real challenge. Indeed, text extraction in this context is a difficult task because of the variation of the text due to differences in size, style, and orientation, as well as the complexity of the document region background. Recently, we proposed the improved hybrid binarization based on K-means method (I-HBK) to suitably extract text from heterogeneous documents. In this method, the Page Layout Analysis (PLA), part of the Tesseract OCR engine, is used to identify text and image regions. Afterwards, our hybrid binarization is applied separately to each kind of region. On one side, gamma correction is employed before processing image regions. On the other side, binarization is performed directly on text regions. Then, a foreground and background color study is performed to correct inverted region colors. Finally, characters are located in the binarized regions based on the PLA algorithm. In this work, we extend the integration of the PLA algorithm within the I-HBK method. In addition, to speed up the text and image separation step, we employ an efficient GPU acceleration. Through the performed experiments, we demonstrate the high F-measure accuracy of the PLA algorithm, reaching 95% on the LRDE dataset. In addition, we compare the sequential and parallel PLA versions. The obtained results give a speedup of 3.7x when comparing the parallel PLA implementation on a GPU GTX 660 to the CPU version.
Binarization algorithm for document image with complex background
NASA Astrophysics Data System (ADS)
Miao, Shaojun; Lu, Tongwei; Min, Feng
2015-12-01
The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, a background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average threshold value, and then edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the final stroke width, the window size is calculated, and then a local threshold estimation approach binarizes the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex background and varying light.
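The background-approximation step can be sketched as a 2-D polynomial least-squares fit; the degree and coordinate normalization below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_background(gray, degree=3):
    """Approximate a smoothly varying background by least-squares
    fitting a 2-D polynomial to the image intensities."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w                       # normalized coordinates
    y = yy.ravel() / h
    z = gray.ravel().astype(float)
    # design matrix with all monomials x^i * y^j, i + j <= degree
    cols = [(x ** i) * (y ** j)
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return (A @ coeffs).reshape(h, w)

# subtracting the fitted background flattens uneven illumination:
# compensated = gray - fit_background(gray) + fit_background(gray).mean()
```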
An improved TV caption image binarization method
NASA Astrophysics Data System (ADS)
Jiang, Mengdi; Cheng, Jianghua; Chen, Minghui; Ku, Xishu
2018-04-01
TV video caption image binarization has an important influence on semantic video retrieval. An improved binarization method for caption images is proposed in this paper. In order to overcome the ghost and broken-stroke problems of the traditional Niblack method, the proposed method considers both the global and the local information of the image. First, traditional Otsu and Niblack thresholds are used for initial binarization. Second, we introduce the difference between the maximum and minimum values in the local window as a third threshold to generate two images. Finally, the two images are combined with a logical AND operation to obtain the result. The experimental results prove that the proposed method is reliable and effective.
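A minimal sketch of the combination scheme, assuming a box-filter Niblack implementation and illustrative values for the window size, k, and the max-min contrast cutoff; it is not the authors' exact formulation.

```python
import cv2
import numpy as np

def otsu_and_niblack(gray, win=25, k=-0.2, min_range=30):
    """Combine a global Otsu mask and a local Niblack mask with a
    logical AND, plus a local max-min range check for flat windows."""
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    g = gray.astype(np.float64)
    mean = cv2.boxFilter(g, -1, (win, win))
    sq_mean = cv2.boxFilter(g * g, -1, (win, win))
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    niblack = (g > mean + k * std).astype(np.uint8) * 255   # local Niblack mask
    # third threshold: suppress windows whose local max-min range is small
    local_max = cv2.dilate(gray, np.ones((win, win), np.uint8))
    local_min = cv2.erode(gray, np.ones((win, win), np.uint8))
    contrast = (local_max.astype(int) - local_min.astype(int)) >= min_range
    return otsu & niblack & (contrast.astype(np.uint8) * 255)
```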
A Novel Binarization Algorithm for Ballistics Firearm Identification
NASA Astrophysics Data System (ADS)
Li, Dongguang
The identification of ballistics specimens from imaging systems is of paramount importance in criminal investigation. Binarization plays a key role in the preprocessing for recognizing cartridges in ballistic imaging systems. Unfortunately, it is very difficult to obtain a satisfactory binary image using existing binarization algorithms. In this paper, we utilize global and local thresholds to enhance the image binarization. Importantly, we present a novel criterion for effectively detecting edges in the images. Comprehensive experiments have been conducted on sample ballistic images. The empirical results demonstrate that the proposed method provides a better solution than existing binarization algorithms.
Adaptive target binarization method based on a dual-camera system
NASA Astrophysics Data System (ADS)
Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing
2018-01-01
An adaptive target binarization method based on a dual-camera system containing two dynamic vision sensors is proposed. First, a denoising preprocessing procedure is introduced to remove the noise events generated by the sensors. Second, the complete edge of the target is retrieved and represented by events based on an event mosaicking method. Third, the region of the target is confirmed by an event-to-event matching method. Finally, a postprocessing procedure of morphological opening and closing operations is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots, and successfully compared with other well-known binarization methods. The experimental results, based on visual and misclassification error criteria, show that the proposed method performs well and is more robust in the binarization of degraded images.
Degraded Chinese rubbing images thresholding based on local first-order statistics
NASA Astrophysics Data System (ADS)
Wang, Fang; Hou, Ling-Ying; Huang, Han
2017-06-01
Thresholding is a necessary step for Chinese character segmentation from degraded document images in Optical Character Recognition (OCR); however, it is challenging due to the various kinds of noise in such images. In this paper, we present three local first-order statistics methods for adaptive thresholding that segment text from non-text in Chinese rubbing images. The segmentation results were assessed both by visual inspection and numerically. In experiments, the methods obtained better results than classical techniques in the binarization of real Chinese rubbing images and the PHIBD 2012 dataset.
Automatic and efficient methods applied to the binarization of a subway map
NASA Astrophysics Data System (ADS)
Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan
2015-12-01
The purpose of this paper is the study of efficient methods for image binarization. The objective of the work is the binarization of metro maps: the goal is to binarize while preventing noise from disturbing the reading of subway stations. Different methods have been tested; among them, the method given by Otsu gives particularly interesting results. The difficulty of binarization is the choice of the threshold, so that the reconstructed image sticks as closely as possible to reality. Vectorization is a step subsequent to binarization. It consists of retrieving the coordinates of the points containing information and storing them in two matrices X and Y. Subsequently, these matrices can be exported to a 'CSV' (Comma Separated Value) file format, enabling us to handle them in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested "for" loops, and "for" loops are poorly supported by Matlab, especially when nested. This penalizes the computation time, but it seems the only straightforward way to proceed.
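The vectorization step need not use nested loops: a single vectorized call collects the foreground coordinates in one pass (in MATLAB, [Y, X] = find(binary) plays the same role as np.nonzero below). A minimal Python sketch, with the CSV file name as an illustrative choice:

```python
import numpy as np

def vectorize_binary(binary, path="points.csv"):
    """Collect the coordinates of foreground pixels into X and Y arrays
    and export them as CSV, without explicit nested loops."""
    ys, xs = np.nonzero(binary)          # one vectorized pass over the image
    np.savetxt(path, np.column_stack([xs, ys]),
               fmt="%d", delimiter=",", header="x,y", comments="")
    return xs, ys
```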
Recognizing characters of ancient manuscripts
NASA Astrophysics Data System (ADS)
Diem, Markus; Sablatnig, Robert
2010-02-01
Considering printed Latin text, the main issues of Optical Character Recognition (OCR) systems are solved. However, for degraded handwritten document images, basic preprocessing steps such as binarization gain poor results with state-of-the-art methods. In this paper, ancient Slavonic manuscripts from the 11th century are investigated. In order to minimize the consequences of false character segmentation, a binarization-free approach based on local descriptors is proposed. Additionally, local information allows the recognition of partially visible or washed-out characters. The proposed algorithm consists of two steps: character classification and character localization. Initially, Scale Invariant Feature Transform (SIFT) features are extracted and subsequently classified using Support Vector Machines (SVM). Afterwards, the interest points are clustered according to their spatial information. Thereby, characters are localized and finally recognized based on a weighted voting scheme of pre-classified local descriptors. Preliminary results show that the proposed system can handle highly degraded manuscript images with background clutter (e.g. stains, tears) and faded-out characters.
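A toy sketch of the classify-then-vote idea: local descriptors are extracted per character patch, an SVM is trained on individual descriptors, and a patch is recognized by accumulating per-descriptor probability votes. It omits the spatial clustering stage, and all function names are illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_descriptors(gray_patch):
    """SIFT description of a character patch (detector-based here)."""
    _, desc = sift.detectAndCompute(gray_patch, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def train_classifier(patches, labels):
    X, y = [], []
    for patch, label in zip(patches, labels):
        for d in sift_descriptors(patch):
            X.append(d)
            y.append(label)                  # one label per local descriptor
    return SVC(probability=True).fit(np.array(X), np.array(y))

def classify_patch(clf, patch):
    """Weighted vote over the patch's pre-classified local descriptors."""
    desc = sift_descriptors(patch)
    if len(desc) == 0:
        return None
    votes = clf.predict_proba(desc).sum(axis=0)  # accumulate descriptor votes
    return clf.classes_[int(np.argmax(votes))]
```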
Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei
2018-02-01
Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphological mapping. In this paper, an intelligent edge identification method is proposed and image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated from the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering, and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot manage the complicated, uneven seafloor, and therefore a binarization method is proposed that is based on the difference between image pixel values; the appropriate threshold for image binarization is estimated according to the probability distribution of pixel value differences between two adjacent pixels in the horizontal and vertical directions, respectively. Finally, an eight-neighborhood frame is adopted to thin the binary image, connect the intermittent edges, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids the use of subjective judgment, and reduces time and labor costs.
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
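A minimal sketch of the iterative threshold search: a linear triangular membership around each class mean, bounded by the image's minimum and maximum gray levels, and a Shannon-style fuzzy entropy minimized over candidate thresholds. The exact membership and entropy forms in the patent may differ; these are assumptions for illustration.

```python
import numpy as np

def fuzzy_entropy_threshold(gray):
    """Pick the threshold minimizing a fuzzy entropy measure, with
    membership bounds at the image's min and max gray levels."""
    g = gray.ravel().astype(float)
    lo, hi = g.min(), g.max()
    best_t, best_h = None, np.inf
    for t in np.arange(lo + 1, hi):
        mu0, mu1 = g[g <= t].mean(), g[g > t].mean()   # class means
        # membership: 1 at the own-class mean, falling off linearly
        m = np.where(g <= t,
                     1.0 - np.abs(g - mu0) / (hi - lo),
                     1.0 - np.abs(g - mu1) / (hi - lo))
        m = np.clip(m, 1e-9, 1 - 1e-9)
        h = np.mean(-m * np.log(m) - (1 - m) * np.log(1 - m))  # fuzzy entropy
        if h < best_h:
            best_t, best_h = t, h
    return best_t
```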
Image Processing for Binarization Enhancement via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor)
2009-01-01
A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
NASA Astrophysics Data System (ADS)
Takemine, S.; Rikimaru, A.; Takahashi, K.
Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of rice. The height of the plant, the number of stems, and the color of the leaves are well-known parameters that indicate rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis was proposed, based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, for a vegetation cover rate calculation method depending on the automatic binarization process, there is a possibility that the vegetation cover rate decreases against the growth of rice. In this paper, a calculation method for the vegetation cover rate is proposed which is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.
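Vegetation cover rate from a nadir photo reduces to the fraction of plant pixels after binarization. The sketch below assumes an excess-green index with an automatic Otsu threshold, a common plant/soil separation choice that is not necessarily the paper's.

```python
import cv2
import numpy as np

def cover_rate(bgr):
    """Vegetation cover rate = fraction of plant pixels in a nadir photo.
    Plant/soil separation via excess-green + Otsu (assumed for illustration)."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2 * g - r - b                               # excess-green index
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, plant = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return float(np.count_nonzero(plant)) / plant.size
```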
Segmenting texts from outdoor images taken by mobile phones using color features
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Recognizing text from images taken by mobile phones with low resolution has wide applications. It has been shown that a good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization, and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose the Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on Abbyy's FineReader, one of the most popular commercial OCR engines on the market.
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method supersedes the performance of Huang-Wang and Otsu methods when the image consists of textured background and poor printing quality. The three methods perform well but yield different binarization approaches if the background and foreground of the image have well-separated gray-level ranges.
A method of detection to the grinding wheel layer thickness based on computer vision
NASA Astrophysics Data System (ADS)
Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong
2018-01-01
This paper proposes a method of detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circle. Forward lighting and back lighting are used to enable a clear image to be acquired. Image processing is then executed on the captured images, consisting of image preprocessing, binarization, and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer can finally be calculated. Compared with methods usually used to detect grinding wheel wear, the method in this paper can directly and quickly obtain the thickness information. The eccentricity error and the error of pixel equivalent are also discussed in this paper.
NASA Astrophysics Data System (ADS)
Attallah, Bilal; Serir, Amina; Chahir, Youssef; Boudjelal, Abdelwahhab
2017-11-01
Palmprint recognition systems are dependent on feature extraction. A method of feature extraction using higher discrimination information was developed to characterize palmprint images. In this method, two individual feature extraction techniques are applied to a discrete wavelet transform of a palmprint image, and their outputs are fused. The two techniques used in the fusion are the histogram of gradient and the binarized statistical image features. They are then evaluated using an extreme learning machine classifier before selecting a feature based on principal component analysis. Three palmprint databases, the Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database, Hong Kong PolyU Palmprint Database II, and the Delhi Touchless (IIDT) Palmprint Database, are used in this study. The study shows that our method effectively identifies and verifies palmprints and outperforms other methods based on feature extraction.
NASA Astrophysics Data System (ADS)
Zhai, Xiaojun; Bensaali, Faycal; Sotudeh, Reza
2013-01-01
Number plate (NP) binarization and adjustment are important preprocessing stages in automatic number plate recognition (ANPR) systems and are used to link the number plate localization (NPL) and character segmentation stages. Successfully linking these two stages will improve the performance of the entire ANPR system. We present two optimized low-complexity NP binarization and adjustment algorithms. Efficient area/speed architectures based on the proposed algorithms are also presented and have been successfully implemented and tested using the Mentor Graphics RC240 FPGA development board, which together require only 9% of the available on-chip resources of a Virtex-4 FPGA, run with a maximum frequency of 95.8 MHz and are capable of processing one image in 0.07 to 0.17 ms.
Axial segmentation of lungs CT scan images using canny method and morphological operation
NASA Astrophysics Data System (ADS)
Noviana, Rina; Febriani, Rasal, Isram; Lubis, Eva Utari Cintamurni
2017-08-01
Segmentation is a very important topic in digital image processing. It is found in various fields of image analysis, particularly within the medical imaging field. Axial segmentation of lung CT scans is beneficial in the diagnosis of abnormalities and in surgery planning. It makes it possible to examine every section within the lungs. The results of the segmentation can be used to discover the presence of nodules. The methods utilized in this analysis are image cropping, image binarization, Canny edge detection, and morphological operations. Image cropping is done in order to separate the lung areas, which are the region of interest (ROI). The binarization method generates a binary image with two grey-level values, black and white, separating the ROI from the rest of the lung CT scan image. The Canny method is used for edge detection. A morphological operation is applied to smooth the lung edges. The segmentation method shows a good result: it obtains a very smooth edge. Moreover, the image background can also be removed in order to keep the main focus, the lungs.
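The four-step pipeline maps directly onto standard OpenCV calls; the crop box, Canny limits, and kernel size below are assumed values, and this is a sketch of the described sequence rather than the authors' implementation.

```python
import cv2
import numpy as np

def segment_lungs(slice_gray, crop_box):
    """Sketch of the described pipeline: crop, binarize, Canny edges,
    then morphological smoothing. crop_box = (x, y, w, h) is assumed known."""
    x, y, w, h = crop_box
    roi = slice_gray[y:y + h, x:x + w]                     # image cropping
    _, binary = cv2.threshold(roi, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)                     # Canny edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    smooth = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # smooth the edge
    return smooth
```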
Document Form and Character Recognition using SVM
NASA Astrophysics Data System (ADS)
Park, Sang-Sung; Shin, Young-Geun; Jung, Won-Kyo; Ahn, Dong-Kyu; Jang, Dong-Sik
2009-08-01
With the development of computers and information communication, EDI (Electronic Data Interchange) has been advancing. OCR (Optical Character Recognition) is a pattern recognition technology used for EDI. OCR has contributed to automating much work that used to be done manually. However, to build a more complete document database, much manual work is still needed to exclude unnecessary recognition results. To resolve this problem, we propose a document-form-based character recognition method in this study. The proposed method is divided into a document form recognition part and a character recognition part. In particular, in the character recognition part, characters are converted into binary form and recognized using the SVM algorithm to extract more accurate feature values.
Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices
NASA Astrophysics Data System (ADS)
Sentana, I. W. B.; Jawas, N.; Asri, S. A.
2018-01-01
A lung nodule is an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Data acquisition takes the image slice by slice from the original *.dicom format, and each image slice is then converted into the *.tif image format. Binarization, employing the Otsu algorithm, then separates the background and foreground parts of each image slice. After removing the background part, the next step is to segment the lung area only, so that nodules can be localized more easily. The Otsu algorithm is used once again to detect nodule blobs in the localized lung area. The final step employs a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting nearly round nodules above a certain size threshold. The detection results show drawbacks regarding the size threshold and the shape of nodules, which need to be enhanced in the next part of the research. The algorithm also cannot detect nodules attached to the lung wall or lung channel, since the search depends only on colour differences.
Development of OCR system for portable passport and visa reader
NASA Astrophysics Data System (ADS)
Visilter, Yury V.; Zheltov, Sergey Y.; Lukin, Anton A.
1999-01-01
Modern passport and visa documents include special machine-readable zones that satisfy the ICAO standards. This makes it possible to develop automatic passport and visa readers. However, there are some special problems in such OCR systems: low resolution of character images captured by a CCD camera (down to 150 dpi), essential shifts and slopes (up to 10 degrees), rich paper texture under the character symbols, and non-homogeneous illumination. This paper presents the structure and some special aspects of an OCR system for a portable passport and visa reader. In our approach, the binarization procedure is performed after the segmentation step and is applied to each character site separately. The character recognition procedure uses the structural information of the machine-readable zone. Special algorithms are developed for machine-readable zone extraction and character segmentation.
A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-01-01
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
Asymmetric distances for binary embeddings.
Gordo, Albert; Perronnin, Florent; Gong, Yunchao; Lazebnik, Svetlana
2014-01-01
In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes that binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances that are applicable to a wide variety of embedding techniques including locality sensitive hashing (LSH), locality sensitive binary codes (LSBC), spectral hashing (SH), PCA embedding (PCAE), PCAE with random rotations (PCAE-RR), and PCAE with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.
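The asymmetric idea is compact enough to show on toy data: database signatures are binarized, the query is not, and each code is expanded back to per-bit anchors before a Euclidean comparison. The expected-value reconstruction below is an illustrative choice, not the paper's exact distances.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy database: N real-valued signatures, binarized per dimension at the median
db = rng.normal(size=(1000, 32))
th = np.median(db, axis=0)
codes = db > th                                      # binary database codes

# database-side anchors: mean value of each dimension on each side of its
# threshold (an expected-value reconstruction, assumed for illustration)
lo = np.array([db[~codes[:, j], j].mean() for j in range(db.shape[1])])
hi = np.array([db[codes[:, j], j].mean() for j in range(db.shape[1])])

def asymmetric_dist(query, codes):
    """Real-valued query vs. binary codes: each code is expanded to its
    per-bit anchors and compared with a Euclidean distance."""
    proxy = np.where(codes, hi, lo)                  # (N, D) reconstruction
    return np.linalg.norm(proxy - query, axis=1)

q = rng.normal(size=32)                              # the query is NOT binarized
print(asymmetric_dist(q, codes).argmin())            # nearest database item
```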
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis disease. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy patients and multiple sclerosis disease.
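A sketch of the evaluation chain: binarize at a fixed threshold, use the white-pixel count as the clustering feature, and build the dendrogram with SciPy. The linkage method is an assumed choice; the paper's reported thresholds are T = 80 (PD) and T = 30 (T2w).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def white_area(gray, threshold):
    """Number of white pixels after binarizing at a fixed threshold."""
    return int(np.count_nonzero(gray > threshold))

def cluster_by_area(images, T):
    """images: list of 2-D arrays; T: the modality-specific threshold."""
    feats = np.array([[white_area(im, T)] for im in images], dtype=float)
    Z = linkage(feats, method="average")   # inter-class distances
    return Z  # pass to dendrogram(Z) for plotting
```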
Qualitative and quantitative interpretation of SEM image using digital image processing.
Saladra, Dawid; Kopernik, Magdalena
2016-10-01
The aim of this study is the improvement of qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program which enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension, and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu, and reverse binarization. Several modifications of known image processing techniques and combinations of the selected techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables the computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program with existing digital image processing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Mobile-based text recognition from water quality devices
NASA Astrophysics Data System (ADS)
Dhakal, Shanti; Rahnemoonfar, Maryam
2015-03-01
Measuring the water quality of bays, estuaries, and gulfs is a complicated and time-consuming process. The YSI Sonde is an instrument used to measure water quality parameters such as pH, temperature, salinity, and dissolved oxygen. This instrument is taken to water bodies on a boat trip, and researchers note down the different parameters shown on the instrument's display monitor. In this project, a mobile application is developed for the Android platform that allows a user to take a picture of the YSI Sonde monitor, extract text from the image, and store it in a file on the phone. The image captured by the application is first processed to remove perspective distortion. The probabilistic Hough line transform is used to identify lines in the image, and the corners of the image are then obtained by determining the intersections of the detected horizontal and vertical lines. The image is warped using the perspective transformation matrix, obtained from the corner points of the source image and the destination image, hence removing the perspective distortion. The mathematical morphology black-hat operation is used to correct the shading of the image. The image is binarized using Otsu's binarization technique and is then passed to Optical Character Recognition (OCR) software for character recognition. The extracted information is stored in a file on the phone and can be retrieved later for analysis. The algorithm was tested on 60 different images of the YSI Sonde with different perspective features and shading. Experimental results, in comparison to ground-truth results, demonstrate the effectiveness of the proposed method.
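The preprocessing chain (perspective removal, black-hat shading correction, Otsu binarization) maps onto standard OpenCV calls. A minimal sketch; the output size and structuring-element size are assumptions, and corner detection is taken as given.

```python
import cv2
import numpy as np

def rectify_and_binarize(img, corners, out_w=640, out_h=360):
    """Warp away perspective distortion, correct shading with black-hat,
    then Otsu-binarize. `corners` are the four detected monitor corners
    in (tl, tr, br, bl) order."""
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(img, M, (out_w, out_h))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (31, 31))
    flat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # dark text pops
    _, binary = cv2.threshold(flat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary  # hand this to the OCR engine
```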
Lane Marking Detection and Reconstruction with Line-Scan Imaging Data.
Li, Lin; Luo, Wenting; Wang, Kelvin C P
2018-05-20
Lane marking detection and localization are crucial for autonomous driving and lane-based pavement surveys. Numerous studies have been done to detect and locate lane markings with the purpose of advanced driver assistance systems, in which image data are usually captured by vision-based cameras. However, a limited number of studies have been done to identify lane markings using high-resolution laser images for road condition evaluation. In this study, the laser images are acquired with a digital highway data vehicle (DHDV). Subsequently, a novel methodology is presented for the automated lane marking identification and reconstruction, and is implemented in four phases: (1) binarization of the laser images with a new threshold method (multi-box segmentation based threshold method); (2) determination of candidate lane markings with closing operations and a marching square algorithm; (3) identification of true lane marking by eliminating false positives (FPs) using a linear support vector machine method; and (4) reconstruction of the damaged and dash lane marking segments to form a continuous lane marking based on geometry features such as adjacent lane marking location and lane width. Finally, a case study is given to validate the effects of the novel methodology. The findings indicate the new strategy is robust in image binarization and lane marking localization. This study would be beneficial in road lane-based pavement condition evaluation such as lane-based rutting measurement and crack classification.
Segmentation of financial seals and its implementation on a DSP-based system
NASA Astrophysics Data System (ADS)
He, Jin; Liu, Tiegen; Guo, Jingjing; Zhang, Hao
2009-11-01
Automatic seal imprint identification is an important part of modern financial security. Accurate segmentation is the basis of correct identification. In this paper, a DSP (digital signal processor) based identification system was designed, and an adaptive algorithm was proposed to extract binary seal images from financial instruments. As the kernel of the identification system, a TMS320DM642 DSP chip was used to implement the image processing and to control and coordinate the work of each system module. The proposed algorithm consists of three stages: extraction of the grayscale seal image, denoising, and binarization. A grayscale seal image is extracted by a color transform from a financial instrument image. Adaptive morphological operations are used to highlight details of the extracted grayscale seal image and smooth the background. After median filtering for noise elimination, the filtered seal image is binarized by Otsu's method. The algorithm was developed in the DSP development environment CCS and the real-time operating system DSP/BIOS. To simplify the implementation of the proposed algorithm, the calibration of white balance and the coarse positioning of the seal imprint were implemented by the TMS320DM642 controlling the image acquisition. The IMGLIB of the TMS320DM642 was used for efficiency improvement. The experimental results showed that financial seal imprints, even those with intricate and dense strokes, can be correctly segmented by the proposed algorithm. Adhesion and incompleteness distortions in the segmentation results were reduced, even when the original seal imprint was of poor quality.
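A rough sketch of the three-stage segmentation. The red-emphasis color transform for the seal ink is an assumption chosen for illustration; the paper's exact transform is not specified here.

```python
import cv2
import numpy as np

def extract_seal(bgr):
    """Three stages: a color transform emphasizing red seal ink,
    median denoising, then Otsu binarization."""
    b, g, r = cv2.split(bgr.astype(np.int16))
    redness = np.clip(r - (g + b) // 2, 0, 255).astype(np.uint8)  # seal ink map
    denoised = cv2.medianBlur(redness, 5)                          # remove speckle
    _, seal = cv2.threshold(denoised, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return seal
```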
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and proved better than them.
Marinozzi, Franco; Bini, Fabiano; Marinozzi, Andrea; Zuppante, Francesca; De Paolis, Annalisa; Pecci, Raffaella; Bedini, Rossella
2013-01-01
Micro-CT analysis is a powerful technique for a non-invasive evaluation of the morphometric parameters of trabecular bone samples. This elaboration requires a prior binarization of the images. A problem which arises from the binarization process is the partial volume artifact: voxels at the external surface of the sample can contain both bone and air, so thresholding produces an incorrect estimation of the volume occupied by the two materials. The aim of this study is the extraction of bone volumetric information directly from the image histograms, by fitting them with a suitable set of functions. Nineteen trabecular bone samples were extracted from the femoral heads of eight patients undergoing hip arthroplasty surgery. The trabecular bone samples were acquired using a micro-CT scanner. Histograms of the acquired images were computed and fitted by Gaussian-like functions accounting for: a) the gray levels produced by the bone x-ray absorption, b) the portions of the image occupied by air, and c) voxels that contain a mixture of bone and air. This latter contribution can be considered an estimation of the partial volume effect. The comparison of the proposed technique to the bone volumes measured by a reference instrument, a helium pycnometer, shows that the method is a good way to accurately calculate the bone volume of trabecular bone samples.
Automatic target detection using binary template matching
NASA Astrophysics Data System (ADS)
Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook
2005-03-01
This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various light conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
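A minimal sketch of binary template matching behind an adaptive binarization front end. The block size, offset, and score threshold are illustrative values, and normalized cross-correlation stands in for whatever matching score the paper uses.

```python
import cv2
import numpy as np

def detect_targets(gray, template_bin, score_thresh=0.6):
    """Adaptive binarization of the scene, then binary template matching;
    returns candidate (x, y, w, h) boxes above the score threshold."""
    scene_bin = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY, 31, 5)
    scores = cv2.matchTemplate(scene_bin, template_bin, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= score_thresh)
    h, w = template_bin.shape
    return [(x, y, w, h) for x, y in zip(xs, ys)]
```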
Yin, Xiaoxia; Ng, Brian W-H; He, Jing; Zhang, Yanchun; Abbott, Derek
2014-01-01
In this paper, we demonstrate a comprehensive method for segmenting the retinal vasculature in camera images of the fundus. This is of interest in the area of diagnostics for eye diseases that affect the blood vessels in the eye. In a departure from other state-of-the-art methods, vessels are first pre-grouped together with graph partitioning, using a spectral clustering technique based on morphological features. Local curvature is estimated over the whole image using eigenvalues of Hessian matrix in order to enhance the vessels, which appear as ridges in images of the retina. The result is combined with a binarized image, obtained using a threshold that maximizes entropy, to extract the retinal vessels from the background. Speckle type noise is reduced by applying a connectivity constraint on the extracted curvature based enhanced image. This constraint is varied over the image according to each region's predominant blood vessel size. The resultant image exhibits the central light reflex of retinal arteries and veins, which prevents the segmentation of whole vessels. To address this, the earlier entropy-based binarization technique is repeated on the original image, but crucially, with a different threshold to incorporate the central reflex vessels. The final segmentation is achieved by combining the segmented vessels with and without central light reflex. We carry out our approach on DRIVE and REVIEW, two publicly available collections of retinal images for research purposes. The obtained results are compared with state-of-the-art methods in the literature using metrics such as sensitivity (true positive rate), selectivity (false positive rate) and accuracy rates for the DRIVE images and measured vessel widths for the REVIEW images. Our approach out-performs the methods in the literature. PMID:24781033
NASA Astrophysics Data System (ADS)
Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab
2017-01-01
Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints for personal recognition systems are merged, introducing a local image descriptor for 2-D palmprint-based recognition systems named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying BSIF descriptors of different sizes with length 12) are concatenated into one to produce a large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied to reconstruct illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from the SQI images, Gabor wavelets are defined and used. Indeed, dimensionality reduction methods have shown their ability in biometric systems; given this, a principal component analysis (PCA) + linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. The results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI + Gabor wavelets + PCA + LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.
Optimal sampling with prior information of the image geometry in microfluidic MRI.
Han, S H; Cho, H; Paulsen, J L
2015-03-01
Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of the partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that optimal distribution is not robustly determined by maximizing the acquired signal but from interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
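The conventional prescription that this work improves upon, biasing samples toward the high-signal-energy portions of the binarized geometry's k-space representation, can be sketched as below. This is a simplified illustration only; the paper's near-optimal design, based on the marginal change of acquired signal with sub-sampling rate, is not reproduced:

```python
import numpy as np

def signal_weighted_sampling(geometry, n_samples, seed=None):
    """Draw a weighted-random k-space sampling mask whose probability
    density is proportional to the spectral magnitude of the binarized
    image geometry (the conventional energy-biased scheme)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(geometry.astype(float)))
    weights = np.abs(kspace).ravel()
    pdf = weights / weights.sum()
    idx = rng.choice(weights.size, size=n_samples, replace=False, p=pdf)
    mask = np.zeros(weights.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(geometry.shape)
```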
An Approach towards Ultrasound Kidney Cysts Detection using Vector Graphic Image Analysis
NASA Astrophysics Data System (ADS)
Mahmud, Wan Mahani Hafizah Wan; Supriyanto, Eko
2017-08-01
This study develops a new approach for the detection of cysts in kidney ultrasound images, for both single-cyst and multiple-cyst cases. 50 single-cyst images and 25 multiple-cyst images were used to test the developed algorithm. The steps involved in developing this algorithm were vector graphic image formation and analysis, thresholding, binarization, filtering and a roundness test. Performance evaluation gave an accuracy of 92% on the 50 single-cyst images and about 86.89% on the 25 multiple-cyst images. This algorithm may be used in developing a computerized system, such as a computer-aided diagnosis system, to help medical experts in the diagnosis of kidney cysts.
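A minimal sketch of a roundness test of the kind described, using the standard circularity measure 4πA/P² (the threshold value here is an illustrative assumption, not taken from the paper):

```python
import numpy as np
from skimage import measure

def round_regions(binary, min_roundness=0.8):
    """Keep labeled blobs whose circularity 4*pi*A/P^2 passes the test;
    the measure equals 1.0 for a perfect circle and drops for elongated
    or ragged shapes. min_roundness = 0.8 is an illustrative value."""
    labels = measure.label(binary)
    kept = []
    for region in measure.regionprops(labels):
        if region.perimeter > 0:
            circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
            if circularity >= min_roundness:
                kept.append(region.label)
    return kept
```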
New Finger Biometric Method Using Near Infrared Imaging
Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul
2011-01-01
In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
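A hedged sketch of the LBP stage only, using scikit-image's `local_binary_pattern`; the paper's modified Gaussian high-pass filtering and the LDP stage are not reproduced here:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(finger_img, P=8, R=1.0):
    """Encode each pixel by thresholding its P circular neighbors at
    radius R against the center value, then histogram the codes.
    The 'uniform' variant yields P + 2 distinct code values."""
    codes = local_binary_pattern(finger_img, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```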
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR codes encode many kinds of information thanks to their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive thresholding method for text recognition. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
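For reference, Sauvola's adaptive threshold, which this work modifies, computes T = m·(1 + k·(s/R − 1)) from the local mean m and local standard deviation s in a window around each pixel. A minimal sketch with typical parameter values (not the paper's modified version):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(gray, w=15, k=0.2, R=128.0):
    """Sauvola adaptive thresholding: T = m * (1 + k * (s / R - 1)),
    with local mean m and local standard deviation s over a w x w window.
    w and k are typical defaults, not values from the paper."""
    g = gray.astype(np.float64)
    m = uniform_filter(g, w)
    s = np.sqrt(np.maximum(uniform_filter(g * g, w) - m * m, 0.0))
    T = m * (1.0 + k * (s / R - 1.0))
    return (g > T).astype(np.uint8)
```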
Miyata, Tomohiro; Mizoguchi, Teruyasu
2018-03-01
Understanding structures and spatial distributions of molecules in liquid phases is crucial for the control of liquid properties and to develop efficient liquid-phase processes. Here, real-space mapping of molecular distributions in a liquid was performed. Specifically, the ionic liquid 1-Ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide (C2mimTFSI) was imaged using atomic-resolution scanning transmission electron microscopy. Simulations revealed network-like bright regions in the images that were attributed to the TFSI- anion, with minimal contributions from the C2mim+ cation. Simple visualization of the TFSI- distribution in the liquid sample was achieved by binarizing the experimental image.
Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?
NASA Astrophysics Data System (ADS)
Jalili, Mahdi
2016-07-01
The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
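A minimal sketch of the MST binarization scheme, assuming a symmetric connectivity matrix with zero diagonal. Keeping the maximum spanning tree retains the strongest backbone connecting all N nodes with N − 1 edges and requires no threshold parameter:

```python
import numpy as np
import networkx as nx

def mst_binarize(W):
    """Binarize a weighted connectivity matrix by keeping only the
    maximum spanning tree of absolute connection strengths.
    Assumes W is symmetric with a zero diagonal."""
    G = nx.from_numpy_array(np.abs(W))
    T = nx.maximum_spanning_tree(G, weight='weight')
    A = np.zeros_like(W, dtype=np.uint8)
    for i, j in T.edges():
        A[i, j] = A[j, i] = 1
    return A
```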
Matsumoto, Yuji; Takaki, Yasuhiro
2014-06-15
Horizontally scanning holography can enlarge both screen size and viewing zone angle. A microelectromechanical-system spatial light modulator, which can generate only binary images, is used to generate hologram patterns. Thus, techniques to improve gray-scale representation in reconstructed images should be developed. In this study, the error diffusion technique was used for the binarization of holograms. When the Floyd-Steinberg error diffusion coefficients were used, gray-scale representation was improved. However, the linearity in the gray-scale representation was not satisfactory. We proposed the use of a correction table and showed that the linearity was greatly improved.
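A minimal sketch of the baseline Floyd-Steinberg error diffusion used to binarize the hologram patterns; the proposed correction table that improves gray-scale linearity is not reproduced here:

```python
import numpy as np

def floyd_steinberg_binarize(hologram):
    """Quantize each pixel to {0, 1} and diffuse the quantization error
    onto unprocessed neighbors with the Floyd-Steinberg weights
    7/16, 3/16, 5/16, 1/16. Input values are expected in [0, 1]."""
    img = hologram.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out
```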
PARALLAX AND ORBITAL EFFECTS IN ASTROMETRIC MICROLENSING WITH BINARY SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nucita, A. A.; Paolis, F. De; Ingrosso, G.
2016-06-01
In gravitational microlensing, binary systems may act as lenses or sources. Identifying lens binarity is generally easy, in particular in events characterized by caustic crossing, since the resulting light curve exhibits strong deviations from a smooth single-lensing light curve. In contrast, light curves with minor deviations from a Paczyński behavior do not allow one to identify the source binarity. A consequence of gravitational microlensing is the shift of the position of the multiple-image centroid with respect to the source star location, the so-called astrometric microlensing signal. When the astrometric signal is considered, the presence of a binary source manifests with a path that largely differs from that expected for single-source events. Here, we investigate the astrometric signatures of binary sources taking into account their orbital motion and the parallax effect due to the Earth's motion, which turn out not to be negligible in most cases. We also show that considering the above-mentioned effects is important in the analysis of astrometric data in order to correctly estimate the lens-event parameters.
Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes
NASA Astrophysics Data System (ADS)
Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung
2015-03-01
The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For a classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used the segmentation method by image binarization on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using the fuzzy logic system based on I and K inputs, which is less affected by eyelashes and shadows around the eye. The combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect by all the inference values on calculating the output score of the fuzzy system, we use the revised weighted average method, where all the rectangular regions by all the inference values are considered for calculating the output score. Fourth, the classification of eye openness or closure is successfully made by the proposed fuzzy-based method with eye images of low resolution which are captured in the environment of people watching TV at a distance. By using the fuzzy logic system, our method does not require the additional procedure of training irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
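A minimal sketch of extracting the two fuzzy-system inputs, I from the HSI color space and K from CMYK; the paper's membership functions, rule base and revised weighted average defuzzification are not reproduced here:

```python
import numpy as np

def i_and_k_channels(rgb):
    """Extract the inputs used by the fuzzy system from an 8-bit RGB
    image: intensity I from HSI (mean of R, G, B) and black K from
    CMYK (1 - max(R, G, B)), both scaled to [0, 1]."""
    x = rgb.astype(np.float64) / 255.0
    I = x.mean(axis=2)          # HSI intensity channel
    K = 1.0 - x.max(axis=2)     # CMYK black channel
    return I, K
```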
Dynamic Pore-Scale Imaging of Reactive Transport in Heterogeneous Carbonates at Reservoir Conditions
NASA Astrophysics Data System (ADS)
Menke, Hannah; Bijeljic, Branko; Andrew, Matthew; Blunt, Martin
2014-05-01
Sequestering carbon in deep geologic formations is one way of reducing anthropogenic CO2 emissions. Carbon Capture, Utilization, and Storage (CCUS) in carbonate reservoirs has the added benefit of mobilizing more oil for extraction, increasing oil reservoir yield, and generating revenue while also mitigating climate change. The magnitude, speed, and type of dissolution are dependent on the intrinsic properties of the rock. Understanding how small changes in the pore structure affect dissolution is paramount for successful predictive modelling both on the pore scale and for up-scaled reservoir simulations. We propose an experimental method whereby both 'Pink Beam' synchrotron radiation and a Micro-CT lab source are used in dynamic X-ray microtomography to investigate the pore structure changes in carbonate rocks of varying heterogeneity at high temperatures and pressures. Four carbonate rock types were studied: two relatively homogeneous carbonates, Ketton and Mt. Gambier, and two very heterogeneous carbonates, Estaillades and Portland Basebed. Each rock type was imaged under the same reservoir and flow conditions to gain insight into the impact of heterogeneity. A 4-mm carbonate core was injected with CO2-saturated brine at 10 MPa and 50 °C for 2 hours. Depending on sample heterogeneity and X-ray source, tomographic images were taken at between 30-second and 20-minute time-resolutions and a 4-micron spatial resolution during injection. Changes in porosity, permeability, and structure were obtained by first binning and filtering the images, then binarizing them with watershed segmentation, and finally extracting a pore/throat network. Furthermore, pore-scale flow modelling was performed directly on the binarized image and used to track velocity distributions as the pore network evolved. Significant differences in dissolution type and magnitude were found for each rock type. The most homogeneous carbonate, Ketton, was seen to have predominantly uniform dissolution with minor dissolution rate differences between the pores and pore throats. This was not true for the heterogeneous carbonates, Estaillades and Portland Basebed, which formed wormholes. Pore-scale modelling of flow directly on the voxels showed the differences in the evolution of complex flow fields with changes in dissolution regime. The PDFs of normalized velocity for uniform dissolution showed that the maximum pore velocity within the system decreased as dissolution occurred. This is due to dissolution enlarging pores and pore throats. However, in the wormholing regime, there was a large increase in maximum velocity once the wormhole broke through the length of the core and a preferential flow path was created. Additionally, this study serves as a unique benchmark for pore-scale reactive transport modelling directly on the binarized Micro-CT images. This dynamic pore-scale imaging method offers advantages in helping fully explain the dominant physical and chemical processes at the pore scale so that they may be up-scaled to the reservoir scale for increased accuracy in model prediction.
Maximizing noise energy for noise-masking studies.
Jules Étienne, Cédric; Arleo, Angelo; Allard, Rémy
2017-08-01
Noise-masking experiments are widely used to investigate visual functions. To be useful, noise generally needs to be strong enough to noticeably impair performance, but under some conditions, noise does not impair performance even when its contrast approaches the maximal displayable limit of 100 %. To extend the usefulness of noise-masking paradigms over a wider range of conditions, the present study developed a noise with great masking strength. There are two typical ways of increasing masking strength without exceeding the limited contrast range: use binary noise instead of Gaussian noise or filter out frequencies that are not relevant to the task (i.e., which can be removed without affecting performance). The present study combined these two approaches to further increase masking strength. We show that binarizing the noise after the filtering process substantially increases the energy at frequencies within the pass-band of the filter given equated total contrast ranges. A validation experiment showed that similar performances were obtained using binarized-filtered noise and filtered noise (given equated noise energy at the frequencies within the pass-band) suggesting that the binarization operation, which substantially reduced the contrast range, had no significant impact on performance. We conclude that binarized-filtered noise (and more generally, truncated-filtered noise) can substantially increase the energy of the noise at frequencies within the pass-band. Thus, given a limited contrast range, binarized-filtered noise can display higher energy levels than Gaussian noise and thereby widen the range of conditions over which noise-masking paradigms can be useful.
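A minimal sketch of producing binarized-filtered noise. The cutoff frequencies are illustrative; binarization by sign is applied after the band-pass filtering, in the order the study prescribes:

```python
import numpy as np

def binarized_filtered_noise(shape, low, high, seed=None):
    """Band-pass filter white Gaussian noise in the Fourier domain, then
    binarize by sign: the result uses only two contrast levels yet
    concentrates its energy inside the pass-band (cycles/pixel)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)            # radial frequency grid
    band = (r >= low) & (r <= high)
    filtered = np.fft.ifft2(np.fft.fft2(noise) * band).real
    return np.sign(filtered)                  # binarization after filtering
```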
A statistically defined anthropomorphic software breast phantom.
Lau, Beverly A; Reiser, Ingrid; Nishikawa, Robert M; Bakic, Predrag R
2012-06-01
Digital anthropomorphic breast phantoms have emerged in the past decade because of recent advances in 3D breast x-ray imaging techniques. Computer phantoms in the literature have incorporated power-law noise to represent glandular tissue and branching structures to represent linear components such as ducts. When power-law noise is added to those phantoms in one piece, the simulated fibroglandular tissue is distributed randomly throughout the breast, resulting in dense tissue placement that may not be observed in a real breast. The authors describe a method for enhancing an existing digital anthropomorphic breast phantom by adding binarized power-law noise to a limited area of the breast. Phantoms with (0.5 mm)³ voxel size were generated using software developed by Bakic et al. Between 0% and 40% of adipose compartments in each phantom were replaced with binarized power-law noise (β = 3.0) ranging from 0.1 to 0.6 volumetric glandular fraction. The phantoms were compressed to 7.5 cm thickness, then blurred using a 3 × 3 boxcar kernel and up-sampled to (0.1 mm)³ voxel size using trilinear interpolation. Following interpolation, the phantoms were adjusted for volumetric glandular fraction using global thresholding. Monoenergetic phantom projections were created, including quantum noise and simulated detector blur. Texture was quantified in the simulated projections using power-spectrum analysis to estimate the power-law exponent β from 25.6 × 25.6 mm² regions of interest. Phantoms were generated with total volumetric glandular fraction ranging from 3% to 24%. Values for β (averaged per projection view) were found to be between 2.67 and 3.73. Thus, the range of textures of the simulated breasts covers the textures observed in clinical images. Using these new techniques, digital anthropomorphic breast phantoms can be generated with a variety of glandular fractions and patterns. β values for this new phantom are comparable with published values for breast tissue in x-ray projection modalities. The combination of conspicuous linear structures and binarized power-law noise added to a limited area of the phantom qualitatively improves its realism. © 2012 American Association of Physicists in Medicine.
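A hedged 2D sketch of generating binarized power-law noise (the phantom itself is 3D, and the quantile threshold here stands in for the paper's global-threshold adjustment to a target glandular fraction):

```python
import numpy as np

def binarized_power_law_noise(shape, beta=3.0, glandular_fraction=0.3, seed=None):
    """Shape the spectrum of white noise so its power spectrum falls as
    1/f**beta, then binarize at the quantile giving the requested
    volumetric glandular fraction."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = np.inf                          # suppress the DC term
    amplitude = f ** (-beta / 2.0)            # power spectrum ~ f**-beta
    texture = np.fft.ifft2(np.fft.fft2(white) * amplitude).real
    thresh = np.quantile(texture, 1.0 - glandular_fraction)
    return (texture >= thresh).astype(np.uint8)
```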
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields, such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment and arts. AR can enhance the display output as a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software and imaging algorithms to make the augmented content feel real, actual and present to users. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method, in order to achieve accurate image recognition and overcome the effects of lighting.
Color image generation for screen-scanning holographic display.
Takaki, Yasuhiro; Matsumoto, Yuji; Nakajima, Tatsumi
2015-10-19
Horizontally scanning holography using a microelectromechanical system spatial light modulator (MEMS-SLM) can provide reconstructed images with an enlarged screen size and an increased viewing zone angle. Herein, we propose techniques to enable color image generation for a screen-scanning display system employing a single MEMS-SLM. Higher-order diffraction components generated by the MEMS-SLM for R, G, and B laser lights were coupled by providing proper illumination angles on the MEMS-SLM for each color. An error diffusion technique to binarize the hologram patterns was developed, in which the error diffusion directions were determined for each color. Color reconstructed images with a screen size of 6.2 in. and a viewing zone angle of 10.2° were generated at a frame rate of 30 Hz.
Personal authentication using hand vein triangulation and knuckle shape.
Kumar, Ajay; Prathyusha, K Venkata
2009-09-01
This paper presents a new approach to authenticating individuals using triangulation of hand vein images and simultaneous extraction of knuckle shape information. The proposed method is fully automated and employs palm dorsal hand vein images acquired from low-cost, near-infrared, contactless imaging. The knuckle tips are used as key points for image normalization and extraction of the region of interest. The matching scores are generated in two parallel stages: (i) a hierarchical matching score from the four topologies of triangulation in the binarized vein structures and (ii) a score from the geometrical features consisting of knuckle point perimeter distances in the acquired images. The weighted score-level combination of these two matching scores is used to authenticate individuals. The experimental results from the proposed system using contactless palm dorsal hand vein images are promising (equal error rate of 1.14%) and suggest a more user-friendly alternative for user identification.
Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm
NASA Astrophysics Data System (ADS)
Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.
2018-05-01
A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
Automatic detection of typical dust devils from Mars landscape images
NASA Astrophysics Data System (ADS)
Ogohara, Kazunori; Watanabe, Takeru; Okumura, Susumu; Hatanaka, Yuji
2018-02-01
This paper presents an improved algorithm for automatic detection of Martian dust devils that successfully extracts tiny bright dust devils and obscured large dust devils from two subtracted landscape images. These dust devils are frequently observed using visible cameras onboard landers or rovers. Nevertheless, previous research on automated detection of dust devils has not focused on these common types of dust devils, but on dust devils that appear on images to be irregularly bright and large. In this study, we detect these common dust devils automatically using two kinds of parameter sets for thresholding when binarizing subtracted images. We automatically extract dust devils from 266 images taken by the Spirit rover to evaluate our algorithm. Taking dust devils detected by visual inspection to be ground truth, the precision, recall and F-measure values are 0.77, 0.86, and 0.81, respectively.
Fast Fourier single-pixel imaging via binary illumination.
Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang
2017-09-20
Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
Statistical Inference for Porous Materials using Persistent Homology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moon, Chul; Heath, Jason E.; Mitchell, Scott A.
2017-12-01
We propose a porous materials analysis pipeline using persistent homology. We first compute persistent homology of binarized 3D images of sampled material subvolumes. For each image we compute sets of homology intervals, which are represented as summary graphics called persistence diagrams. We convert persistence diagrams into image vectors in order to analyze the similarity of the homology of the material images using the mature tools for image analysis. Each image is treated as a vector and we compute its principal components to extract features. We fit a statistical model using the loadings of principal components to estimate material porosity, permeability, anisotropy, and tortuosity. We also propose an adaptive version of the structural similarity index (SSIM), a similarity metric for images, as a measure to determine the statistical representative elementary volumes (sREV) for persistent homology. Thus we provide a capability for making a statistical inference of the fluid flow and transport properties of porous materials based on their geometry and connectivity.
Image processing and recognition for biological images
Uchida, Seiichi
2013-01-01
This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits the computational efficiency since similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
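For reference, similarity between such binary codes reduces to a Hamming distance, computable with an XOR followed by a bit count. A minimal sketch for byte-packed codes (e.g., 256-bit binary SIFT codes packed into 32 uint8 bytes each):

```python
import numpy as np

def hamming_distance(codes_a, codes_b):
    """Hamming distance between packed binary codes (uint8 arrays):
    XOR the bytes, then count the set bits per code."""
    xored = np.bitwise_xor(codes_a, codes_b)
    return np.unpackbits(xored, axis=-1).sum(axis=-1)

# e.g. rank a database against one query code:
# d = hamming_distance(db_codes, query_code[None, :])
```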
Image-based mobile service: automatic text extraction and translation
NASA Astrophysics Data System (ADS)
Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.
2010-01-01
We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-containing dark room, which includes an imaging tank, motorized rotating bearing and digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capturing and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following the pre-processing binarization of digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
Binary Sources and Binary Lenses in Microlensing Surveys of MACHOs
NASA Astrophysics Data System (ADS)
Petrovic, N.; Di Stefano, R.; Perna, R.
2003-12-01
Microlensing is an intriguing phenomenon which may yield information about the nature of dark matter. Early observational searches identified hundreds of microlensing light curves. The data set consisted mainly of point-lens light curves and binary-lens events in which the light curves exhibit caustic crossings. Very few mildly perturbed light curves were observed, although this latter type should constitute the majority of binary lens light curves. Di Stefano (2001) has suggested that the failure to take binary effects into account may have influenced the estimates of optical depth derived from microlensing surveys. The work we report on here is the first step in a systematic analysis of binary lenses and binary sources and their impact on the results of statistical microlensing surveys. In order to assess the problem, we ran Monte-Carlo simulations of various microlensing events involving binary stars (both as the source and as the lens). For each event with peak magnification > 1.34, we sampled the characteristic light curve and recorded the chi-squared value when fitting the curve with a point lens model; we used this to assess the perturbation rate. We also recorded the parameters of each system, the maximum magnification, the times at which each light curve started and ended and the number of caustic crossings. We found that both the binarity of sources and the binarity of lenses increased the lensing rate. While the binarity of sources had a negligible effect on the perturbation rates of the light curves, the binarity of lenses had a notable effect. The combination of binary sources and binary lenses produces an observable rate of interesting events exhibiting multiple "repeats" in which the magnification rises above and dips below 1.34 several times. Finally, the binarity of lenses impacted both the durations of the events and the maximum magnifications. This work was supported in part by the SAO intern program (NSF grant AST-9731923) and NASA contracts NAS8-39073 and NAS8-38248 (CXC).
NASA Astrophysics Data System (ADS)
Menke, H. P.; Bijeljic, B.; Andrew, M. G.; Blunt, M. J.
2014-12-01
Sequestering carbon in deep geologic formations is one way of reducing anthropogenic CO2 emissions. When supercritical CO2 mixes with brine in a reservoir, the acid generated has the potential to dissolve the surrounding pore structure. However, the magnitude and type of dissolution are condition dependent. Understanding how small changes in the pore structure, chemistry, and flow properties affect dissolution is paramount for successful predictive modelling. Both 'Pink Beam' synchrotron radiation and a Micro-CT lab source are used in dynamic X-ray microtomography to investigate the pore structure changes during supercritical CO2 injection in carbonate rocks of varying heterogeneity at high temperatures and pressures and various flow rates. Three carbonate rock types were studied, one with a homogeneous pore structure and two heterogeneous carbonates. All samples are practically pure calcium carbonate, but have widely varying rock structures. Flow rate was varied in three successive experiments by over an order of magnitude while keeping all other experimental conditions constant. A 4-mm carbonate core was injected with CO2-saturated brine at 10 MPa and 50 °C. Tomographic images were taken at 30-second to 20-minute time-resolutions during a 2 to 4-hour injection period. A pore network was extracted using a topological analysis of the pore space, and pore-scale flow modelling was performed directly on the binarized images with connected pathways and used to track the altering velocity distributions. Significant differences in dissolution type and magnitude were found for each rock type and flow rate. At the highest flow rates, the homogeneous carbonate was seen to have predominantly uniform dissolution with minor dissolution rate differences between the pores and pore throats. In contrast, the heterogeneous carbonates formed wormholes at high flow rates. At low flow rates the homogeneous rock developed wormholes, while the heterogeneous samples showed evidence of compact dissolution. This study serves as a unique benchmark for pore-scale reactive transport modelling directly on the binarized Micro-CT images. Dynamic pore-scale imaging methods offer advantages in helping explain the dominant processes at the pore scale so that they may be up-scaled for accurate model prediction.
NASA Astrophysics Data System (ADS)
Xu, Jing; Wu, Jian; Feng, Daming; Cui, Zhiming
Serious types of vascular diseases such as carotid stenosis, aneurysm and vascular malformation may lead to brain stroke, which is the third leading cause of death and the number one cause of disability. In the clinical practice of diagnosing and treating cerebral vascular diseases, effective detection and description of the vascular structure in two-dimensional angiography image sequences, i.e., blood vessel skeleton extraction, has long been a difficult problem. This paper discusses two-dimensional blood vessel skeleton extraction based on the level set method. The DSA image is first preprocessed: an anti-concentration diffusion model is used for effective enhancement, and an improved Otsu local threshold segmentation technique based on regional division is used for binarization. Vascular skeleton extraction based on the GMM (group marching method) with fast sweeping theory is then carried out. Experiments show that our approach not only improves the time complexity but also yields good extraction results.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
Iterative cross section sequence graph for handwritten character segmentation.
Dawoud, Amer
2007-08-01
The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.
Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.
del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano
2015-06-17
Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation based on a Gaussian Mixture Model (GMM) (to separate the person from the background), masking (to reduce the dimensions of the images) and binarization (to reduce the size of each image). An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.
A new method to obtain ground control points based on SRTM data
NASA Astrophysics Data System (ADS)
Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi
2013-09-01
GCPs (ground control points) are widely used in remote sensing image registration and geometric correction. Normally, DRG and DOM products are the major data sources from which GCPs are extracted, but high-accuracy DRG and DOM products are usually costly to obtain, and the free ones come without any accuracy guarantee. In order to balance cost and accuracy, this paper proposes a method for extracting GCPs from SRTM data. The method consists of artificial assistance, binarization, data resampling and reshaping. Artificial assistance is used to find which parts of the SRTM data can serve as GCPs, such as islands or sharp coastlines. A binarization algorithm then extracts the shape information of the region while excluding other information. The binary data are resampled to a suitable resolution required by the specific application. Finally, the data are reshaped according to the satellite imaging type to obtain usable GCPs. The proposed method has three advantages. Firstly, the method is easy to implement: unlike DRG or DOM data, which are costly, SRTM data are totally free to access without restrictions. Secondly, SRTM has a high accuracy of about 90 m guaranteed by its producer, so the GCPs derived from it are also of high quality. Finally, given that SRTM data cover nearly all the land surface of the Earth between latitudes -60° and +60°, GCPs produced by the method can cover most important regions of the world. The method can be used for meteorological satellite imagery or similar applications with relatively low accuracy requirements. Extensive simulation tests show the method to be convenient and effective.
Diagnosis of skin cancer using image processing
NASA Astrophysics Data System (ADS)
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel
2014-10-01
In this paper a methodology for classifying skin cancer in images of dermatological spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and therefore this channel is always chosen. This information is point-to-point multiplied by a binary mask, and to this result a Fourier transform written in nonlinear form is applied. Where the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value is in the spectral index range, three types of cancer can be detected. Values found outside this range correspond to benign lesions.
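A minimal sketch of the spectral index computation as described above; the K-law exponent value used here is an assumption, since the abstract does not state it:

```python
import numpy as np

def spectral_index(green_channel, mask, k=0.3):
    """Spectral index: apply a K-law nonlinearity |F|**k * exp(i*phase)
    to the spectrum of the masked green channel, binarize the spectral
    density where its real part is positive, and take the ratio of unit
    values to the mask area. k = 0.3 is an illustrative value."""
    g = green_channel.astype(np.float64) * mask        # point-to-point product
    F = np.fft.fft2(g)
    Fk = (np.abs(F) ** k) * np.exp(1j * np.angle(F))   # K-law nonlinear spectrum
    density = (Fk.real > 0).astype(np.float64)
    return density.sum() / mask.sum()
```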
An improved algorithm of laser spot center detection in strong noise background
NASA Astrophysics Data System (ADS)
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
2018-01-01
Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering is first used to remove noise while preserving the edge details of the image. Secondly, binarization of the laser spot image is carried out to extract the target from the background. Morphological filtering is then performed to eliminate noise points inside and outside the spot. Finally, the edge of the pre-processed spot image is extracted and the laser spot center is obtained using the circle fitting method. Building on the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, enhancing the anti-interference ability of laser spot center detection and improving detection accuracy.
Study of vegetation cover distribution using DVI, PVI, WDVI indices with 2D-space plot
NASA Astrophysics Data System (ADS)
Naji, Taghreed A. H.
2018-05-01
The present work aims to study the effect of using vegetation index techniques on image segmentation for subdividing an image into homogeneous regions. Three vegetation indices have been adopted (the Difference Vegetation Index (DVI), the Perpendicular Vegetation Index (PVI) and the Weighted Difference Vegetation Index (WDVI)) for detecting and monitoring vegetation distribution and healthiness. An image binarization method follows the implementation of the indices to isolate the vegetation areas from the image background. The agricultural regions separated from other land-use regions, and their percentages, are presented for two years (2001 and 2002) of ETM+ scenes. The areas counted using the 2D-space plot technique and the vegetated areas separated using the vegetation indices are also presented. The agricultural regions obtained from the DVI index proved better than those from the other indices, as they coincided more closely with the 2D-space plot segmentation.
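A minimal sketch of the DVI step, DVI = NIR − Red, followed by binarization to isolate vegetated areas; the threshold is illustrative and the bands are assumed to be reflectance-calibrated:

```python
import numpy as np

def dvi_vegetation_mask(nir, red, threshold=0.1):
    """Difference Vegetation Index, DVI = NIR - Red, binarized so that
    pixels above the threshold are flagged as vegetation."""
    dvi = nir.astype(np.float64) - red.astype(np.float64)
    return (dvi > threshold).astype(np.uint8)
```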
Handwritten character recognition using background analysis
NASA Astrophysics Data System (ADS)
Tascini, Guido; Puliti, Paolo; Zingaretti, Primo
1993-04-01
The paper describes a low-cost handwritten character recognizer. It consists of three modules: the 'acquisition' module, the 'binarization' module, and the 'core' module. The core module can be logically partitioned into six steps: character dilation, character circumscription, region and 'profile' analysis, 'cut' analysis, decision tree descent, and result validation. Firstly, it reduces the resolution of the binarized regions and detects the minimum rectangle (MR) which encloses the character; the MR partitions the background into regions that surround the character or are enclosed by it, and allows features such as 'profiles' and 'cuts' to be defined; a 'profile' is the set of vertical or horizontal minimum distances between a side of the MR and the character itself; a 'cut' is a vertical or horizontal image segment delimited by the MR. Then, the core module classifies the character by descending the decision tree on the basis of the analysis of regions around the character, in particular of the 'profiles' and 'cuts,' without using context information. Finally, it recognizes the character or reactivates the core module by analyzing validation test results. The recognizer is largely insensitive to character discontinuity and is able to recognize Arabic numerals and English capital letters. The recognition rate for a 32 × 32 pixel character is about 97% after the first iteration, and over 98% after the second iteration.
Pattern recognition invariant under changes of scale and orientation
NASA Astrophysics Data System (ADS)
Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain
1997-08-01
We have used a modified version of the method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, and lines in a multi-dimensional feature space correspond to each target with out-of-plane orientations over 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images
Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu
2013-01-01
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856
Shi, Haiyun; Gao, Chao; Dong, Changming; Xia, Changshui; Xu, Guanglai
2017-01-01
River islands are sandbars formed by scouring and silting. Their evolution is affected by several factors, among which are runoff and sediment discharge. The spatial-temporal evolution of seven river islands in the Nanjing Section of the Yangtze River of China was examined using TM (Thematic Mapper) and ETM+ (Enhanced Thematic Mapper Plus) images from 1985 to 2015 at five-year intervals. The following approaches were applied in this study: the threshold value method, binarization model, image registration, image cropping, convolution and cluster analysis. Annual runoff and sediment discharge data as measured at the Datong hydrological station upstream of the Nanjing section were also used to determine the roles and impacts of various factors. The results indicated that: (1) TM/ETM+ images met the criteria of information extraction of river islands; (2) generally, the total area of these islands in this section and their changing rate decreased over time; (3) sediment and river discharge were the most significant factors in island evolution. They directly affect river islands through silting or erosion. Additionally, anthropogenic influences could play increasingly important roles. PMID:28953218
Freezing effect on bread appearance evaluated by digital imaging
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.
1999-01-01
In marketing channels, bread is sometimes delivered in a frozen state for distribution. Changes occur in the physical dimensions, crumb grain and appearance of slices. Ten loaves, twelve bread slices per loaf, were scanned for digital image analysis and then frozen in a commercial refrigerator. The bread slices were stored for four weeks, scanned again, permitted to thaw, and scanned a third time. Image features were extracted to determine the shape, size and image texture of the slices. Different gray-level thresholds were set to detect changes that occurred in the crumb, and images were binarized at these settings. The number of pixels falling into these gray-level settings was determined for each slice. Image texture features of subimages of each slice were calculated to quantify slice crumb grain. The image features of slice size showed shrinking of bread slices as a result of freezing and storage, although the shape of the slices did not change markedly. Visible crumb texture changes occurred, and these changes were depicted by changes in image texture features. Image texture features showed that the slice crumb changed differently at the center of a slice compared to the peripheral area close to the crust. Image texture and slice features were sufficient for discrimination of slices before and after freezing and after thawing.
NASA Astrophysics Data System (ADS)
Sousa, Maria A. Z.; Bakic, Predrag R.; Schiabel, Homero; Maidment, Andrew D. A.
2017-03-01
Digital breast tomosynthesis (DBT) has been shown to be an effective imaging tool for breast cancer diagnosis, as it provides three-dimensional images of the breast with minimal tissue overlap. The quality of the reconstructed image depends on many factors that can be assessed using uniform or realistic phantoms. In this paper, we created four phantom models using an anthropomorphic software breast phantom and compared four methods to evaluate the gray-scale response in terms of the contrast, noise, and detectability of adipose and glandular tissues binarized according to the phantom ground truth. For each method, circular regions of interest (ROIs) of various sizes, quantities, and positions were selected inside a square area in the phantom. We also estimated the percent density of the simulated breast and the capability of distinguishing the two tissues by receiver operating characteristic (ROC) analysis. The results show that the methods are sensitive to the ROI size, placement, and the slices considered.
Rock fracture skeleton tracing by image processing and quantitative analysis by geometry features
NASA Astrophysics Data System (ADS)
Liang, Yanjie
2016-06-01
In rock engineering, fracture measurement is important for many applications. This paper proposes a novel method for rock fracture skeleton tracing and analysis. For skeleton localization, the curvilinear fractures are enhanced at multiple scales based on a Hessian matrix; after image binarization, clutter is removed by image analysis. Subsequently, the fracture skeleton is extracted via ridge detection combined with a distance transform and a thinning algorithm, after which gap sewing and burr removal repair the skeleton. For skeleton analysis, the roughness and distribution of a fracture network are described by the fractal dimensions D_s and D_b, respectively; the intersection and fragmentation of a fracture network are characterized by the average number of ends and junctions per fracture, N_average, and the average length per fracture, L_average. Three rock fracture surfaces are analyzed in experiments, and the results verify that both the fracture-tracing accuracy and the analysis feasibility of the new method are satisfactory.
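A sketch of a comparable Hessian-based enhancement and skeletonization pipeline using scikit-image; the input file and parameters are hypothetical, and the paper's own ridge detection and skeleton repair steps are not reproduced:

    from skimage import io, filters, morphology

    img = io.imread("fracture.png", as_gray=True)                  # hypothetical input
    ridges = filters.frangi(img, sigmas=range(1, 6),
                            black_ridges=True)                     # multiscale Hessian enhancement
    binary = ridges > filters.threshold_otsu(ridges)               # image binarization
    binary = morphology.remove_small_objects(binary, min_size=64)  # clutter removal
    skeleton = morphology.skeletonize(binary)                      # fracture skeleton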
Binarization of apodizers by adapted one-dimensional error diffusion method
NASA Astrophysics Data System (ADS)
Kowalczyk, Marek; Cichocki, Tomasz; Martinez-Corral, Manuel; Andres, Pedro
1994-10-01
Two novel algorithms for the binarization of continuous rotationally symmetric real positive pupil filters are presented. Both algorithms are based on the 1-D error-diffusion concept. The original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the pupils with equal-width zones give a Fraunhofer diffraction pattern more similar to that of the original continuous-tone pupil than do those with equal-area zones, assuming in both cases the same resolution limit of the printing device.
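A minimal sketch of 1-D error diffusion applied to a radially sampled transmittance profile, which corresponds to the equal-width-zone variant when the radius is sampled uniformly; the Gaussian apodizer is only an example profile:

    import numpy as np

    def error_diffuse_1d(profile):
        """Binarize a 1-D transmittance profile in [0, 1] by diffusing
        the quantization error into the next sample."""
        out = np.empty_like(profile)
        err = 0.0
        for i, t in enumerate(profile):
            v = t + err
            out[i] = 1.0 if v >= 0.5 else 0.0   # transparent (1) or opaque (0) zone
            err = v - out[i]
        return out

    r = np.linspace(0.0, 1.0, 512)    # uniform radial sampling -> equal-width annuli
    apodizer = np.exp(-4.0 * r**2)    # example gray-tone pupil profile
    zones = error_diffuse_1d(apodizer)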
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.
Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han
2017-09-07
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length-estimation error of 7.3%.
A human visual based binarization technique for histological images
NASA Astrophysics Data System (ADS)
Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos
2017-05-01
In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step, and it is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-ray, phase-contrast microscopy, and histological images, presents problems such as high variability in human anatomy and variation across modalities. Recent advances in computer-aided diagnosis of histological images help facilitate the detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to specific colors to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
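For reference, several of the named baseline thresholds are available in scikit-image; a sketch comparing them on a hypothetical stained image (window size and k are typical defaults, not the paper's values):

    from skimage import io, filters

    img = io.imread("he_stain.png", as_gray=True)   # hypothetical H&E-stained image
    otsu    = img > filters.threshold_otsu(img)                             # global
    niblack = img > filters.threshold_niblack(img, window_size=25, k=0.2)   # local mean/std
    sauvola = img > filters.threshold_sauvola(img, window_size=25, k=0.2)   # Niblack variant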
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Liping; Zhu, Fulong, E-mail: zhufulong@hust.edu.cn; Duan, Ke
Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
Ultrasonic power measurement system based on acousto-optic interaction.
He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan
2016-05-01
Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
Ultrasonic power measurement system based on acousto-optic interaction
NASA Astrophysics Data System (ADS)
He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan
2016-05-01
Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
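A sketch of the described filtering-binarization-contour chain in OpenCV, extracting the integrated light intensity of each diffraction order; the file name and parameters are hypothetical, and the mapping from intensity to ultrasonic power is not reproduced here:

    import cv2
    import numpy as np

    img = cv2.imread("diffraction.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frame
    blur = cv2.GaussianBlur(img, (5, 5), 0)                       # filtering
    _, bw = cv2.threshold(blur, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # binarization
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)       # contour extraction
    intensities = []
    for c in contours:                                            # per diffraction order
        mask = np.zeros_like(img)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)
        intensities.append(float(np.sum(img[mask > 0])))          # integrated gray level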
Vision-based surface defect inspection for thick steel plates
NASA Astrophysics Data System (ADS)
Yun, Jong Pil; Kim, Dongseob; Kim, KyuHwan; Lee, Sang Jun; Park, Chang Hyun; Kim, Sang Woo
2017-05-01
There are several types of steel products, such as wire rods, cold-rolled coils, hot-rolled coils, thick plates, and electrical sheets. Surface stains on cold-rolled coils are considered defects; however, surface stains on thick plates are not. A conventional optical structure is composed of a camera and a lighting module. Here, a defect inspection system is proposed that uses a dual-lighting structure to distinguish uneven defects from color changes caused by surface noise. In addition, an image processing algorithm that can be used to detect defects is presented. The algorithm consists of a Gabor filter that detects the switching pattern, followed by a binarization step that extracts the shape of the defect. The optics module and detection algorithm, optimized using a simulator, were installed at a real plant, and experiments conducted on thick-steel-plate images obtained from the production line show the effectiveness of the proposed method.
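A sketch of a single-orientation Gabor filtering and Otsu binarization step of the kind described; the kernel parameters and file name are hypothetical and would need tuning to the real stripe pattern:

    import cv2
    import numpy as np

    img = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)     # hypothetical plate image
    kern = cv2.getGaborKernel((31, 31), sigma=4.0, theta=0.0,
                              lambd=10.0, gamma=0.5, psi=0.0)
    resp = cv2.filter2D(img, cv2.CV_32F, kern)              # Gabor response
    resp = cv2.normalize(resp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, defects = cv2.threshold(resp, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # defect shape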
Bubble structure evaluation method of sponge cake by using image morphology
NASA Astrophysics Data System (ADS)
Kato, Kunihito; Yamamoto, Kazuhiko; Nonaka, Masahiko; Katsuta, Yukiyo; Kasamatsu, Chinatsu
2007-01-01
Nowadays, many evaluation methods using image processing are proposed for the food industry. These methods are becoming a new means of evaluation besides the sensory test and solid-state measurement that have traditionally been used for quality evaluation. The goal of our research is structure evaluation of sponge cake by image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. Analysis of the bubble structure is one of the important properties for understanding the characteristics of the cake from an image. To acquire the cake image, we first cut the cakes and scanned their surfaces with a CIS scanner; because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features. The input image is binarized, and the bubble features are extracted by morphology analysis. To evaluate the result of feature extraction, we compared its correlation with the "Size of the bubble" score from the sensory test. The results show that bubble extraction using morphology analysis gives good correlation, demonstrating that our method performs as well as the subjective evaluation.
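A sketch of the binarize-then-morphology idea using OpenCV, with an opening to isolate bubble-sized blobs; the threshold mode, kernel size, and file name are assumptions:

    import cv2

    img = cv2.imread("cake_slice.png", cv2.IMREAD_GRAYSCALE)
    # dark, blurred bubble regions become foreground after inverse Otsu thresholding
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    bubbles = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)    # suppress speckle
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bubbles)
    areas = stats[1:, cv2.CC_STAT_AREA]                       # bubble-size distribution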
Measuring the Number of M Dwarfs per M Dwarf Using Kepler Eclipsing Binaries
NASA Astrophysics Data System (ADS)
Shan, Yutong; Johnson, John A.; Morton, Timothy D.
2015-11-01
We measure the binarity of detached M dwarfs in the Kepler field with orbital periods in the range of 1-90 days. Kepler’s photometric precision and nearly continuous monitoring of stellar targets over time baselines ranging from 3 months to 4 years make its detection efficiency for eclipsing binaries nearly complete over this period range and for all radius ratios. Our investigation employs a statistical framework akin to that used for inferring planetary occurrence rates from planetary transits. The obvious simplification is that eclipsing binaries have a vastly improved detection efficiency that is limited chiefly by their geometric eclipse probabilities. For the M-dwarf sample observed by the Kepler Mission, the fractional incidence of eclipsing binaries implies that there are 0.11 (+0.02, -0.04) close stellar companions per apparently single M dwarf. Our measured binarity is higher than previous inferences of the occurrence rate of close binaries via radial velocity techniques, at roughly the 2σ level. This study represents the first use of eclipsing binary detections from a high-quality transiting-planet mission to infer binary statistics. Application of this statistical framework to the eclipsing binaries discovered by future transit surveys will establish better constraints on the short-period M+M binary rate, as well as binarity measurements for stars of other spectral types.
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-06-08
A new method for extracting dots from the reflected-light image of a porous silicon (PSi) microarray is proposed in this paper. The method consists of three parts: pretreatment, tilt correction, and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex background and complete the binarization processing of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm places the dots in a regular arrangement and excludes the edges and bright spots. This method could be utilized for fast, accurate, and automatic dot extraction from PSi microarray reflected-light images.
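A sketch of minimum-bounding-rectangle tilt correction in OpenCV, one plausible reading of the step described; the input is assumed to be an already-binarized array-cell image:

    import cv2

    bw = cv2.imread("psi_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary image
    pts = cv2.findNonZero(bw)
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)          # minimum bounding rectangle
    if w < h:
        angle += 90.0                                       # normalize rectangle orientation
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    corrected = cv2.warpAffine(bw, M, (bw.shape[1], bw.shape[0]))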
Image Processing of Porous Silicon Microarray in Refractive Index Change Detection
Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi
2017-01-01
A new method for extracting dots from the reflected-light image of a porous silicon (PSi) microarray is proposed in this paper. The method consists of three parts: pretreatment, tilt correction, and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells in the corrected image are segmented into dots that are as large as possible and equally spaced. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex background and complete the binarization processing of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm places the dots in a regular arrangement and excludes the edges and bright spots. This method could be utilized for fast, accurate, and automatic dot extraction from PSi microarray reflected-light images. PMID:28594383
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), which exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve the spatial structure of the feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH achieves superior performance over other state-of-the-art hashing methods.
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing
Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu
2017-01-01
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length-estimation error of 7.3%. PMID:28880254
NASA Astrophysics Data System (ADS)
Kowalczyk, Marek; Martínez-Corral, Manuel; Cichocki, Tomasz; Andrés, Pedro
1995-02-01
Two novel algorithms for the binarization of continuous rotationally symmetric real and positive pupil filters are presented. Both algorithms are based on the one-dimensional error-diffusion concept. In our numerical experiment an original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the filter with equal-width zones gives a Fraunhofer diffraction pattern more similar to that of the original gray-tone apodizer than does the filter with equal-area zones, assuming in both cases the same resolution limit of the device used to print the filters.
Finger vein extraction using gradient normalization and principal curvature
NASA Astrophysics Data System (ADS)
Choi, Joon Hwan; Song, Wonseok; Kim, Taejeong; Lee, Seung-Rae; Kim, Hee Chan
2009-02-01
Finger vein authentication is a personal identification technology using finger vein images acquired by infrared imaging. It is one of the newest technologies in biometrics. Its main advantage over other biometrics is the low risk of forgery or theft, due to the fact that finger veins are not normally visible to others. Extracting finger vein patterns from infrared images is the most difficult part in finger vein authentication. Uneven illumination, varying tissues and bones, and changes in the physical conditions and the blood flow make the thickness and brightness of the same vein different in each acquisition. Accordingly, extracting finger veins at their accurate positions regardless of their thickness and brightness is necessary for accurate personal identification. For this purpose, we propose a new finger vein extraction method which is composed of gradient normalization, principal curvature calculation, and binarization. As local brightness variation has little effect on the curvature and as gradient normalization makes the curvature fairly uniform at vein pixels, our method effectively extracts finger vein patterns regardless of the vein thickness or brightness. In our experiment, the proposed method showed notable improvement as compared with the existing methods.
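A sketch of principal-curvature extraction via Hessian eigenvalues in scikit-image; the sigma, percentile cut, and file name are hypothetical, and the paper's gradient normalization step is only indicated:

    import numpy as np
    from skimage import io
    from skimage.feature import hessian_matrix, hessian_matrix_eigvals

    img = io.imread("finger_ir.png", as_gray=True)    # hypothetical infrared image
    # (gradient normalization of img would go here, per the paper)
    H = hessian_matrix(img, sigma=2.0)
    k1, k2 = hessian_matrix_eigvals(H)                # principal curvatures, k1 >= k2
    veins = k1 > np.percentile(k1, 95)                # binarize strong valley responses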
Stellar Companions of Exoplanet Host Stars in K2
NASA Astrophysics Data System (ADS)
Matson, Rachel; Howell, Steve; Horch, Elliott; Everett, Mark
2018-01-01
Stellar multiplicity has significant implications for the detection and characterization of exoplanets. A stellar companion can mimic the signal of a transiting planet or distort the true planetary radii, leading to improper density estimates and over-predicting the occurrence rates of Earth-sized planets. Determining the fraction of exoplanet host stars that are also binaries allows us to better determine planetary characteristics as well as establish the relationship between binarity and planet formation. Using high-resolution speckle imaging to obtain diffraction limited images of K2 planet candidate host stars we detect stellar companions within one arcsec and up to six magnitudes fainter than the host star. By comparing our observed companion fraction to TRILEGAL star count simulations, and using the known detection limits of speckle imaging, we find the binary fraction of K2 planet host stars to be similar to that of Kepler host stars and solar-type field stars. Accounting for stellar companions in exoplanet studies is therefore essential for deriving true stellar and planetary properties as well as maximizing the returns for TESS and future exoplanet missions.
NASA Astrophysics Data System (ADS)
Gu, J.; Yang, H.; Fan, F.; Su, M.
2017-12-01
A transmission- and reflection-coupled ultrasonic process tomography system has been developed, characterized by a proposed dual-mode (DM) reconstruction algorithm as well as an adaptive search approach that determines an optimal image threshold during image binarization. With respect to hardware, a cylindrical miniaturized transducer using polyvinylidene fluoride (PVDF) films is designed to improve the accuracy of time-of-flight (TOF) measurement and extend the lowest detection limit of particle size. In addition, the development of a range-gating technique for identifying transmission and reflection waves during scanning is discussed. A particle system with four iron particles is then investigated numerically and experimentally to evaluate the proposed methods. The sound pressure distribution in the imaging area is predicted numerically, followed by an analysis of the relationship between the emitting surface width of the transducer and the particle size. After processing the experimental data for effective waveform extraction and fusion, a comparison between reconstructed results from transmission-mode (TM), reflection-mode (RM), and dual-mode reconstructions is carried out; the latter shows clear improvements, from reduced blurring to enhanced particle boundaries.
Chinese character recognition based on Gabor feature extraction and CNN
NASA Astrophysics Data System (ADS)
Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan
2018-03-01
As an important application in the field of text-line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition is very difficult. To solve this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps of the eight orientations of the Chinese characters are extracted. Third, the feature maps from the Gabor filters and the original image are convolved with learned kernels, and the results of the convolution are the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The moment-preserving thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength lies in the fact that the binary values that MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power-scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
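For concreteness, a sketch of classical moment-preserving (Tsai-style) thresholding on a gray-level signal, the formulation this scheme builds on; applying it per audio block and embedding the watermark in the RSS of z0 and z1 is the paper's contribution and is not shown:

    import numpy as np

    def moment_preserving_threshold(signal):
        """Return (threshold, z0, z1) preserving the first three moments."""
        g = np.asarray(signal, dtype=np.float64).ravel()
        m1, m2, m3 = g.mean(), (g**2).mean(), (g**3).mean()
        cd = m2 - m1**2
        c0 = (m1*m3 - m2**2) / cd
        c1 = (m1*m2 - m3) / cd
        disc = np.sqrt(c1**2 - 4.0*c0)
        z0, z1 = (-c1 - disc) / 2.0, (-c1 + disc) / 2.0   # representative values
        p0 = (z1 - m1) / (z1 - z0)                        # fraction mapped to z0
        t = np.quantile(g, p0)                            # p0-th gray-level quantile
        return t, z0, z1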
NASA Astrophysics Data System (ADS)
Tatebe, Hironobu; Kato, Kunihito; Yamamoto, Kazuhiko; Katsuta, Yukio; Nonaka, Masahiko
2005-12-01
Nowadays, many evaluation methods using image processing have been proposed for the food industry. These methods are becoming a new means of evaluation besides the sensory test and solid-state measurement used for quality evaluation. An advantage of image processing is that it evaluates objectively. The goal of our research is structure evaluation of sponge cake by image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. Analysis of the bubble structure is one of the important properties for understanding the characteristics of the cake from the image. To acquire the cake image, we first cut the cakes and scanned their surfaces with a CIS scanner. Because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features. First, the input image is binarized, and the bubble features are extracted by morphology analysis. To evaluate the result of feature extraction, we compared its correlation with the "Size of the bubble" score from the sensory test. The results show that bubble extraction using morphology analysis gives good correlation, demonstrating that our method performs as well as the subjective evaluation.
Real-time distortion correction for visual inspection systems based on FPGA
NASA Astrophysics Data System (ADS)
Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin
2008-03-01
Visual inspection is a new technology based on research in computer vision, which focuses on measuring an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in a programmable device, an FPGA. As a wide-angle lens is adopted in the system, the output images have serious distortion. Limited by the computing speed of computers, software can only correct the distortion of static images, not dynamic ones. To meet the real-time requirement, we design a distortion correction system based on an FPGA. The hardware distortion correction works as follows: the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware look-up table, from which data are read out to correct the gray levels. The major benefit of using an FPGA is that the same circuit can be reused for other circularly symmetric wide-angle lenses without modification.
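A software analogue of the look-up-table correction described, sketched with OpenCV remap; the distortion coefficient and image size are hypothetical (on the FPGA, the two maps would be precomputed and stored in hardware):

    import cv2
    import numpy as np

    h, w = 480, 640
    k1 = -0.3                                   # hypothetical radial distortion coefficient
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    xn, yn = (xs - w/2) / (w/2), (ys - h/2) / (h/2)
    r2 = xn**2 + yn**2
    map_x = (xn * (1 + k1*r2)) * (w/2) + w/2    # computed once, stored as the LUT
    map_y = (yn * (1 + k1*r2)) * (h/2) + h/2
    frame = cv2.imread("wide_angle.png", cv2.IMREAD_GRAYSCALE)
    undistorted = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)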
Segmentation of prostate biopsy needles in transrectal ultrasound images
NASA Astrophysics Data System (ADS)
Krefting, Dagmar; Haupt, Barbara; Tolxdorff, Thomas; Kempkensteffen, Carsten; Miller, Kurt
2007-03-01
Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the gold standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound imaging (TRUS). Exact localization of the extracted tissue within the gland is desired for more specific diagnosis and provides better therapy planning. While the orientation and position of the needle within the clinical TRUS image are constrained, the apparent length and visibility of the needle vary strongly. Marker lines are present, and tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient-, or edge-detection-based segmentation methods fail. Therefore, a multivariate statistical classifier is implemented. The independent feature model is built by supervised learning using a set of manually segmented needles. The feature space is spanned by common binary object features such as size and eccentricity, as well as imaging-system-dependent features like distance and orientation relative to the marker line. Object extraction is done by multi-step binarization of the region of interest. The ROI is automatically determined at the beginning of the segmentation, and marker lines are removed from the images. The segmentation itself is realized by scale-invariant classification using maximum-likelihood estimation with the Mahalanobis distance as the discriminator. The technique presented here was successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for biopsy-needle localization in clinical prostate-biopsy TRUS images.
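A sketch of the Mahalanobis-distance discriminator on object features; the training file, feature choice, and acceptance cut are hypothetical stand-ins for the supervised learning described:

    import numpy as np

    feats = np.load("needle_features.npy")       # hypothetical training matrix (N x d)
    mu = feats.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(feats, rowvar=False))

    def mahalanobis(x):
        d = x - mu
        return float(np.sqrt(d @ cov_inv @ d))

    candidate = np.array([120.0, 0.95, 14.2])    # e.g. size, eccentricity, marker distance
    is_needle = mahalanobis(candidate) < 3.0     # hypothetical acceptance radius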
Design and control of active vision based mechanisms for intelligent robots
NASA Technical Reports Server (NTRS)
Wu, Liwei; Marefat, Michael M.
1994-01-01
In this paper, we propose the design of an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera-height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets is suggested, using an evaluation function that represents human visual response to outside stimuli. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method operating on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
Spectral and Photometric Data of Be Star, EM Cep
NASA Astrophysics Data System (ADS)
Kochiashvili, Nino; Natsvilishvili, Rezo; Kochiashvili, Ia; Vardosanidze, Manana; Beradze, Sopia; Pannicke, Anna
The subject of investigation in this project is EM Cep, a Be-type giant variable star. The star has been established to have a dual nature: at times emission lines are seen in its spectrum, and at other times only absorption lines are observable and emission lines are absent. This means the star is not always in the Be state; a Be state persists for a few months at a time. EM Cep shows flare activity too. The causes of its photometric and spectral variability remain to be established, and different mechanisms provoking the Be phenomenon are possible. The character of the light-curve variability suggests that the star could be a short-period variable of the λ Eri type, although the available data are insufficient to exclude binarity. On the basis of observations carried out at Abastumani Observatory, a light curve with two minima and two maxima was revealed; these data also accord with the half-period, so a light curve with one minimum and one maximum can equally be considered. Both cases agree well with the character of the variability. For the binary case, a set of orbital elements has already been obtained at Abastumani Observatory using the Wilson-Devinney code. The elements correspond to an acceptable model of a real close binary star. Nevertheless, the true nature of the star has not yet been established. To solve this problem, we need high-resolution spectral data: radial velocity curves would make it possible to answer the question of the star's binarity, and spectral lines of a second component might be revealed if the star is a binary. Since 2014, we have renewed UBVRI photometric observations of EM Cep in Abastumani using a 48-cm telescope with a CCD device. Spectral observations are made at the Shamakhy Observatory in Azerbaijan. Our German colleagues have been observing the star since March 2017 at the observatory of the Jena University. We plan to carry out a joint analysis of the observations from the three observatories to explain the star's observational peculiarities.
Three-dimensional imaging of porous media using confocal laser scanning microscopy.
Shah, S M; Crawshaw, J P; Boek, E S
2017-02-01
In the last decade, imaging techniques capable of reconstructing three-dimensional (3-D) pore-scale models have played a pivotal role in the study of fluid flow through complex porous media. In this study, we present advances in the application of confocal laser scanning microscopy (CLSM) to image, reconstruct, and characterize complex porous geological materials with hydrocarbon-reservoir and CO2-storage potential. CLSM has the unique capability of producing 3-D thin optical sections of a material, with a wide field of view and submicron resolution in the lateral and axial planes. However, CLSM is limited in the depth (z-dimension) that can be imaged in porous materials. In this study, we introduce a 'grind and slice' technique to overcome this limitation. We discuss the practical and technical aspects of the confocal imaging technique with application to complex rock samples, including Mt. Gambier and Ketton carbonates. We then describe the complete image processing workflow, from filtering to segmenting the raw 3-D confocal volumetric data into pores and grains. Finally, we use the resulting 3-D pore-scale binarized confocal data to quantitatively determine petrophysical pore-scale properties such as total porosity, macro- and microporosity, and single-phase permeability using lattice Boltzmann (LB) simulations, validated by experiments.
Choi, Woo June; Pepple, Kathryn L; Wang, Ruikang K
2018-05-24
In preclinical vision research, cell grading in small animal models is essential for the quantitative evaluation of intraocular inflammation. Here, we present a new and practical optical coherence tomography (OCT) image analysis method for the automated detection and counting of aqueous cells in the anterior chamber (AC) of a rodent model of uveitis. Anterior segment OCT (AS-OCT) images are acquired with a 100 kHz swept-source OCT (SS-OCT) system. The proposed method consists of two steps. In the first step, we despeckle and binarize each OCT image. After removing AS structures in the binary image, we then apply area thresholding to isolate cell-like objects. Potential cell candidates are selected based on their best fit to roundness. The second step performs the cell counting within the whole AC, in which additional cell-tracking analysis is conducted on the successive OCT images to eliminate redundancy in cell counting. Finally, 3-D cell grading using the proposed method is demonstrated in longitudinal OCT imaging of a mouse model of anterior uveitis in vivo. (Graphical abstract: rendering of the anterior segment (orange) of the mouse eye and automatically counted anterior chamber cells (green); the inset is a top view of the rendering, showing the cell distribution across the anterior chamber.)
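A sketch of the area-then-roundness gating described for cell candidates, in OpenCV; the area bounds and roundness cut are placeholders, and despeckling plus AC-structure removal are assumed to have produced the binary input:

    import cv2
    import numpy as np

    bw = cv2.imread("bscan_binary.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary B-scan
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cells = []
    for c in contours:
        area = cv2.contourArea(c)
        if not 5 <= area <= 200:                   # area thresholding (placeholder bounds)
            continue
        perim = cv2.arcLength(c, True)
        roundness = 4.0 * np.pi * area / perim**2  # 1.0 for a perfect circle
        if roundness > 0.7:                        # keep the most circular candidates
            cells.append(c)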
Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong
2011-01-01
Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated the TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that the reliability of global network metrics was overall low, threshold-sensitive, and dependent on several factors: scanning time interval (TI, long-term > short-term), network membership (NM, networks excluding negative correlations > networks including negative correlations), and network type (NT, binarized networks > weighted networks). This dependence was modulated by a further factor, the node definition (ND) strategy. Local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the above-mentioned four factors. Simulation analysis revealed that global network metrics were extremely sensitive, though to varying degrees, to noise in functional connectivity, and that weighted networks generated numerically more reliable results compared with binarized networks. Nodal network metrics showed high resistance to noise in functional connectivity, and no NT-related differences were found in this resistance. These findings provide important implications for choosing reliable analytical schemes and network metrics of interest. PMID:21818285
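A sketch of the binarization step that turns a functional-connectivity matrix into an adjacency matrix at a fixed edge density, plus nodal degree, the metric the study found most reliable; the density value and random data are illustrative only:

    import numpy as np

    def binarize_network(corr, density=0.1):
        """Keep the strongest correlations so the edge density equals `density`."""
        c = corr.copy()
        np.fill_diagonal(c, 0.0)
        n = c.shape[0]
        n_edges = int(density * n * (n - 1) / 2)
        thr = np.sort(c[np.triu_indices(n, k=1)])[-n_edges]
        return (c >= thr).astype(int)

    ts = np.random.randn(90, 200)       # 90 regions x 200 time points (toy data)
    adj = binarize_network(np.corrcoef(ts))
    degree = adj.sum(axis=1)            # nodal degree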
Error diffusion concept for multi-level quantization
NASA Astrophysics Data System (ADS)
Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof
1990-11-01
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
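A sketch of multi-level error diffusion with the standard Floyd-Steinberg weights (an assumption; the paper studies how the intermediate threshold parameters can be varied):

    import numpy as np

    def error_diffusion_multilevel(img, levels=4):
        """Quantize a grayscale image to `levels` uniform gray values,
        diffusing each pixel's quantization error to its neighbors."""
        f = img.astype(np.float64) / 255.0
        out = np.zeros_like(f)
        h, w = f.shape
        for y in range(h):
            for x in range(w):
                old = f[y, x]
                new = np.round(old * (levels - 1)) / (levels - 1)  # nearest level
                out[y, x] = new
                err = old - new
                if x + 1 < w:
                    f[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        f[y + 1, x - 1] += err * 3 / 16
                    f[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        f[y + 1, x + 1] += err * 1 / 16
        return (out * 255).astype(np.uint8)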
NASA Astrophysics Data System (ADS)
Alvarez, M.; Hernandez, M. M.; Michel, E.; Jiang, S. Y.; Belmonte, J. A.; Chevreton, M.; Massacrier, G.; Liu, Y. Y.; Li, Z. P.; Goupil, M. J.; Cortes, T. Roca; Mangeney, A.; Dolez, N.; Valtier, J. C.; Vidal, I.; Sperl, M.; Talon, S.
1998-12-01
New pulsation modes in two delta Scuti stars of the Praesepe cluster, BQ and BW Cnc, were detected during the STEPHI VI campaign in 1995. In particular, 3 frequencies for BQ Cnc and 9 frequencies for BW Cnc were found above a 99% confidence level. The possible presence of a g-mode in BQ Cnc is discussed, considering its binarity. The effect of mutual interference between very close detected frequencies in BW Cnc during the observations is also considered. This last effect reveals the necessity of long observing runs in order to avoid its influence on the final number of detected modes. In such a situation, studies of secular amplitude changes can be strongly affected.
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into one or zero; the rule-creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using RStudio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
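The same pipeline in Python terms (the paper used R): a hedged sketch with the mlxtend library, where the toy one-hot failure log, column names, and thresholds are all hypothetical:

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    # binarization step: each row is an event window, each column a one-hot symptom/failure
    df = pd.DataFrame({
        "overheat":     [1, 1, 0, 1, 0],
        "vibration":    [1, 0, 0, 1, 1],
        "spindle_fail": [1, 1, 0, 1, 0],
    }).astype(bool)

    freq = apriori(df, min_support=0.4, use_colnames=True)    # frequent itemsets
    rules = association_rules(freq, metric="confidence", min_threshold=0.8)  # IF-THEN rules
    print(rules[["antecedents", "consequents", "support", "confidence"]])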
NASA Astrophysics Data System (ADS)
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features showed high FER rates. Evaluation of the adaptive texture features shows performance competitive with the nonadaptive features and higher than other state-of-the-art approaches.
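A sketch of the center-symmetric LBP (CS-LBP) code for an 8-neighbor ring, the building block whose neighborhood size the authors adapt; the radius and threshold here are fixed, not granulometrically chosen:

    import numpy as np

    def cs_lbp(img, r=1, thresh=0.01):
        """4-bit CS-LBP: compare the 4 center-symmetric pixel pairs of the
        8-neighborhood at radius r."""
        g = img.astype(np.float64)
        h, w = g.shape
        pairs = [((-r, 0), (r, 0)), ((-r, r), (r, -r)),
                 ((0, r), (0, -r)), ((r, r), (-r, -r))]
        code = np.zeros((h - 2*r, w - 2*r), dtype=np.uint8)
        for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
            a = g[r+dy1:h-r+dy1, r+dx1:w-r+dx1]
            b = g[r+dy2:h-r+dy2, r+dx2:w-r+dx2]
            code |= ((a - b) > thresh).astype(np.uint8) << bit
        return code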
Su, Mingzhe; Ma, Yan; Zhang, Xiangfen; Wang, Yan; Zhang, Yuping
2017-01-01
The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy in addition to two similarity measures for the MBR-SIFT descriptor are proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed.
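As a hedged illustration of descriptor binarization in general (not the MBR-SIFT coding itself, which first reorders cells and directions): median-threshold binarization of SIFT vectors with Hamming matching:

    import numpy as np

    def binarize_sift(desc):
        """Binarize 128-D SIFT descriptors by their per-descriptor median."""
        med = np.median(desc, axis=1, keepdims=True)
        return (desc > med).astype(np.uint8)

    def hamming(a, b):
        return int(np.count_nonzero(a != b))     # fast coarse-matching measure

    d1 = binarize_sift(np.random.rand(10, 128))  # toy descriptors
    d2 = binarize_sift(np.random.rand(10, 128))
    dist = hamming(d1[0], d2[0])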
Su, Mingzhe; Ma, Yan; Zhang, Xiangfen; Wang, Yan; Zhang, Yuping
2017-01-01
The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy in addition to two similarity measures for the MBR-SIFT descriptor are proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed. PMID:28542537
Automatic detection of zebra crossings from mobile LiDAR data
NASA Astrophysics Data System (ADS)
Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.
2015-07-01
An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels at the borders of the white marks. Once a road marking is detected, its position is calculated. This information is valuable for road managers who use Geographic Information Systems for inventory purposes. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings; that test showed a completeness of 83%. Non-detected marks mainly result from paint deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
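A sketch of the intensity-raster processing chain in OpenCV, ending with a Standard Hough Transform whose near-parallel peaks would then be tested against the logical constraints; all parameters and the file name are hypothetical:

    import cv2
    import numpy as np

    inten = cv2.imread("intensity_raster.png", cv2.IMREAD_GRAYSCALE)  # rasterized LiDAR intensity
    _, bw = cv2.threshold(inten, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # paint vs. pavement
    bw = cv2.medianBlur(bw, 5)                                    # remove noisy points
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)            # close gaps at stripe borders
    lines = cv2.HoughLines(bw, 1, np.pi / 180, 120)               # (rho, theta) candidates
    if lines is not None:
        thetas = lines[:, 0, 1]   # zebra stripes should share nearly the same theta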
Reif, Roberto; Baran, Utku; Wang, Ruikang K
2014-07-01
Optical coherence tomography (OCT) is a technique that allows for the three-dimensional (3D) imaging of small volumes of tissue (a few millimeters) with high resolution (∼10 μm). Optical microangiography (OMAG) is a method of processing OCT data that allows for the extraction of the tissue vasculature with capillary resolution from the OCT images. Cross-sectional B-frame OMAG images present the location of the patent blood vessels; however, the signal-to-noise ratio of these images can be affected by several factors, such as the quality of the OCT system and tissue motion artifacts. This background noise can appear in the en face projection view image. In this work we propose to develop a binary mask that can be applied to the cross-sectional B-frame OMAG images, which reduces the background noise while leaving the signal from the blood vessels intact. The mask is created using a naïve Bayes (NB) classification algorithm trained with a gold-standard image which is manually segmented by an expert. The masked OMAG images present better contrast for binarizing the image and quantifying the result without the influence of noise. The results are compared with a previously developed frequency rejection filter (FRF) method, which is applied to the en face projection view image. It is demonstrated that both the NB and FRF methods provide similar vessel length fractions. The advantage of the NB method is that the results are applicable in 3D and that its use is not limited to periodic motion artifacts.
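A sketch of training such a per-pixel naive Bayes mask with scikit-learn; the two features and the synthetic labels are stand-ins for the expert-segmented gold standard:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # hypothetical per-pixel features, e.g. (OMAG magnitude, local variance)
    X_train = np.random.rand(5000, 2)
    y_train = (X_train[:, 0] > 0.6).astype(int)    # stand-in for the manual segmentation

    nb = GaussianNB().fit(X_train, y_train)
    X_frame = np.random.rand(512 * 512, 2)         # features of one B-frame
    mask = nb.predict(X_frame).reshape(512, 512)   # 1 = vessel signal, 0 = background noise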
Properties of LEGUS Clusters Obtained with Different Massive-Star Evolutionary Tracks
NASA Astrophysics Data System (ADS)
Wofford, A.; Charlot, S.; Eldridge, J. J.
We compute spectral libraries for populations of coeval stars using state-of-the-art massive-star evolutionary tracks that account for different astrophysics, including rotation and close binarity. Our synthetic spectra account for stellar and nebular contributions. We use our models to obtain E(B - V), age, and mass for six clusters in the spiral galaxy NGC 1566, which have ages of < 50 Myr and masses of > 5 × 10^4 M⊙ according to standard models. NGC 1566 was observed from the NUV to the I-band as part of the imaging Treasury HST program LEGUS: Legacy Extragalactic UV Survey. We aim to establish i) if the models provide reasonable fits to the data, ii) how well the models and photometry are able to constrain the cluster properties, and iii) how different the properties obtained with different models are.
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
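A sketch of one competing global criterion (Otsu) applied to a reconstructed CMT volume, with porosity as a quick macroscopic check before the lattice Boltzmann run; the volume file is hypothetical:

    import numpy as np
    from skimage import filters

    vol = np.load("cmt_volume.npy")        # hypothetical reconstructed CMT stack
    t = filters.threshold_otsu(vol)        # one of several competing segmentation criteria
    pores = vol < t                        # pore space (dark) vs. solid grains
    porosity = float(pores.mean())         # input sanity check for the LB permeability run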
The role of Fizeau interferometry in planetary science
NASA Astrophysics Data System (ADS)
Conrad, Albert R.
2016-08-01
Historically, two types of interferometer have been used for the study of solar system objects: coaxial and Fizeau. While coaxial interferometers are well-suited to a wide range of galactic and extra-galactic science cases, solar system science cases are, in most cases, better carried out with Fizeau imagers. Targets of interest in our solar system are often bright and compact, and the science cases for these objects often call for a complete, or nearly complete, image at high angular resolution. For both methods, multiple images must be taken at varying baselines to reconstruct an image. However, with the Fizeau technique that number is far fewer than it is for the aperture synthesis method employed by coaxial interferometers. In our solar system, bodies rotate and their surfaces sometimes change over yearly, or even weekly, time scales. Thus, the ability to exploit the high angular resolution of an interferometer with only a handful of observations taken on a single night, as is the case for Fizeau interferometers, gives a key advantage to this technique. The aperture of the Large Binocular Telescope (LBT), two 8.4 m circular mirrors separated center-to-center by 14.4 meters, is optimal for supporting Fizeau interferometry. The first of two Fizeau imagers planned for LBT, the LBT Interferometer (LBTI), saw first fringes in 2010 and has proven to be a valuable tool for solar system studies. Recent studies of Jupiter's volcanic moon Io have yielded results that rely on the angular resolution provided by the full 23-meter baseline of LBT. Future studies of the aurora at Jupiter's poles and the shape and binarity of asteroids are planned. While many solar system studies can be carried out on-axis (i.e., using the target of interest as the beacon for both adaptive optics correction and fringe tracking), studies such as Io-in-eclipse, the full disk of Jupiter and Mars, and the binarity of Kuiper belt objects require off-axis observations (i.e., using one or more nearby guide-moons or stars for adaptive optics correction and fringe tracking). These studies can be plagued by anisoplanatism, or the cone effect. LINC-NIRVANA (LN), the first multi-conjugate adaptive optics (MCAO) system on an 8-meter-class telescope in the northern hemisphere, provides a solution to the ill effects of anisoplanatism. One of the LN ground layer wave front sensors was tested on LBT during 2014. Longer term, an upgrade planned for LN will establish its original role as the second LBT Fizeau imager. The full-disk study of several solar system bodies, most notably large and/or nearby bodies such as Jupiter and Mars, which span tens of arcseconds, would be best carried out with LN. We will review the past accomplishments of Fizeau interferometry with LBTI, present plans for using that instrument for future solar system studies, and, lastly, explore the unique solar system studies that require the LN MCAO system combined with Fizeau interferometry.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Henry, Todd J.; Jao, Wei-Chun; Subasavage, John; Riedel, Adric; Winters, Jennifer
2010-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is primarily focused on targets where precise astrophysical information is sorely lacking: white dwarfs, red dwarfs, and subdwarfs. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Riedel's and Winters' theses.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Henry, Todd J.; Jao, Wei-Chun; Subasavage, John; Riedel, Adric; Winters, Jennifer
2009-08-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is primarily focused on targets where precise astrophysical information is sorely lacking: white dwarfs, red dwarfs, and subdwarfs. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Riedel's and Winters' theses.
Automatic detection of malaria parasite in blood images using two parameters.
Kim, Jong-Dae; Nam, Kyeong-Min; Park, Chan-Young; Kim, Yu-Seop; Song, Hye-Jeong
2015-01-01
Malaria must be diagnosed quickly and accurately at the initial infection stage and treated early to be cured properly. The microscope-based malaria diagnosis method requires considerable labor and time from a skilled expert, and the diagnosis results vary greatly between individual diagnosticians. Therefore, to measure malaria parasite infection quickly and accurately, studies have been conducted on automated classification techniques using various parameters. In this study, by measuring classification performance according to changes in two parameters, we determined the parameter values that best distinguish normal from plasmodium-infected red blood cells. To reduce the stain deviation of the acquired images, a principal component analysis (PCA) grayscale conversion method was used, and as parameters we used the malaria-infected area and the threshold value used in binarization. The best-performing parameter values were determined by fixing the cell threshold at 128 and selecting the malaria threshold value (72) corresponding to the lowest error rate for detecting plasmodium-infected red blood cells.
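A minimal sketch of the PCA grayscale conversion and the two-threshold scheme follows. The (H, W, 3) RGB input, the rescaling step, and the assumption that target structures appear dark in the first principal component are all illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch: PCA grayscale conversion followed by the two reported
# thresholds (cell threshold 128, malaria threshold 72). The polarity
# assumption (dark targets in the PC1 image) is illustrative.
import numpy as np
from sklearn.decomposition import PCA

def pca_grayscale(rgb):
    X = rgb.reshape(-1, 3).astype(float)
    pc1 = PCA(n_components=1).fit_transform(X)         # project onto PC1
    g = pc1.reshape(rgb.shape[:2])
    return 255.0 * (g - g.min()) / (np.ptp(g) + 1e-9)  # rescale to 0..255

# gray = pca_grayscale(rgb)
# cells    = gray < 128   # candidate red blood cells
# infected = gray < 72    # candidate plasmodium-infected regions
```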
NASA Astrophysics Data System (ADS)
Laib dit Leksir, Y.; Mansour, M.; Moussaoui, A.
2018-03-01
Analysis and processing of databases obtained from infrared thermal inspections of electrical installations require the development of new tools that extract more information than visual inspection alone. Consequently, methods based on the capture of thermal images show great potential and are increasingly employed in this field. However, there is a need for effective techniques to analyse these databases in order to extract significant information relating to the state of the infrastructure. This paper presents a technique explaining how this approach can be implemented and proposes a system that can help to detect faults in thermal images of electrical installations. The proposed method classifies and identifies the region of interest (ROI). The identification is conducted using a support vector machine (SVM) algorithm. The aim here is to capture the faults present in electrical equipment during the inspection of machines using an A40 FLIR camera. Binarization techniques are then employed to select the region of interest. Finally, a comparative analysis of the misclassification errors obtained with the proposed method, fuzzy c-means, and Otsu thresholding is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokovinin, Andrei, E-mail: atokovinin@ctio.noao.edu
To improve the statistics of hierarchical multiplicity, secondary components of wide nearby binaries with solar-type primaries were surveyed at the SOAR telescope to evaluate the frequency of subsystems. Images of 17 faint secondaries were obtained with the SOAR Adaptive Module, which improved the seeing; one new 0.''2 binary was detected. For all targets, photometry in the g', i', z' bands is given. Another 46 secondaries were observed by speckle interferometry, resolving 7 close subsystems. Adding literature data, the binarity of 95 secondary components is evaluated. We found that the detection-corrected frequency of secondary subsystems with periods in the well-surveyed range from 10³ to 10⁵ days is 0.21 ± 0.06, the same as the normal frequency of such binaries among solar-type stars, 0.18. This indicates that wide binaries are unlikely to be produced by dynamical evolution of N-body systems, but are rather formed by fragmentation.
Finessing filter scarcity problem in face recognition via multi-fold filter convolution
NASA Astrophysics Data System (ADS)
Low, Cheng-Yaw; Teoh, Andrew Beng-Jin
2017-06-01
The deep convolutional neural networks for face recognition, from DeepFace to the recent FaceNet, demand a sufficiently large volume of filters for feature extraction, in addition to being deep. The shallow filter-bank approaches, e.g., the principal component analysis network (PCANet), binarized statistical image features (BSIF), and other analogous variants, suffer from a filter scarcity problem: not all available PCA and ICA filters are discriminative enough to abstract noise-free features. This paper extends our previous work on multi-fold filter convolution (ℳ-FFC), where the pre-learned PCA and ICA filter sets are exponentially diversified by ℳ folds to instantiate PCA, ICA, and PCA-ICA offspring. The experimental results show that the 2-FFC operation alleviates the filter scarcity problem. The 2-FFC descriptors are also shown to be superior to those of PCANet, BSIF, and other face descriptors, in terms of rank-1 identification rate (%).
Brain tumor classification of microscopy images using deep residual learning
NASA Astrophysics Data System (ADS)
Ishikawa, Yota; Washiya, Kiyotada; Aoki, Kota; Nagahashi, Hiroshi
2016-12-01
The incidence rate of brain tumors is about 1.4 in 10,000. In general, cytotechnologists take charge of cytologic diagnosis. However, the number of cytotechnologists who can diagnose brain tumors is not sufficient, because of the highly specialized skill required. Computer-aided diagnosis by computational image analysis may alleviate the shortage of experts and support objective pathological examinations. Our purpose is to support diagnosis from a microscopy image of brain cortex and to identify brain tumors by medical image processing. In this study, we analyze astrocytes, a type of glial cell of the central nervous system. It is not easy for an expert to discriminate brain tumors correctly, since the difference between astrocytes and low-grade astrocytoma (tumors formed from astrocytes) is very slight. In this study, we present a novel method to segment cell regions robustly using BING objectness estimation and to classify brain tumors using deep convolutional neural networks (CNNs) constructed by deep residual learning. BING is a fast object detection method, and we use a pretrained BING model to detect brain cells. After that, we apply a sequence of post-processing steps, such as Voronoi diagram computation, binarization, and the watershed transform, to obtain a fine segmentation. For classification using CNNs, standard data augmentation is applied to the brain cell database. Experimental results showed 98.5% accuracy of classification and 98.2% accuracy of segmentation.
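A minimal sketch of the binarization-plus-watershed post-processing appears below (omitting BING detection, Voronoi partitioning, and the CNN). The Otsu threshold, the dark-cell polarity, and the distance-transform marker choice are assumptions standing in for details the abstract does not specify.

```python
# Hedged sketch: Otsu binarization, distance transform, and marker-based
# watershed to split touching cells; all parameters are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_cells(gray):
    binary = gray < threshold_otsu(gray)           # assume cells are dark
    dist = ndi.distance_transform_edt(binary)
    coords = peak_local_max(dist, min_distance=5, labels=ndi.label(binary)[0])
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-dist, markers, mask=binary)  # labeled cell regions
```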
Larue, A E; Swider, P; Duru, P; Daviaud, D; Quintard, M; Davit, Y
2018-06-21
Optical imaging techniques for biofilm observation, like laser scanning microscopy, are not applicable when investigating biofilm formation in opaque porous media. X-ray micro-tomography (X-ray CMT) is a potential alternative, but it is limited by the similarity of the X-ray absorption coefficients of the biofilm and aqueous phases. To overcome this difficulty, barium sulphate was used in Davit et al. (2011) to enable high-resolution 3D imaging of biofilm via X-ray CMT. However, this approach lacks comparison with well-established imaging methods, which are known to capture the fine structures of biofilms, as well as uncertainty quantification. Here, we compare two-photon laser scanning microscopy (TPLSM) images of Pseudomonas aeruginosa biofilm grown in glass capillaries against X-ray CMT using an improved protocol in which barium sulphate is combined with low-gelling-temperature agarose to avoid sedimentation. Calibrated phantoms consisting of mono-dispersed fluorescent and X-ray absorbent beads were used to evaluate the uncertainty associated with our protocol, along with three different segmentation techniques, namely hysteresis, watershed, and region growing, to determine the bias relative to image binarization. Metrics such as volume, 3D surface area, and thickness were measured, and comparison of both imaging modalities shows that X-ray CMT of biofilm using our protocol yields an accuracy that is comparable to, and in certain respects even better than, TPLSM, even in a nonporous system that is largely favourable to TPLSM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimnyakov, D. A., E-mail: zimnykov@sgu.ru; Sadovoi, A. V.; Vilenskii, M. A.
2009-02-15
Image sequences of the surface of disordered layers of porous medium (paper) obtained under noncoherent and coherent illumination during capillary rise of a liquid are analyzed. As a result, principles that govern the critical behavior of the interface between liquid and gaseous phases during its pinning are established. By a cumulant analysis of speckle-modulated images of the surface and by the statistical analysis of binarized difference images of the surface under noncoherent illumination, it is shown that the macroscopic dynamics of the interface at the stage of pinning is mainly controlled by the power law dependence of the appearance rate of local instabilities (avalanches) of the interface on the critical parameter, whereas the growth dynamics of the local instabilities is controlled by the diffusion of a liquid in a layer and weakly depends on the critical parameter. A phenomenological model is proposed for the macroscopic dynamics of the phase interface for interpreting experimental data. The values of critical indices are determined that characterize the samples under test within this model. These values are compared with the results of numerical simulation for discrete models of directed percolation corresponding to the Kardar-Parisi-Zhang equation.
NASA Astrophysics Data System (ADS)
Martins, F.; Mahy, L.; Hervé, A.
2017-11-01
Context. A significant percentage of massive stars are found in multiple systems. The effect of binarity on stellar evolution is poorly constrained. In particular, the role of tides and mass transfer on surface chemical abundances is not constrained observationally. Aims: The aim of this study is to investigate the effect of binarity on the stellar properties and surface abundances of massive binaries. Methods: We performed a spectroscopic analysis of six Galactic massive binaries. We obtained the spectra of individual components via a spectral disentangling method and subsequently analyzed these spectra by means of atmosphere models. The stellar parameters and CNO surface abundances were determined. Results: Most of these six systems are comprised of main-sequence stars. Three systems are detached, two are in contact, and no information is available for the sixth system. For 11 out of the 12 stars studied, the surface abundances are only mildly affected by stellar evolution and mixing. The surface abundances are not different from those of single stars within the uncertainties. The secondary of XZ Cep is strongly chemically enriched. Considering previous determinations of surface abundances in massive binary systems suggests that the effect of tides on chemical mixing is limited, whereas the mass transfer and removal of outer layers of the mass donor leads to the appearance of chemically processed material at the surface, although this is not systematic. The evolutionary masses of the components of our six systems are on average 16.5% higher than the dynamical masses. Some systems seem to have reached synchronization, while others may still be in a transitory phase. Based on observations made with the SOPHIE spectrograph on the 1.93 m telescope at Observatoire de Haute-Provence (OHP, CNRS/AMU), France.
Binarity and Accretion in AGB Stars: HST/STIS Observations of UV Flickering in Y Gem
NASA Astrophysics Data System (ADS)
Sahai, R.; Sánchez Contreras, C.; Mangan, A. S.; Sanz-Forcada, J.; Muthumariappan, C.; Claussen, M. J.
2018-06-01
Binarity is believed to dramatically affect the history and geometry of mass loss in AGB and post-AGB stars, but observational evidence of binarity is sorely lacking. As part of a project to search for hot binary companions to cool AGB stars using the GALEX archive, we discovered a late-M star, Y Gem, to be a source of strong and variable UV and X-ray emission. Here we report UV spectroscopic observations of Y Gem obtained with the Hubble Space Telescope that show strong flickering in the UV continuum on timescales of ≲20 s, characteristic of an active accretion disk. Several UV lines with P-Cygni-type profiles from species such as Si IV and C IV are also observed, with emission and absorption features that are red- and blueshifted by velocities of ∼500 km s⁻¹ from the systemic velocity. Our model for these (and previous) observations is that material from the primary star is gravitationally captured by a companion, producing a hot accretion disk. The latter powers a fast outflow that produces blueshifted features due to the absorption of UV continuum emitted by the disk, whereas the redshifted emission features arise in heated infalling material from the primary. The outflow velocities support a previous inference by Sahai et al. that Y Gem's companion is a low-mass main-sequence star. Blackbody fitting of the UV continuum implies an accretion luminosity of about 13 L⊙ and thus a mass-accretion rate of >5 × 10⁻⁷ M⊙ yr⁻¹; we infer that Roche-lobe overflow is the most likely binary accretion mode for Y Gem.
Quantum Assisted Learning for Registration of MODIS Images
NASA Astrophysics Data System (ADS)
Pelissier, C.; Le Moigne, J.; Fekete, G.; Halem, M.
2017-12-01
The advent of the first large-scale quantum annealer by D-Wave has led to increased interest in quantum computing. However, the D-Wave quantum annealer is limited to either solving Quadratic Unconstrained Binary Optimization (QUBO) problems or using the ground state sampling of an Ising system that the machine can realize. These restrictions make it challenging to find algorithms that accelerate the computation of typical Earth Science applications. A major difficulty is that most applications have continuous real-valued parameters rather than binary ones. Here we present an exploratory study using ground state sampling to train artificial neural networks (ANNs) to carry out image registration of MODIS images. The key idea behind using the D-Wave to train networks is that the quantum chip behaves thermally like a Boltzmann machine (BM), and BMs are known to be successful at recognizing patterns in images. The ground state sampling of the D-Wave also depends on the dynamics of the adiabatic evolution and is subject to other non-thermal fluctuations, but the statistics are thought to be similar, and ANNs tend to be robust under fluctuations. In light of this, the D-Wave ground state sampling is used to define a Boltzmann-like generative model, which is investigated for registering MODIS images. Image intensities of MODIS images are transformed using a Discrete Cosine Transform and used to train a multi-layer network to learn how to align images to a reference image. The network consists of an initial sigmoid layer acting as a binary filter of the input, followed by strict binarization using Bernoulli sampling, whose output is fed into a Boltzmann machine. The output is then classified using a soft-max layer. Results are presented and discussed.
Study on key techniques for camera-based hydrological record image digitization
NASA Astrophysics Data System (ADS)
Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping
2015-10-01
With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine, and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated due to improper preservation or overuse. Further, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization, and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water-level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for intersection point localization is devised that avoids point-by-point processing with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water-level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.
Contributions of nanoscale roughness to anomalous colloid retention and stability behavior
USDA-ARS?s Scientific Manuscript database
All natural surfaces exhibit nanoscale roughness (NR) and chemical heterogeneity (CH) to some extent. Expressions were developed to determine the mean interaction energy between a colloid and a solid-water interface (SWI), as well as for colloid-colloid interactions, when both surfaces contain binar...
Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest
NASA Astrophysics Data System (ADS)
Feng, W.; Sui, H.; Chen, X.
2018-04-01
Studies based on object-based image analysis (OBIA), which represents a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The predictive performance and stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and ensemble forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search for interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and these regions are subjected to fuzzy c-means (FCM) clustering to obtain a pixel-level pre-classification result, which serves as a prerequisite for the superpixel-based analysis. Third, on the basis of the optimal segmentation and the pixel-level pre-classification results, the change possibility of each super-pixel is calculated, and the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF to these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy, and confirm the feasibility and effectiveness of the proposed approach.
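The final RF step can be sketched as follows. The feature matrix layout, the automatic sample indices, the forest size, and the 0.5 decision cut-off are all assumptions for illustration; only the general RF usage mirrors the description above.

```python
# Hedged sketch: classify superpixels as changed/unchanged with a random
# forest trained on automatically selected samples. Assumed input:
# features of shape (n_superpixels, n_features), spectral + Gabor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_change_map(features, train_idx, train_labels):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(features[train_idx], train_labels)       # 1 = changed superpixel
    return rf.predict_proba(features)[:, 1] > 0.5   # boolean change map
```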
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse-brain coronal sections can be acquired at 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, providing three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Evaluations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations of cellular density distribution in different layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
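Steps ii) and iii) might look like the minimal sketch below. The Otsu threshold, the bright-cell polarity, and the minimum-size filter are placeholder assumptions; the published pipeline is more elaborate.

```python
# Hedged sketch: binarize a preprocessed section and extract cell
# centroids, discarding specks below an assumed minimum size.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def cell_centroids(img, min_size=20):
    binary = img > threshold_otsu(img)      # assume labeled cells are bright
    labels, n = ndi.label(binary)
    areas = ndi.sum(binary, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(areas >= min_size) + 1
    return np.array(ndi.center_of_mass(binary, labels, index=list(keep)))
```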
Robust estimation of simulated urinary volume from camera images under bathroom illumination.
Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji
2016-08-01
The general uroflowmetry method involves a risk of nosocomial infection and requires time and effort for the recording. Medical institutions, therefore, need to measure voided volume simply and hygienically. A multiple-cylinder model that can estimate the fluid flow rate from images photographed with a camera was proposed in an earlier study. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple-cylinder model. However, large amounts of noise are generated when extracting the liquid region because of illumination variations when performing measurements in the bathroom, so the estimation error becomes very large. In other words, the specifications of the previous study's camera setup regarding the shutter type and the frame rate were too strict. In this study, we relax these specifications to achieve flow rate estimation using a general-purpose camera. In order to determine an appropriate approximating curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of our proposed method for flow rate estimation.
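The row-wise background subtraction and the RANSAC curve approximation can be sketched as below. The polynomial degree, the per-row threshold rule, and the names are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: per-row background subtraction to extract candidate
# liquid pixels, then a RANSAC-fitted polynomial through them.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import RANSACRegressor

def fit_stream(frame, background, degree=2):
    diff = np.abs(frame.astype(float) - background.astype(float))
    thresh = diff.mean(axis=1, keepdims=True) + 2 * diff.std(axis=1, keepdims=True)
    rows, cols = np.nonzero(diff > thresh)           # candidate liquid pixels
    model = make_pipeline(PolynomialFeatures(degree), RANSACRegressor())
    model.fit(rows.reshape(-1, 1), cols)             # column position vs. row
    return model                                     # model.predict(r) -> c
```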
A new phase encoding approach for a compact head-up display
NASA Astrophysics Data System (ADS)
Suszek, Jaroslaw; Makowski, Michal; Sypek, Maciej; Siemion, Andrzej; Kolodziejczyk, Andrzej; Bartosz, Andrzej
2008-12-01
The possibility of encoding multiple asymmetric symbols into a single thin binary Fourier hologram would have practical application in the design of simple translucent holographic head-up displays. A Fourier hologram displays the encoded images at infinity, which enables observation without time-consuming eye accommodation. Presenting a set of the most crucial signs to a driver in this way is desirable, especially for older people with various eyesight disabilities. In this paper a method of holographic design is presented that combines spatial segmentation with carrier frequencies. It allows multiple reconstructed images to be achieved, selectable by the angle of the incident laser beam. In order to encode several binary symbols into a single Fourier hologram, a chessboard-shaped segmentation function is used. An optimized sequence of phase encoding steps and a final direct phase binarization enable the recording of asymmetric symbols into a binary hologram. The theoretical analysis is presented, verified numerically, and confirmed in an optical experiment. We suggest and describe a practical and highly useful application of such holograms in an inexpensive HUD device for use in the automotive industry. We present two alternative propositions of car viewing setups.
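The final direct phase binarization step amounts to quantizing the hologram phase to two levels. The sketch below assumes the usual 0/π convention for binary phase holograms; the paper's exact quantization rule is not given in the abstract.

```python
# Hedged sketch: direct phase binarization of a complex Fourier hologram.
import numpy as np

def binarize_phase(hologram):
    phase = np.angle(hologram)                # phase in (-pi, pi]
    return np.where(phase >= 0.0, np.pi, 0.0) # two-level (0, pi) hologram
```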
Magnetic B stars observed with BRITE: Spots, magnetospheres, binarity, and pulsations
NASA Astrophysics Data System (ADS)
Wade, G. A.; Cohen, D. H.; Fletcher, C.; Handler, G.; Huang, L.; Krticka, J.; Neiner, C.; Niemczura, E.; Pablo, H.; Paunzen, E.; Petit, V.; Pigulski, A.; Rivinius, Th.; Rowe, J.; Rybicka, M.; Townsend, R.; Shultz, M.; Silvester, J.; Sikora, J.
2017-09-01
Magnetic B-type stars exhibit photometric variability due to diverse causes, and consequently on a variety of timescales. In this paper we describe interpretation of BRITE photometry and related ground-based observations of four magnetic B-type systems: ɛ Lupi, τ Sco, a Cen and ɛ CMa.
A simple and robust method for artifacts correction on X-ray microtomography images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke
2017-04-01
X-ray microtomography images of rock material often suffer from several kinds of distortion due to different causes, such as X-ray attenuation, beam hardening, and the irregular distribution of liquid/solid phases. Further distortions can arise from subsequent image processing and from stitching images from different measurements. Beam hardening is a well-known and well-studied distortion, which is relatively easy to describe, fit, and correct using a number of equations. However, this is not the case for other grey-scale intensity distortions. Shading caused by the irregular distribution of liquid phases, incorrect scanner operation or parameter choices, as well as numerous artefacts from mathematical reconstruction from projections, including stitching of separate scans, cannot be described by a single mathematical model. To correct grey-scale intensities on large 3D images we developed a software package in which the traditional method for removing beam hardening [1] has been modified in order to find the center of distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one base line that characterizes the natural distribution of grey values along the image. All of these curves are set manually by the operator. We have tested our approaches on different X-ray microtomography images of porous media. The arbitrary correction removes all principal distortions. After correction, the images were binarized and pore networks were subsequently extracted. An equal distribution of pore-network elements along the image was the criterion used to verify the proposed technique for correcting grey-scale intensities. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp.187-191.
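For reference, an operator-set Bezier curve of the kind used to model the grey-value distortion profile can be evaluated as below. A cubic form with four control points is assumed; the package's actual curve order and parametrization are not specified in the abstract.

```python
# Hedged sketch: evaluate a cubic Bezier curve from four operator-set
# control points P, e.g. to model a grey-value distortion profile.
import numpy as np

def bezier_cubic(P, t):
    """P: (4, 2) array of control points; t: 1D parameter values in [0, 1]."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])
```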
Document image retrieval through word shape coding.
Lu, Shijian; Li, Linlin; Tan, Chew Lim
2008-11-01
This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
Different binarization processes validated against manual counts of fluorescent bacterial cells.
Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W
2016-09-01
State-of-the-art software methods (such as fixed-value or statistical approaches) for creating a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological significance approach, we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process, the threshold and background subtraction values were incremented until the number of particles smaller than a typical bacterial cell was less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification, and staining procedure, as well as the exposure time. The biological significance approach shows that automatic counting can be performed with the same accuracy, precision, and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)), and substrates (polycarbonate filter or glass).
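The iterative stopping criterion can be sketched as follows. The area limit, step size, starting threshold, and bright-object assumption are placeholders; the published procedure also increments the background subtraction value, which is omitted here for brevity.

```python
# Hedged sketch: raise the threshold until particles smaller than a
# typical bacterial cell are outnumbered by cell-sized particles.
import numpy as np
from scipy import ndimage as ndi

def significance_threshold(img, min_cell_area=30, step=2):
    t = int(img.mean())
    while t < 255:
        labels, n = ndi.label(img > t)
        areas = ndi.sum(img > t, labels, index=np.arange(1, n + 1))
        debris = np.count_nonzero(areas < min_cell_area)
        cells = np.count_nonzero(areas >= min_cell_area)
        if debris < cells:
            return t        # biologically significant threshold
        t += step
    return t
```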
NASA Astrophysics Data System (ADS)
Menesatti, P.; D'Andrea, S.; Socciarelli, S.
2007-09-01
The work focused on the application of an image analysis technique to determine corn leaf morphology as an objective indicator of the growth performance of corn (Zea mays) resulting from fertilization with urban residues. The analyses were related to six fertilization plots: original soil; chemical fertilizer (160 and 200 kg ha⁻¹ of nitrogen); organic fertilizer (32 t ha⁻¹); and two different doses of urban residues (sewage sludges) (7.5 and 22.5 t ha⁻¹, this last amount corresponding to the maximum level permitted by Italian law over three years of fertilization). The tests were realized in fully randomized plots, with two to three repetitions for each treatment. Measurements were performed for the first year of the trials in the period close to harvest (Rome, Italy - July 2000). Four plants from each plot were harvested and stripped of all leaves, whose RGB images were acquired by a digital photo camera (Kodak Ltd). Image analysis was performed by first separating the RGB channels into single monochromatic 8-bit distributions; the blue-channel images, the most informative, were then subjected to enhancement, low-pass filtering to reduce noise, binarization thresholding (based on a statistical parameter derived from the Gaussian grey-level distribution), binary morphology, and object measurement. For each single leaf the length, width, and area were measured. The test results indicated a positive and significant relation between crop growth (greater leaf area, length, and width) and the different doses of urban residues (sewage sludges).
Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan
2011-01-01
Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches, and image processing techniques. The cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in the literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient-based approaches). Validation results indicated that the BEAS method produced alignment scores statistically comparable to those of the manual method (coefficient of determination R² = 0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
NASA Astrophysics Data System (ADS)
Ziegler, Carl; Law, Nicholas M.; Morton, Tim; Baranec, Christoph; Riddle, Reed; Atkinson, Dani; Baker, Anna; Roberts, Sarah; Ciardi, David R.
2017-02-01
The Robo-AO Kepler Planetary Candidate Survey is observing every Kepler planet candidate host star with laser adaptive optics imaging to search for blended nearby stars, which may be physically associated companions and/or responsible for transit false positives. In this paper, we present the results of our search for stars near 1629 Kepler planet candidate hosts. With survey sensitivity to objects as close as ∼0.″15 and magnitude differences Δm ≤ 6, we find 223 stars in the vicinity of 206 target KOIs; 209 of these nearby stars have not been previously imaged in high resolution. We measure an overall nearby-star probability for Kepler planet candidates of 12.6% ± 0.9% at separations between 0.″15 and 4.″0. Particularly interesting KOI systems are discussed, including 26 stars with detected companions that host rocky, habitable-zone candidates and five new candidate planet-hosting quadruple star systems. We explore the broad correlations between planetary systems and stellar binarity, using the combined data set of Baranec et al. and this paper. Our previous 2σ result of a low detected nearby-star fraction among KOIs hosting close-in giant planets is less apparent in this larger data set. We also find a significant correlation between the detected nearby-star fraction and KOI number, suggesting possible variation between early and late Kepler data releases.
Crossing Boundaries in Literacy Research: Challenges and Opportunities
ERIC Educational Resources Information Center
Almasi, Janice F.
2016-01-01
The concept of boundaries, as borders that separate two entities, can be problematic in that those who "belong" within a boundary in a given social world or entity are separated from those who do not, creating binaries. The field of literacy research has had a long history of binarism in which instructional methods, epistemological…
Finding Queer Allies: The Impact of Ally Training and Safe Zone Stickers on Campus Climate
ERIC Educational Resources Information Center
Ballard, Stephanie L.; Bartle, Eli; Masequesmay, Gina
2008-01-01
To counter heterosexism, homophobia, and gender binarism in higher education, "safe zone" or "ally" programs are efforts by American universities to create a welcoming environment for lesbian, gay, bisexual, transgender, queer or questioning (LGBTQ) members of the campus community. This study describes perceptions of campus…
Numerical stability of the error diffusion concept
NASA Astrophysics Data System (ADS)
Weissbach, Severin; Wyrowski, Frank
1992-10-01
The error diffusion algorithm is an easily implementable means of handling nonlinearities in signal processing, e.g. in picture binarization and in the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
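As a concrete instance of the concept, the classic Floyd-Steinberg weights (7, 3, 5, 1)/16 diffuse each pixel's quantization error to its unprocessed neighbors; the stability question above concerns how freely such weight sets may be altered. A minimal sketch, using these classic weights as an assumed example:

```python
# Hedged sketch: error diffusion binarization with the classic
# Floyd-Steinberg weights; other weight choices affect stability.
import numpy as np

def error_diffusion(img):
    f = img.astype(float) / 255.0
    out = np.zeros_like(f)
    H, W = f.shape
    for y in range(H):
        for x in range(W):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]           # quantization error
            if x + 1 < W:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```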
David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García
2017-01-06
Pressure ulcers have become a subject of study in recent years due to their high treatment costs and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the groups principally affected by this condition. Diagnosis and treatment of these injuries by medical personnel usually takes weeks or even months. Using non-invasive techniques, such as image processing, it is possible to analyze ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes, using synthetic frequencies obtained from the grayscale value available in each pixel of the image. These synthetic frequencies are calculated using the model of energy density over an electric field to describe a relation between a constant density and the image amplitude in a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. The decomposed image is then binarized by applying Otsu's threshold, which yields the contours that describe the contrast variations. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database of 51 pressure ulcer images provided by the Centre IGURCO. Segmenting these pressure ulcer images can aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, and the segments obtained using the methodology were compared with the real segments. The proposed technique is compared with two benchmark algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds. The methodology presents better segmentation results than the benchmark algorithms, using less computational time and without the need for an initial condition.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Raghavan, Deepak
2008-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is an effort to address both their positive and negative aspects, through speckle interferometric observations, targeting ~1200 systems where useful information can be obtained with only a single additional observation. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Raghavan's Ph.D. thesis, which is a comprehensive survey aimed at determining the multiplicity fraction among solar-type stars.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Raghavan, Deepak
2007-08-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is an effort to address both their positive and negative aspects, through speckle interferometric observations, targeting ~1200 systems where useful information can be obtained with only a single additional observation. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Raghavan's Ph.D. thesis, which is a comprehensive survey aimed at determining the multiplicity fraction among solar-type stars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jatmiko, A. T. P.; Puannandra, G. P.; Hapsari, R. D.
Lunar Occultation (LO) is an event in which the limb of the Moon passes over particular heavenly bodies such as stars, asteroids, or planets. In other words, during the event, stars, asteroids, and planets are occulted by the Moon. When occulted objects contact the lunar limb, there will be diffraction fringes that can be measured photometrically until the signal vanishes into noise. This event gives us valuable information about binarity (of stars) and/or angular diameter estimates (of stars, planets, asteroids) at milliarcsecond resolution, by fitting with the theoretical LO pattern. CCDs are common for LO observation because of their fast read-out, and have recently been developed for sub-meter-class telescopes. In this paper, our LO observation attempt of μ Sgr and its progress report are presented. The observation was conducted on July 30th, 2012 at Bosscha Observatory, Indonesia, using a 45 cm f/12 GOTO telescope combined with an ST-9 XE CCD camera and a Bessel B filter. We used the drift-scan method to obtain the light curve of the star as it was disappearing behind the Moon's dark limb. Our goal is to detect the binarity (or multiplicity) of this particular object.
A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network
Wang, Zhen; Zhuang, Yuan; Yang, Jun; Zhang, Hengfeng; Dong, Wei; Wang, Min; Hua, Luchi; Liu, Bo; Shi, Longxing
2018-01-01
Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate GPS code/Doppler from the correlation peak. Different from MAX/TC, a multi-layer binarized convolutional neural network (BCNN) is proposed in this article to recognize the GPS acquisition correlation envelope. The proposed method is a double dwell acquisition, in which a short integration is adopted in the first dwell and a long integration is applied in the second one. To reduce the parameter search space, BCNN detects the possible envelope containing the auto-correlation peak in the first dwell, compressing the initial search space to 1/1023. Although a long integration is used in the second dwell, the acquisition computation overhead is still low due to the compressed search space. Overall, the total computation overhead of the proposed method is only 1/5 that of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves a 2 dB improvement when compared with MAX/TC under the same specification. PMID:29747373
Automatic lumen segmentation in IVOCT images using binary morphological reconstruction
2013-01-01
Background Atherosclerosis causes millions of deaths annually and yields billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies, and interventions can be provided. Since it is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and broadening uses. Method An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu's threshold; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards made by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions In conclusion, by segmenting a number of IVOCT images with various features, the proposed technique showed itself to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation. PMID:23937790
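The final stage can be sketched as below: rebuilding the lumen as a single connected binary object by morphological reconstruction from a seed. The seed placement (e.g., at the catheter center) and the closing radius are assumptions; the published method defines these details in full.

```python
# Hedged sketch: rebuild the lumen as one connected binary object by
# morphological reconstruction from a seed point inside the lumen.
import numpy as np
from skimage.morphology import reconstruction, binary_closing, disk

def lumen_object(tissue_binary, seed_rc):
    candidate = binary_closing(~tissue_binary, disk(3))  # non-tissue space
    marker = np.zeros_like(candidate)
    marker[seed_rc] = candidate[seed_rc]                 # seed must lie in mask
    rebuilt = reconstruction(marker.astype(np.uint8),
                             candidate.astype(np.uint8), method='dilation')
    return rebuilt.astype(bool)                          # connected lumen region
```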
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latief, Fourier Dzar Eljabbar, E-mail: fourier@fi.itb.ac.id; Dewi, Dyah Ekashanti Octorina; Shari, Mohd Aliff Bin Mohd
Micro Computed Tomography (μCT) has been widely used to perform micrometer-scale imaging of specimens, bone biopsies, and small animals for the study of porous or cavity-containing objects. One of its favored applications is assessing the structural properties of bone. In this research, we perform a pilot study to visualize and characterize the bone structure of a chicken thigh bone, as well as to delineate its cortical and trabecular bone regions. We utilized an in-vitro μCT scanner (Skyscan 1173) to acquire three-dimensional image data of a chicken thigh bone. The bone was scanned using an X-ray voltage of 45 kV and a current of 150 μA. The reconstructed images have a spatial resolution of 142.50 μm/pixel. Using image processing and analysis, i.e., segmentation by thresholding the gray values (which represent the pseudo-density) and binarizing the images, we were able to visualize each part of the bone, i.e., the cortical and trabecular regions. The total volume of the bone is 4663.63 mm³, and the surface area of the bone is 7913.42 mm². The volume of the cortical region is approximately 1988.62 mm³, which is nearly 42.64% of the total bone volume. This pilot study has confirmed that μCT is capable of quantifying 3D bone structural properties and defining its regions separately. For further development, these results can be improved for understanding the pathophysiology of bone abnormality, testing the efficacy of pharmaceutical intervention, or estimating bone biomechanical properties.
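For reference, volume figures of this kind follow from counting foreground voxels in the binarized stack at the stated 142.50 μm pitch. The sketch below assumes isotropic voxels, which the stated per-pixel resolution suggests but does not guarantee.

```python
# Hedged sketch: bone volume from a binarized muCT stack, assuming
# isotropic 142.50 micrometer voxels from the stated resolution.
import numpy as np

def bone_volume_mm3(binary_stack, pitch_um=142.50):
    voxel_mm3 = (pitch_um / 1000.0) ** 3          # one voxel in mm^3
    return np.count_nonzero(binary_stack) * voxel_mm3
```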
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai
2015-01-01
Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such a BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD) for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic Otsu thresholding of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions of 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis.
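The box-counting estimate itself is standard: cover the binarized image with boxes of decreasing edge length s, count the occupied boxes N(s), and take the FD as the slope of log N(s) against log(1/s). A minimal sketch, assuming a non-empty binary image:

```python
# Hedged sketch: box-counting fractal dimension of a 2D binary image.
import numpy as np

def box_counting_fd(binary):
    n = min(binary.shape)
    sizes = 2 ** np.arange(1, int(np.log2(n)))          # box edge lengths
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope   # FD estimate; assumes every scale covers some foreground
```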
THE PROPERTIES OF DYNAMICALLY EJECTED RUNAWAY AND HYPER-RUNAWAY STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perets, Hagai B.; Subr, Ladislav
2012-06-01
Runaway stars are stars observed to have large peculiar velocities. Two mechanisms are thought to contribute to the ejection of runaway stars, both of which involve binarity (or higher multiplicity). In the binary supernova scenario, a runaway star receives its velocity when its massive binary companion explodes as a supernova (SN). In the alternative dynamical ejection scenario, runaway stars are formed through gravitational interactions between stars and binaries in dense, compact clusters or cluster cores. Here we study the ejection scenario. We make use of extensive N-body simulations of massive clusters, as well as analytic arguments, in order to characterize the expected ejection velocity distribution of runaway stars. We find that the ejection velocity distribution of the fastest runaways (v ≳ 80 km s^-1) depends on the binary distribution in the cluster, consistent with our analytic toy model, whereas the distribution of lower velocity runaways appears independent of the binaries' properties. For a realistic log constant distribution of binary separations, we find the velocity distribution to follow a simple power law: Γ(v) ∝ v^(-8/3) for the high-velocity runaways and v^(-3/2) for the low-velocity ones. We calculate the total expected ejection rates of runaway stars from our simulated massive clusters and explore their mass function and their binarity. The mass function of runaway stars is biased toward high masses and strongly depends on their velocity. The binarity of runaways is a decreasing function of their ejection velocity, with no binaries expected to be ejected with v > 150 km s^-1. We also find that hyper-runaways with velocities of hundreds of km s^-1 can be dynamically ejected from stellar clusters, but only at very low rates, which cannot account for a significant fraction of the observed population of hyper-velocity stars in the Galactic halo.
Warped document image correction method based on heterogeneous registration strategies
NASA Astrophysics Data System (ADS)
Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan
2013-03-01
With the popularity of digital cameras and the growing demand for digitized document images, using digital cameras to digitize documents has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected in one image. These two feature points are then registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaiced, and the best mosaiced image is selected according to OCR results. In the best mosaiced image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the problem of warped document image correction effectively.
A super resolution framework for low resolution document image OCR
NASA Astrophysics Data System (ADS)
Ma, Di; Agam, Gady
2013-01-01
Optical character recognition is widely used for converting document images into digital media. Existing OCR algorithms and tools produce good results from high-resolution, good-quality document images. In this paper, we propose a machine-learning-based super resolution framework for low resolution document image OCR. Two main techniques are used in our proposed approach: a document page segmentation algorithm and a modified K-means clustering algorithm. Using this approach, by exploiting coherence in the document, we reconstruct a higher resolution image from a low resolution document image and improve OCR results. Experimental results show substantial gains on low resolution documents such as the ones captured from video.
Selecta: The Journal of the Pacific Northwest Council for Languages, 1999.
ERIC Educational Resources Information Center
Nickisch, Craig W., Ed.
1999-01-01
This issue contains three articles, one each in the area of classical, French, and German literature respectively: "Treatments of 'Furor' and 'Ira' and the End of Vergil's 'Aeneid'" (James M. Scott); "Ambiguity and Binarism in Georges Feydeau's 'La Puce a l'oreille'" (James Mills); and, in German, "Die Fahne hoch! Das Horst-Wessel Lied als…
Radial Velocities and Binarity of Southern SIM Grid Stars
2015-01-01
the range 0.005–0.008M of companion mass, thus, these data should be sensitive to super-Jupiters and brown dwarf companions in 1-yr orbits. The absence...freedom. Hence, these results do not preclude the existence of a limited number (∼20) of super-Jupiter planets or brown dwarf companions in our
Change Detection via Selective Guided Contrasting Filters
NASA Astrophysics Data System (ADS)
Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.
2017-05-01
A change detection scheme based on guided contrasting was previously proposed. A guided contrasting filter takes two images (test and sample) as input and forms as output a filtered version of the test image. Such a filter preserves the similar details and smooths the non-similar details of the test image with respect to the sample image. Due to this, the difference between the test image and its filtered version (the difference map) can serve as a basis for robust change detection. Guided contrasting is performed in two steps: in the first step, a smoothing operator (SO) is applied to eliminate test image details; in the second step, all matched details are restored with local contrast proportional to the value of a local similarity coefficient (LSC). The original guided contrasting filter was based on local average smoothing as the SO and local linear correlation as the LSC. In this paper we propose and implement a new set of selective guided contrasting filters based on different combinations of various SOs and thresholded LSCs. Linear average and Gaussian smoothing, nonlinear median filtering, and morphological opening and closing are considered as SOs. The local linear correlation coefficient, the morphological correlation coefficient (MCC), mutual information, the mean square MCC, and geometrical correlation coefficients are applied as LSCs. Thresholding the LSC allows operating with non-normalized LSCs and enhances the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These different guided contrasting filters are tested as part of the previously proposed change detection pipeline, which contains the following stages: guided contrasting filtering on an image pyramid, calculation of the difference map, binarization, extraction of change proposals, and testing of change proposals using the local MCC. Experiments on real and simulated image bases demonstrate the applicability of all proposed selective guided contrasting filters. All implemented filters are robust to weak geometrical discrepancies between the compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection results.
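A minimal 2-D sketch of one selective filter from this family, assuming local mean smoothing as the SO and a thresholded local linear correlation as the LSC; the window size and threshold are illustrative, not values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def selective_guided_contrasting(test, sample, win=7, thr=0.5):
    """Smooth the test image, then restore (all-or-nothing) only the details
    whose local correlation with the sample image exceeds the threshold."""
    test, sample = test.astype(float), sample.astype(float)
    mt = uniform_filter(test, win)                      # smoothing operator (SO)
    ms = uniform_filter(sample, win)
    cov = uniform_filter(test * sample, win) - mt * ms
    vt = uniform_filter(test ** 2, win) - mt ** 2
    vs = uniform_filter(sample ** 2, win) - ms ** 2
    lsc = cov / np.sqrt(np.clip(vt * vs, 1e-12, None))  # local similarity coefficient
    filtered = np.where(lsc > thr, test, mt)            # thresholded detail recovery
    return filtered, np.abs(test - filtered)            # filtered image, difference map
```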
Multimodal image registration based on binary gradient angle descriptor.
Jiang, Dongsheng; Shi, Yonghong; Yao, Demin; Fan, Yifeng; Wang, Manning; Song, Zhijian
2017-12-01
Multimodal image registration plays an important role in image-guided interventions/therapy and atlas building, and it is still a challenging task due to the complex intensity variations in different modalities. The paper addresses the problem and proposes a simple, compact, fast and generally applicable modality-independent binary gradient angle descriptor (BGA) based on the rationale of gradient orientation alignment. The BGA can be easily calculated at each voxel by coding the quadrant in which a local gradient vector falls, and it has an extremely low computational complexity, requiring only three convolutions, two multiplication operations and two comparison operations. Meanwhile, the binarized encoding of the gradient orientation makes the BGA more resistant to image degradations compared with conventional gradient orientation methods. The BGA can extract similar feature descriptors for different modalities and enable the use of simple similarity measures, which makes it applicable within a wide range of optimization frameworks. The results for pairwise multimodal and monomodal registrations between various images (T1, T2, PD, T1c, Flair) consistently show that the BGA significantly outperforms localized mutual information. The experimental results also confirm that the BGA can be a reliable alternative to the sum of absolute difference in monomodal image registration. The BGA can also achieve an accuracy of [Formula: see text], similar to that of the SSC, for the deformable registration of inhale and exhale CT scans. Specifically, for the highly challenging deformable registration of preoperative MRI and 3D intraoperative ultrasound images, the BGA achieves a similar registration accuracy of [Formula: see text] compared with state-of-the-art approaches, with a computation time of 18.3 s per case. The BGA improves the registration performance in terms of both accuracy and time efficiency. With further acceleration, the framework has the potential for application in time-sensitive clinical environments, such as for preoperative MRI and intraoperative US image registration for image-guided intervention.
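A simplified 2-D reading of the quadrant-coding idea (the BGA itself is defined per voxel in 3-D; the Sobel kernels and bit layout here are assumptions made for illustration):

```python
import numpy as np
from scipy.ndimage import sobel

def bga_2d(image):
    """2-bit code per pixel: the quadrant in which the local gradient vector falls."""
    gx = sobel(image.astype(float), axis=1)   # one derivative convolution per axis
    gy = sobel(image.astype(float), axis=0)
    return (gx > 0).astype(np.uint8) | ((gy > 0).astype(np.uint8) << 1)

def bga_similarity(a, b):
    """Fraction of positions whose binary gradient codes agree; a simple
    similarity measure usable inside a registration optimizer."""
    return float((bga_2d(a) == bga_2d(b)).mean())
```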
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.
Imaging Survey of Subsystems in Secondary Components to Nearby Southern Dwarfs
NASA Astrophysics Data System (ADS)
Tokovinin, Andrei
2014-10-01
To improve the statistics of hierarchical multiplicity, secondary components of wide nearby binaries with solar-type primaries were surveyed at the SOAR telescope for evaluating the frequency of subsystems. Images of 17 faint secondaries were obtained with the SOAR Adaptive Module that improved the seeing; one new 0.''2 binary was detected. For all targets, photometry in the g', i', z' bands is given. Another 46 secondaries were observed by speckle interferometry, resolving 7 close subsystems. Adding literature data, the binarity of 95 secondary components is evaluated. We found that the detection-corrected frequency of secondary subsystems with periods in the well-surveyed range from 103 to 105 days is 0.21 ± 0.06—same as the normal frequency of such binaries among solar-type stars, 0.18. This indicates that wide binaries are unlikely to be produced by dynamical evolution of N-body systems, but are rather formed by fragmentation. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory, the University of North Carolina at Chapel Hill, and Michigan State University.
Interactive degraded document enhancement and ground truth generation
NASA Astrophysics Data System (ADS)
Bal, G.; Agam, G.; Frieder, O.; Frieder, G.
2008-01-01
Degraded documents are frequently obtained in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system can process document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurement of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.
Segmentation of corneal endothelium images using a U-Net-based convolutional neural network.
Fabijańska, Anna
2018-04-18
Diagnostic information regarding the health status of the corneal endothelium may be obtained by analyzing the size and the shape of the endothelial cells in specular microscopy images. Prior to the analysis, the endothelial cells need to be extracted from the image. Up to today, this has been performed manually or semi-automatically. Several approaches to automatic segmentation of endothelial cells exist; however, none of them is perfect. Therefore this paper proposes to perform cell segmentation using a U-Net-based convolutional neural network. Particularly, the network is trained to discriminate pixels located at the borders between cells. The edge probability map output by the network is next binarized and skeletonized in order to obtain one-pixel-wide edges. The proposed solution was tested on a dataset consisting of 30 corneal endothelial images presenting cells of different sizes, achieving an AUROC level of 0.92. The resulting DICE is on average equal to 0.86, which is a good result given the thickness of the compared edges. The corresponding mean absolute percentage error of cell number is at the level of 4.5%, which confirms the high accuracy of the proposed approach. The resulting cell edges are well aligned to the ground truths and require a limited number of manual corrections. This also results in accurate values of the cell morphometric parameters. The corresponding errors range from 5.2% for endothelial cell density, through 6.2% for cell hexagonality, to 11.93% for the coefficient of variation of the cell size. Copyright © 2018 Elsevier B.V. All rights reserved.
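The post-processing described reduces to two standard operations; a minimal sketch follows (the abstract does not state the binarization rule, so Otsu is an assumption):

```python
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def edges_from_probability_map(prob):
    """Binarize the network's edge-probability map, then thin the result
    to one-pixel-wide cell borders."""
    binary = prob > threshold_otsu(prob)   # assumed thresholding rule
    return skeletonize(binary)
```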
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Qianqian
2008-12-01
When a laser ranger is transported or used in field operations, the transmitting axis, receiving axis, and aiming axis may not remain parallel. Nonparallelism of the three light axes degrades the range-measuring ability or prevents the laser ranger from being operated exactly, so testing and adjusting the three-light-axis parallelity during the production and maintenance of laser rangers is important to ensure reliable use. After comparing some common measurement methods for three-light-axis parallelity, the paper proposes a new measurement method using digital image processing. A large-aperture off-axis paraboloid reflector is used to capture the images of the laser spot and the white-light cross line, and the images are then processed on the LabVIEW platform. The center of the white-light cross line is obtained by the matching arithmetic in a LabVIEW DLL. The center of the laser spot is obtained by grayscale transformation, binarization, and area filtering in turn. The software system can set the CCD, detect the off-axis paraboloid reflector, measure the parallelity of the transmitting axis and aiming axis, and control the attenuation device. The hardware system uses the SAA7111A, a programmable video decoding chip, to perform A/D conversion; a FIFO (first-in first-out) is used as a buffer, and a USB bus transmits data to the PC. The three-light-axis parallelity is determined from the position bias between the centers. A device based on this method is already in use. Its application shows that the method offers high precision, speed, and automation.
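A sketch of the laser-spot localization chain (grayscale transform, binarization, area filter, centroid); the Otsu threshold and minimum-area value are assumptions, since the abstract does not specify them.

```python
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def laser_spot_center(gray, min_area=50):
    """Centroid of the laser spot after binarization and area filtering."""
    binary = gray > threshold_otsu(gray)                  # binarization
    regions = [r for r in regionprops(label(binary))
               if r.area >= min_area]                     # area filter
    if not regions:
        return None
    return max(regions, key=lambda r: r.area).centroid    # (row, col) of largest blob
```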
"The Boy in the Dress": Queering Mantle of the Expert
ERIC Educational Resources Information Center
Terret, Liselle
2013-01-01
In this paper I offer a queer analysis of several key moments during a Mantle of the Expert (MoE) project that resulted in Year 5 children creating performances and engaging with heightened versions of gendered femininity in their primary school. I will refer to theoretical notions of transvestism as a means of challenging the notions of binarism,…
ERIC Educational Resources Information Center
Kansu-Yetkiner, Neslihan
2014-01-01
This article examines the political polarization between Republicans and Islamists in Turkey as reflected in the peritexts of recent translations of world children's literature. This is reflected in terms of van Dijk's notions of an us vs them binarism, where a positive in-group is opposed to a negative out-group representation. In this way, the…
NASA Astrophysics Data System (ADS)
Smith, Alexander; De Marco, O.
2007-12-01
Recent observational evidence and theoretical models are challenging the classical paradigm of single-star planetary nebula (PN) evolution, suggesting instead that binary stars play a significant role in the process of PN formation. In order to shape the 90% of PN that are non-spherical, the central star must be rotating and have a magnetic field; the most likely source of the angular momentum needed to sustain magnetic fields is a binary companion. More observational evidence is needed to confirm that the fraction of PN with close binary central stars is indeed higher than the currently known value of 10-15%. As part of an international effort to detect binary central stars (PLAN-B, Planetary Nebula Binaries), we are carrying out a new photometric survey to look for close binary central stars of PN. Here we present the findings for 4 objects: A 43, A 74, NGC 6720, and NGC 6853. NGC 6720 and NGC 6853 show evidence of periodic variability, and the former might even show one eclipse. Once completed, the survey will assess the binarity of about 100 central stars of PN.
Automated detection scheme of architectural distortion in mammograms using adaptive Gabor filter
NASA Astrophysics Data System (ADS)
Yoshikawa, Ruriha; Teramoto, Atsushi; Matsubara, Tomoko; Fujita, Hiroshi
2013-03-01
Breast cancer is a serious health concern for all women. Computer-aided detection for mammography has been used for detecting masses and micro-calcifications. However, the automated detection of architectural distortion remains challenging, particularly with respect to sensitivity. In this study, we propose a novel automated method for detecting architectural distortion. Our method consists of analysis of the mammary gland structure, detection of the distorted region, and reduction of false positive results. We developed an adaptive Gabor filter for analyzing the mammary gland structure that selects filter parameters depending on the thickness of the gland structure. As post-processing, healthy mammary glands that run from the nipple to the chest wall are eliminated by angle analysis, and background mammary glands are removed based on the intensity output image obtained from the adaptive Gabor filter. The distorted region of the mammary gland is then detected as an initial candidate using a concentration index followed by binarization and labeling. False positives among the initial candidates are eliminated using 23 types of characteristic features and a support vector machine. In the experiments, we compared the automated detection results with interpretations by a radiologist using 50 cases (200 images) from the Digital Database for Screening Mammography (DDSM). As a result, the true positive rate was 82.72%, and the number of false positives per image was 1.39. These results indicate that the proposed method may be useful for detecting architectural distortion in mammograms.
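A fixed-parameter stand-in for the Gabor analysis stage (the paper's filter adapts its parameters to gland thickness; here the frequency is fixed and only orientation varies, purely for illustration):

```python
import numpy as np
from skimage.filters import gabor

def gland_responses(image, frequency=0.1, n_theta=8):
    """Per-pixel maximum Gabor response over orientations, plus the index
    of the orientation that produced it."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    stack = np.stack([gabor(image, frequency=frequency, theta=t)[0]  # real part
                      for t in thetas])
    return stack.max(axis=0), stack.argmax(axis=0)   # response map, orientation bin
```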
Visual one-shot learning as an 'anti-camouflage device': a novel morphing paradigm.
Ishikawa, Tetsuo; Mogi, Ken
2011-09-01
Once people perceive what is in a hidden figure such as Dallenbach's cow or the hidden Dalmatian, they seldom seem to return to the previous state in which they were ignorant of the answer. This special type of learning can be accomplished in a short time, with the effect of learning lasting for a long time (visual one-shot learning). Although it is an intriguing cognitive phenomenon, the lack of control over the difficulty of the presented stimuli has been a problem in research. Here we propose a novel paradigm to create new hidden figures systematically by using a morphing technique. Through gradual changes from a blurred and binarized two-tone image to a blurred grayscale image of the original photograph including objects in a natural scene, spontaneous one-shot learning can occur at a certain stage of morphing when a sufficient amount of information is restored to the degraded image. A negative correlation between confidence levels and reaction times is observed, giving support to the fluency theory of one-shot learning. The correlation between confidence ratings and correct recognition rates indicates that participants had an accurate introspective ability (metacognition). The learning effect could be tested later by verifying whether or not the target object was recognized more quickly on second exposure. The present method opens a way for the systematic production of "good" hidden figures, which can be used to demystify the nature of visual one-shot learning.
Script identification from images using cluster-based templates
Hochberg, J.G.; Kelly, P.M.; Thomas, T.R.
1998-12-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script. 17 figs.
Script identification from images using cluster-based templates
Hochberg, Judith G.; Kelly, Patrick M.; Thomas, Timothy R.
1998-01-01
A computer-implemented method identifies a script used to create a document. A set of training documents for each script to be identified is scanned into the computer to store a series of exemplary images representing each script. Pixels forming the exemplary images are electronically processed to define a set of textual symbols corresponding to the exemplary images. Each textual symbol is assigned to a cluster of textual symbols that most closely represents the textual symbol. The cluster of textual symbols is processed to form a representative electronic template for each cluster. A document having a script to be identified is scanned into the computer to form one or more document images representing the script to be identified. Pixels forming the document images are electronically processed to define a set of document textual symbols corresponding to the document images. The set of document textual symbols is compared to the electronic templates to identify the script.
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-10-01
Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
Document image database indexing with pictorial dictionary
NASA Astrophysics Data System (ADS)
Akbari, Mohammad; Azimi, Reza
2010-02-01
In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed for the subwords based on this attribute. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarity. The proposed methods have been evaluated on a Persian document image database. The results demonstrate the ability of this approach in document image information retrieval.
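One plausible form of the upper-contour attribute is the column-wise profile of topmost ink pixels; the sketch below is an assumption about the attribute's definition rather than the authors' exact formulation.

```python
import numpy as np

def upper_contour(binary_subword):
    """Row index of the topmost foreground pixel in each column (-1 where empty);
    a label could then be derived from the ups and downs of this profile."""
    has_ink = binary_subword.any(axis=0)
    top = binary_subword.argmax(axis=0)
    return np.where(has_ink, top, -1)
```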
Imaged Document Optical Correlation and Conversion System (IDOCCS)
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-03-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). In addition, many organizations are converting their paper archives to electronic images, which are stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources. The Imaged Document Optical Correlation and Conversion System (IDOCCS) provides a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval capability of document images. The IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and can even determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo, or documents with a particular individual's signature block, can be singled out. With this dual capability, IDOCCS outperforms systems that rely on optical character recognition as a basis for indexing and storing only the textual content of documents for later retrieval.
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2014 CFR
2014-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2012 CFR
2012-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
32 CFR 813.5 - Shipping or transmitting visual information documentation images.
Code of Federal Regulations, 2010 CFR
2010-07-01
... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...
32 CFR 813.5 - Shipping or transmitting visual information documentation images.
Code of Federal Regulations, 2013 CFR
2013-07-01
... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2010 CFR
2010-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
32 CFR 813.5 - Shipping or transmitting visual information documentation images.
Code of Federal Regulations, 2011 CFR
2011-07-01
... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2011 CFR
2011-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
32 CFR 813.5 - Shipping or transmitting visual information documentation images.
Code of Federal Regulations, 2012 CFR
2012-07-01
... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...
6 CFR 37.31 - Source document retention.
Code of Federal Regulations, 2013 CFR
2013-01-01
... keep digital images of source documents must retain the images for a minimum of ten years. (4) States... using digital imaging to retain source documents must store the images as follows: (1) Photo images must be stored in the Joint Photographic Experts Group (JPEG) 2000 standard for image compression, or a...
32 CFR 813.5 - Shipping or transmitting visual information documentation images.
Code of Federal Regulations, 2014 CFR
2014-07-01
... documentation images. 813.5 Section 813.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE... visual information documentation images. (a) COMCAM images. Send COMCAM images to the DoD Joint Combat... the approval procedures that on-scene and theater commanders set. (b) Other non-COMCAM images. After...
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun
2013-12-01
The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Cho, Yong Kyun
2013-01-01
Objectives The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Methods Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. Results The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Conclusions Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE. PMID:24523994
Am stars and the influence of binarity on infall
NASA Astrophysics Data System (ADS)
Cowley, Charles R.
2016-01-01
We explore an old idea for the origin of Am star anomalies, possibly related to observations of pollution in white dwarfs (Jura & Young, ARAA, 42, 45, 2014; Gansicke, et al., arXiv:1505.03142). It must be noted that infall of an earthlike body can explain some, but not all, of the abundance anomalies of Am stars. The ingestion of earthlike material by an A star should have observable effects that are larger than for solar-type stars. We follow dynamical arguments discussed, e.g., by Debes, et al. (ApJ, 747, 148, 2012), and postulate that gravitational interactions will produce an infalling stream of low angular momentum bodies. Note that most if not all Am stars are binary. Here we investigate only whether there is an increased frequency of collisions with a close binary relative to a single star. We make quantitative estimates, using analytical two-body solutions and restricted three-body calculations with parameters similar to those of the eclipsing Am pairs Beta Aur or WW Aur. We use initial values for the binary similar to those which would lead to a certain collision with a (4 M_sun) single star for a parabolic trajectory. All calculations begin at a distance from the center of mass along the axis of a paraboloid of revolution at 3 or 5 AU, such that a marginal collision occurs with a single star. The perpendicular area of this figure is a cross section for a collision. We sample trajectories starting within and near this cross section for double star systems. Based on many trials, we find it about equally likely, relative to a single star, that an incoming body will be ejected from the system as that it will collide with one of the stars. Although we have sampled only a fraction of the possible parameter space, we find no basis to expect that the binarity of Am systems makes them more likely to have ingested planetary material. Infall should probably still be considered, along with the generally accepted diffusion scenario, but it does not appear that the binarity of Am stars makes infall significantly more relevant.
Adaptive removal of background and white space from document images using seam categorization
NASA Astrophysics Data System (ADS)
Fillion, Claude; Fan, Zhigang; Monga, Vishal
2011-03-01
Document images are obtained regularly by rasterization of document content and as scans of printed documents. Resizing via background and white space removal is often desired for better consumption of these images, whether on displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content-aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content from document images. Document images differ from pictorial images in structure. They typically contain objects (text letters, pictures, and graphics) separated by uniform background, which includes both white paper space and other uniform-color background. Pixels in uniform background regions are excellent candidates for deletion when resizing is required, as they introduce less change in document content and style than deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document's structural information and image quality.
Imaged document information location and extraction using an optical correlator
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-12-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provide a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
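The correlator's keyword search can be mimicked digitally with FFT-based cross-correlation of a template against a page image; the sketch below illustrates the principle only, since IDOCCS performs the correlation optically.

```python
import numpy as np

def correlate_keyword(page, template):
    """Circular cross-correlation of a keyword template with a page image via
    the FFT; the peak location marks the best match."""
    P = np.fft.rfft2(page)
    T = np.fft.rfft2(template, s=page.shape)           # zero-pad template to page size
    corr = np.fft.irfft2(P * np.conj(T), s=page.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak
```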
ERIC Educational Resources Information Center
Bruley, Karina
1996-01-01
Provides a checklist of considerations for installing document image processing with an electronic document management system. Other topics include scanning; indexing; the image file life cycle; benefits of imaging; document-driven workflow; and planning for workplace changes like postsorting, creating a scanning room, redeveloping job tasks and…
Content-based retrieval of historical Ottoman documents stored as textual images.
Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis
2004-03-01
There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance spans of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed on the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.
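A toy version of the symbol-codebook compression: extract connected components and point near-duplicate symbols at a shared prototype. The equality-based matching rule and tolerance here are assumptions; the paper matches symbols with wavelet- and shape-based features.

```python
from skimage.measure import label, regionprops

def build_codebook(binary_page, tol=0.9):
    """Codebook of symbol prototypes plus (bbox, codebook index) pointers."""
    codebook, pointers = [], []
    for r in regionprops(label(binary_page)):
        patch = r.image                               # binary mask of one symbol
        for k, proto in enumerate(codebook):
            if proto.shape == patch.shape and (proto == patch).mean() >= tol:
                pointers.append((r.bbox, k))          # reuse an existing prototype
                break
        else:
            codebook.append(patch)                    # new symbol class
            pointers.append((r.bbox, len(codebook) - 1))
    return codebook, pointers
```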
Investigation into flow boiling heat transfer in a minichannel with enhanced heating surface
NASA Astrophysics Data System (ADS)
Piasecka, Magdalena
2012-04-01
The paper presents results of flow boiling in a minichannel of 1.0 mm depth. The heating element for the working fluid (FC-72) flowing along the minichannel is a single-sided enhanced alloy foil made from Haynes-230. Microrecesses were formed on a selected area of the heating foil by laser technology. Observations of the flow structure were carried out through a piece of glass. Simultaneously, owing to the liquid crystal layer placed on the opposite side of the enhanced foil surface, it was possible to measure the temperature distribution on the heating wall through another piece of glass. The experimental research focused on the transition from single-phase forced convection to nucleate boiling, i.e., the zone of boiling incipience and the further development of boiling. The objective of the paper is to determine the void fraction for some cross-sections of selected images for increasing heat fluxes supplied to the heating surface. The flow structure photos were processed in Corel graphics software and binarized. The analysis of phase volumes was carried out in Techystem Globe software.
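Once a flow image is binarized, the void fraction of a cross-section is simply the vapor-pixel share of that column; a minimal sketch, assuming vapor is the foreground (True) phase of a NumPy boolean array:

```python
def void_fraction(binary_image, columns):
    """Void fraction in selected vertical cross-sections of a binarized image
    (binary_image is a 2-D NumPy bool array; True = vapor)."""
    return {c: float(binary_image[:, c].mean()) for c in columns}
```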
Communication target object recognition for D2D connection with feature size limit
NASA Astrophysics Data System (ADS)
Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee
2015-03-01
Recently, a new concept of device-to-device (D2D) communication called "point-and-link communication" has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information such as SSIDs or MAC addresses by selecting the target image displayed on the user's own device. In this paper, we present an efficient object matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory, and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that takes the descriptor size and processing time into account. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale Invariant Feature Transform) features, and object aspect ratios. To reduce the descriptor size below 300 bytes, a limited number of SIFT key points were chosen as feature points and histograms were binarized while maintaining the required performance. Experimental results show the robustness and the efficiency of the proposed algorithm.
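A sketch of a sub-300-byte descriptor in the spirit described: per-channel HSV histograms binarized and bit-packed, plus the aspect ratio. The bin counts, the median thresholding rule, and the omission of the SIFT part are assumptions.

```python
import numpy as np
import cv2

def compact_descriptor(bgr, bins=64):
    """Binarized HSV histograms packed to 24 bytes, plus the aspect ratio."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    bits = []
    for ch in range(3):
        rng = [0, 180] if ch == 0 else [0, 256]       # OpenCV hue spans 0-179
        hist = cv2.calcHist([hsv], [ch], None, [bins], rng).ravel()
        bits.append(hist > np.median(hist))           # binarized histogram
    packed = np.packbits(np.concatenate(bits))        # 3 * 64 bits = 24 bytes
    aspect = bgr.shape[1] / bgr.shape[0]
    return packed, aspect
```

Two descriptors can then be compared by Hamming distance on the packed bits together with a tolerance on the aspect ratios.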
An Introduction to Document Imaging in the Financial Aid Office.
ERIC Educational Resources Information Center
Levy, Douglas A.
2001-01-01
First describes the components of a document imaging system in general and then addresses this technology specifically in relation to financial aid document management: its uses and benefits, considerations in choosing a document imaging system, and additional sources for information. (EV)
Neural Network of Body Representation Differs between Transsexuals and Cissexuals
Lin, Chia-Shu; Ku, Hsiao-Lun; Chao, Hsiang-Tai; Tu, Pei-Chi; Li, Cheng-Ta; Cheng, Chou-Ming; Su, Tung-Ping; Lee, Ying-Chiao; Hsieh, Jen-Chuen
2014-01-01
Body image is the internal representation of an individual’s own physical appearance. Individuals with gender identity disorder (GID), commonly referred to as transsexuals (TXs), are unable to form a satisfactory body image due to the dissonance between their biological sex and gender identity. We reasoned that changes in the resting-state functional connectivity (rsFC) network would neurologically reflect such experiential incongruence in TXs. Using graph theory-based network analysis, we investigated the regional changes of the degree centrality of the rsFC network. The degree centrality is an index of the functional importance of a node in a neural network. We hypothesized that three key regions of the body representation network, i.e., the primary somatosensory cortex, the superior parietal lobule and the insula, would show a higher degree centrality in TXs. Twenty-three pre-treatment TXs (11 male-to-female and 12 female-to-male TXs) as one psychosocial group and 23 age-matched healthy cissexual control subjects (CISs, 11 males and 12 females) were recruited. Resting-state functional magnetic resonance imaging was performed, and binarized rsFC networks were constructed. The TXs demonstrated a significantly higher degree centrality in the bilateral superior parietal lobule and the primary somatosensory cortex. In addition, the connectivity between the right insula and the bilateral primary somatosensory cortices was negatively correlated with the selfness rating of their desired genders. These data indicate that the key components of body representation manifest in TXs as critical function hubs in the rsFC network. The negative association may imply a coping mechanism that dissociates bodily emotion from body image. The changes in the functional connectome may serve as representational markers for the dysphoric bodily self of TXs. PMID:24465785
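The network measure at the center of this analysis is compact to compute: threshold the correlation matrix to binarize the network, then count each node's suprathreshold edges. The threshold value below is illustrative, not the study's.

```python
import numpy as np

def degree_centrality(corr, threshold=0.3):
    """Node degrees of a binarized functional connectivity network."""
    adj = np.abs(corr) > threshold   # binarize the rsFC network
    np.fill_diagonal(adj, False)     # ignore self-connections
    return adj.sum(axis=1)
```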
Montesano, Giovanni; Allegrini, Davide; Colombo, Leonardo; Rossetti, Luca M; Pece, Alfredo
2017-01-01
The main objective of our work is to perform an in-depth analysis of the structural features of normal choriocapillaris imaged with OCT Angiography. Specifically, we provide an optimal radius for a circular Region of Interest (ROI) to obtain a stable estimate of the subfoveal choriocapillaris density and characterize its textural properties using Markov Random Fields. On each binarized image of the choriocapillaris OCT Angiography we performed simulated measurements of the subfoveal choriocapillaris density with circular Regions of Interest (ROIs) of different radii and with small random displacements from the center of the Foveal Avascular Zone (FAZ). We then calculated the variability of the density measure for different ROI radii. We then characterized the textural features of choriocapillaris binary images by estimating the parameters of an Ising model. For each image we calculated the Optimal Radius (OR) as the minimum ROI radius required to obtain a standard deviation in the simulation below 0.01. The density measured with the individual OR was 0.52 ± 0.07 (mean ± STD). Similar density values (0.51 ± 0.07) were obtained using a fixed ROI radius of 450 μm. The Ising model yielded two parameter estimates (β = 0.34 ± 0.03; γ = 0.003 ± 0.012; mean ± STD), characterizing pixel clustering and white pixel density respectively. Using the estimated parameters to synthesize new random textures via simulation, we obtained a good reproduction of the original choriocapillaris structural features and density. In conclusion, we developed an extensive characterization of the normal subfoveal choriocapillaris that might be used for flow analysis and applied to the investigation of pathological alterations.
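The stability simulation can be reproduced schematically: measure density in a circular ROI under small random displacements from the FAZ center, and take the smallest radius whose standard deviation drops below 0.01. The jitter magnitude and trial count below are assumptions.

```python
import numpy as np

def roi_density(binary, center, radius, jitter=10, n=100, seed=0):
    """Mean and SD of vessel density in a circular ROI under random shifts."""
    rng = np.random.default_rng(seed)
    rows, cols = np.indices(binary.shape)
    densities = []
    for _ in range(n):
        cy, cx = np.asarray(center) + rng.uniform(-jitter, jitter, 2)
        mask = (rows - cy) ** 2 + (cols - cx) ** 2 <= radius ** 2
        densities.append(binary[mask].mean())
    return np.mean(densities), np.std(densities)

def optimal_radius(binary, center, radii, sd_limit=0.01):
    """Smallest ROI radius whose density SD across displacements is below the limit."""
    for r in sorted(radii):
        if roi_density(binary, center, r)[1] < sd_limit:
            return r
    return None
```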
Document cards: a top trumps visualization for documents.
Strobelt, Hendrik; Oelke, Daniela; Rohrdantz, Christian; Stoffel, Andreas; Keim, Daniel A; Deussen, Oliver
2009-01-01
Finding suitable, less space-consuming views for a document's main content is crucial to provide convenient access to large document collections on display devices of different sizes. We present a novel compact visualization which represents the document's key semantics as a mixture of images and important key terms, similar to cards in a top trumps game. The key terms are extracted using an advanced text mining approach based on a fully automatic document structure extraction. The images and their captions are extracted using a graphical heuristic, and the captions are used for a semi-semantic image weighting. Furthermore, we use the image color histogram for classification and show at least one representative from each non-empty image class. The approach is demonstrated for the IEEE InfoVis publications of a complete year. The method can easily be applied to other publication collections and sets of documents which contain images.
Detection of cracks on concrete surfaces by hyperspectral image processing
NASA Astrophysics Data System (ADS)
Santos, Bruno O.; Valença, Jonatas; Júlio, Eduardo
2017-06-01
All large infrastructures worldwide must have a suitable monitoring and maintenance plan aiming to evaluate their behaviour and predict timely interventions. In the particular case of concrete infrastructures, the detection and characterization of crack patterns is a major indicator of their structural response. In this scope, methods based on image processing have been applied and presented. Usually, such methods focus on image binarization followed by applications of mathematical morphology to identify cracks on the concrete surface. In most cases, publications focus on restricted areas of concrete surfaces and on a single crack. On site, the methods and algorithms have to deal with several factors that interfere with the results, namely dirt and biological colonization. Thus, the automation of a procedure for on-site characterization of crack patterns is of great interest. This advance may result in an effective tool to support maintenance strategies and intervention planning. This paper presents research based on the analysis and processing of hyperspectral images for the detection and classification of cracks on concrete structures. The objective of the study is to evaluate the applicability of several wavelengths of the electromagnetic spectrum for the classification of cracks in concrete surfaces. An image survey considering highly discretized wavelengths between 425 nm and 950 nm, with bandwidths of 25 nm, was performed on concrete specimens. The concrete specimens were produced with a crack pattern induced by applying a load with displacement control. The tests were conducted to simulate usual on-site drawbacks; in this context, the surface of the specimen was subjected to biological colonization (leaves and moss). To evaluate the results and enhance crack patterns, a clustering method, namely the k-means algorithm, is applied. The research conducted makes it possible to assess the suitability of combining the k-means clustering algorithm with highly discretized hyperspectral images for crack detection on concrete surfaces, considering cracking combined with the most common concrete anomalies, namely biological colonization.
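A compact sketch of the clustering stage: treat each pixel's spectrum across the discretized bands as a feature vector and cluster with k-means; the cluster count and the scikit-learn implementation are assumptions.

```python
from sklearn.cluster import KMeans

def cluster_hyperspectral(cube, k=3):
    """Label map from k-means over per-pixel spectra of an (H, W, bands) cube;
    crack pixels are expected to concentrate in one cluster."""
    h, w, b = cube.shape
    labels = KMeans(n_clusters=k, n_init=10, random_state=0) \
        .fit_predict(cube.reshape(-1, b))
    return labels.reshape(h, w)
```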
Chintapalli, Mahati; Higa, Kenneth; Chen, X. Chelsea; ...
2016-12-19
A method is presented in this paper to relate local morphology and ionic conductivity in a solid, lamellar block copolymer electrolyte for lithium batteries, by simulating conductivity through transmission electron micrographs. The electrolyte consists of polystyrene-block-poly(ethylene oxide) mixed with lithium bis(trifluoromethanesulfonyl)imide salt (SEO/LiTFSI), where the polystyrene phase is the structural phase and the poly(ethylene oxide)/LiTFSI phase is ionically conductive. The electric potential distribution is simulated in binarized micrographs by solving the Laplace equation with constant potential boundary conditions. A morphology factor, f, is reported for each image by calculating the effective conductivity relative to a homogeneous conductor. Images from two samples are examined, one annealed with large lamellar grains and one unannealed with small grains. The average value of f is 0.45 ± 0.04 for the annealed sample and 0.37 ± 0.03 for the unannealed sample, both close to the value predicted by effective medium theory, 1/2. Simulated conductivities are compared to published experimental conductivities. The value of f_Unannealed/f_Annealed is 0.82 for simulations and 6.2 for experiments. Simulation results correspond well to predictions by effective medium theory but do not explain the experimental measurements. Finally, observation of nanoscale morphology over length scales greater than the size of the micrographs (~1 μm) may be required to explain the experimental results.
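A crude finite-difference sketch of the simulation: solve the steady conduction equation on the binarized micrograph by Jacobi iteration with unit potential applied across the image, then compare the resulting current to the homogeneous case. The tiny conductivity assigned to the insulating phase, the boundary handling, and the iteration count are implementation assumptions, not the paper's scheme.

```python
import numpy as np

def morphology_factor(conductive, n_iter=20000):
    """Morphology factor f: effective conductivity of a binary micrograph
    (True = conducting phase) relative to a homogeneous conductor."""
    s = np.where(conductive, 1.0, 1e-8)            # per-pixel conductivity
    h, w = s.shape
    phi = np.repeat(np.linspace(1.0, 0.0, h)[:, None], w, axis=1)
    hm = lambda a, b: 2 * a * b / (a + b + 1e-30)  # harmonic-mean face conductivity
    wu, wd = hm(s[1:-1, 1:-1], s[:-2, 1:-1]), hm(s[1:-1, 1:-1], s[2:, 1:-1])
    wl, wr = hm(s[1:-1, 1:-1], s[1:-1, :-2]), hm(s[1:-1, 1:-1], s[1:-1, 2:])
    for _ in range(n_iter):                        # Jacobi sweeps; borders stay fixed
        phi[1:-1, 1:-1] = (wu * phi[:-2, 1:-1] + wd * phi[2:, 1:-1] +
                           wl * phi[1:-1, :-2] + wr * phi[1:-1, 2:]) \
                          / (wu + wd + wl + wr + 1e-30)
    current = (hm(s[0], s[1]) * (phi[0] - phi[1])).sum()   # flux through top face
    return current / (w / (h - 1.0))               # homogeneous case carries w/(h-1)
```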
NASA Astrophysics Data System (ADS)
Townsley, Leisa K.; Broos, Patrick S.; Feigelson, Eric D.; Garmire, Gordon P.; Getman, Konstantin V.
2006-04-01
We have studied the X-ray point-source population of the 30 Doradus (30 Dor) star-forming complex in the Large Magellanic Cloud using high spatial resolution X-ray images and spatially resolved spectra obtained with the Advanced CCD Imaging Spectrometer (ACIS) on board the Chandra X-Ray Observatory. Here we describe the X-ray sources in a 17'×17' field centered on R136, the massive star cluster at the center of the main 30 Dor nebula. We detect 20 of the 32 Wolf-Rayet stars in the ACIS field. The cluster R136 is resolved at the subarcsecond level into almost 100 X-ray sources, including many typical O3-O5 stars, as well as a few bright X-ray sources previously reported. Over 2 orders of magnitude of scatter in L_X is seen among R136 O stars, suggesting that X-ray emission in the most massive stars depends critically on the details of wind properties and the binarity of each system, rather than reflecting the widely reported characteristic value L_X/L_bol ≈ 10^-7. Such a canonical ratio may exist for single massive stars in R136, but our data are too shallow to confirm this relationship. Through this and future X-ray studies of 30 Dor, the complete life cycle of a massive stellar cluster can be revealed.
An efficient indexing scheme for binary feature based biometric database
NASA Astrophysics Data System (ADS)
Gupta, P.; Sana, A.; Mehrotra, H.; Hwang, C. Jinshong
2007-04-01
The paper proposes an efficient indexing scheme for binary feature templates using a B+ tree. In this scheme, the input image is decomposed into approximation, vertical, horizontal and diagonal coefficients using the discrete wavelet transform. The binarized approximation coefficient at the second level is divided into four quadrants of equal size, and the Hamming distance (HD) of each quadrant with respect to a sample template of all ones is measured. The HD value of each quadrant is used to generate upper and lower range values which are inserted into the B+ tree. The nodes of the tree at the first level contain the lower and upper range values generated from the HD of the first quadrant. Similarly, the lower and upper range values for the remaining three quadrants are stored at the second, third and fourth levels, respectively. Finally, the leaf nodes contain the set of identifiers. At the time of identification, the test image is used to generate HDs for the four quadrants. The B+ tree is then traversed based on the HD value at every node, terminating at leaf nodes with a set of identifiers. The feature vector for each identifier is retrieved from the corresponding bin of secondary memory and matched with the test feature template to get the top matches. The proposed scheme is implemented on an ear biometric database collected at IIT Kanpur. The system gives an overall accuracy of 95.8% at a penetration rate of 34%.
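A hedged sketch of the key-generation step: a two-level discrete wavelet transform, binarization of the second-level approximation, and per-quadrant Hamming distances against an all-ones template. The pywt calls are standard; the mean-based binarization threshold and the function name are assumptions.

```python
# Sketch of quadrant Hamming-distance key generation for B+ tree indexing.
# The mean threshold is an assumed stand-in for the paper's binarization rule.
import numpy as np
import pywt

def quadrant_hamming_keys(image: np.ndarray) -> list:
    coeffs = pywt.wavedec2(image.astype(np.float64), 'haar', level=2)
    approx = coeffs[0]                                    # second-level approximation
    binary = (approx >= approx.mean()).astype(np.uint8)   # assumed threshold
    h, w = binary.shape
    quads = [binary[:h//2, :w//2], binary[:h//2, w//2:],
             binary[h//2:, :w//2], binary[h//2:, w//2:]]
    # Hamming distance to an all-ones template = number of zero bits.
    return [int(q.size - q.sum()) for q in quads]
```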
Romano, Francesco; Arrigo, Alessandro; Chʼng, Soon Wai; Battaglia Parodi, Maurizio; Manitto, Maria Pia; Martina, Elisabetta; Bandello, Francesco; Stanga, Paulo E
2018-06-05
To assess foveal and parafoveal vasculature at the superficial capillary plexus, deep capillary plexus, and choriocapillaris of patients with X-linked retinoschisis by means of optical coherence tomography angiography. Six patients with X-linked retinoschisis (12 eyes) and seven healthy controls (14 eyes) were recruited and underwent complete ophthalmologic examination, including best-corrected visual acuity, dilated fundoscopy, and 3 × 3-mm optical coherence tomography angiography macular scans (DRI OCT Triton; Topcon Corp). After segmentation and quality review, optical coherence tomography angiography slabs were imported into ImageJ 1.50 (NIH; Bethesda) and digitally binarized. Quantification of vessel density was performed after measurement and exclusion of the foveal avascular zone area. Patients were additionally divided into "responders" and "nonresponders" to dorzolamide therapy. The foveal avascular zone area was markedly enlarged at the deep capillary plexus (P < 0.001), particularly in nonresponders. Moreover, patients showed significant deep capillary plexus rarefaction compared with controls (P = 0.04); however, a subanalysis revealed that this damage was limited to the fovea (P = 0.006). Finally, the enlargement of the foveal avascular zone area correlated positively with a decline in best-corrected visual acuity (P = 0.01). Prominent foveal vascular impairment is detectable in the deep capillary plexus of patients with X-linked retinoschisis. Our results correlate with functional outcomes, suggesting a possible vascular role in X-linked retinoschisis clinical manifestations.
An Astrometric Analysis of eta Carinae’s Eruptive History Using HST WF/PC2 and ACS Observations
2007-07-11
Based on an astrometric analysis of the data, binary reflex motion is detected in the primary. (The remainder of this record is report front matter and table-of-contents residue; the recoverable headings are Measurement Results, Primary Luminosity and Mass, Secondary Mass and Luminosity, Binary Models, Primary-Secondary Distance, and Periastron Passage.)
The planetary nebula IC 4776 and its post-common-envelope binary central star
NASA Astrophysics Data System (ADS)
Sowicka, Paulina; Jones, David; Corradi, Romano L. M.; Wesson, Roger; García-Rojas, Jorge; Santander-García, Miguel; Boffin, Henri M. J.; Rodríguez-Gil, Pablo
2017-11-01
We present a detailed analysis of IC 4776, a planetary nebula displaying a morphology believed to be typical of central star binarity. The nebula is shown to comprise a compact hourglass-shaped central region and a pair of precessing jet-like structures. Time-resolved spectroscopy of its central star reveals a periodic radial velocity variability consistent with a binary system. Whilst the data are insufficient to accurately determine the parameters of the binary, the most likely solutions indicate that the secondary is probably a low-mass main-sequence star. An empirical analysis of the chemical abundances in IC 4776 indicates that the common-envelope phase may have cut short the asymptotic giant branch evolution of the progenitor. Abundances calculated from recombination lines are found to be discrepant by a factor of approximately 2 relative to those calculated using collisionally excited lines, suggesting a possible correlation between low-abundance discrepancy factors and intermediate-period post-common-envelope central stars and/or Wolf-Rayet central stars. The detection of a radial velocity variability associated with the binarity of the central star of IC 4776 may be indicative of a significant population of (intermediate-period) post-common-envelope binary central stars that would be undetected by classic photometric monitoring techniques.
A One-Versus-All Class Binarization Strategy for Bearing Diagnostics of Concurrent Defects
Ng, Selina S. Y.; Tse, Peter W.; Tsui, Kwok L.
2014-01-01
In bearing diagnostics using a data-driven modeling approach, a concern is the need for data from all possible scenarios to build a practical model for all operating conditions. This paper is a study on bearing diagnostics with the concurrent occurrence of multiple defect types. The authors are not aware of any work in the literature that studies this practical problem. A strategy based on one-versus-all (OVA) class binarization is proposed to improve fault diagnostics accuracy while reducing the number of scenarios for data collection, by predicting concurrent defects from training data of normal and single defects. The proposed OVA diagnostic approach is evaluated with empirical analysis using support vector machine (SVM) and C4.5 decision tree, two popular classification algorithms frequently applied to system health diagnostics and prognostics. Statistical features are extracted from the time domain and the frequency domain. Prediction performance of the proposed strategy is compared with that of a simple multi-class classification, as well as that of random guess and worst-case classification. We have verified the potential of the proposed OVA diagnostic strategy in performance improvements for single-defect diagnosis and predictions of BPFO plus BPFI concurrent defects using two laboratory-collected vibration data sets. PMID:24419162
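The OVA strategy itself is easy to sketch: one binary detector per known class, with concurrent defects read off whenever several detectors fire together. The following Python sketch uses scikit-learn's SVC; feature extraction, the probability threshold, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative one-versus-all sketch: train one binary SVM per known class
# (normal, single defects) and flag concurrent defects when several fire.
import numpy as np
from sklearn.svm import SVC

def train_ova(X: np.ndarray, y: np.ndarray) -> dict:
    """X: (n_samples, n_features) statistical features; y: single-fault labels."""
    return {c: SVC(kernel='rbf', probability=True).fit(X, (y == c).astype(int))
            for c in np.unique(y)}

def predict_ova(models: dict, x: np.ndarray, threshold: float = 0.5) -> list:
    """Return every class whose detector fires; two or more fault classes
    firing together is read as a concurrent defect (e.g. BPFO plus BPFI)."""
    x = x.reshape(1, -1)
    return [c for c, m in models.items() if m.predict_proba(x)[0, 1] >= threshold]
```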
Digital document imaging systems: An overview and guide
NASA Technical Reports Server (NTRS)
1990-01-01
This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.
Adaptive Algorithms for Automated Processing of Document Images
2011-01-01
Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University. (Only report-documentation front matter is recoverable from this record; no abstract text survives.)
Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Wu; Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8; Yuchi Ming
Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of the needle segmentation algorithm was 2 s for a 3D TRUS image of size 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation algorithm is accurate, robust, and suitable for 3D TRUS guided prostate transperineal therapy.
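For illustration, the needle-axis search can be approximated in a few lines. The sketch below deliberately swaps the paper's 3D Hough transform for RANSAC-style line fitting on the binarized volume, a simpler stand-in; array names, trial counts, and tolerances are assumptions.

```python
# Simplified stand-in for the needle-axis search: RANSAC line fitting on a
# binarized 3D volume (the paper itself uses a 3D Hough transform).
import numpy as np

def fit_needle_axis(binary_volume: np.ndarray, n_trials: int = 500, tol: float = 2.0):
    """binary_volume: 3D {0,1} array from histogram-based binarization.
    Returns (point, direction) of the best-supported 3D line."""
    pts = np.argwhere(binary_volume > 0).astype(np.float64)
    rng = np.random.default_rng(0)
    best, best_count = None, -1
    for _ in range(n_trials):
        a, b = pts[rng.choice(len(pts), 2, replace=False)]
        d = b - a
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        # Perpendicular distance of every foreground voxel to the candidate line.
        diff = pts - a
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        count = int((dist < tol).sum())
        if count > best_count:
            best, best_count = (a, d), count
    return best
```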
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
Uji, Akihito; Balasubramanian, Siva; Lei, Jianqin; Baghdasaryan, Elmira; Al-Sheikh, Mayss; Sadda, SriniVas R
2017-11-01
Imaging of the choriocapillaris in vivo is challenging with existing technology. Optical coherence tomography angiography (OCTA), if optimized, could make the imaging less challenging. To investigate the effect of multiple en face image averaging on OCTA images of the choriocapillaris. Observational, cross-sectional case series at a referral institutional practice in Los Angeles, California. From the original cohort of 21 healthy individuals, 17 normal eyes of 17 participants were included in the study. The study dates were August to September 2016. All participants underwent OCTA imaging of the macula covering a 3 × 3-mm area using OCTA software (Cirrus 5000 with AngioPlex; Carl Zeiss Meditec). One eye per participant was repeatedly imaged to obtain 9 OCTA cube scan sets. Registration was first performed using superficial capillary plexus images, and this transformation was then applied to the choriocapillaris images. The 9 registered choriocapillaris images were then averaged. Quantitative parameters were measured on binarized OCTA images and compared with those of the unaveraged OCTA images. Vessel caliber measurement. Seventeen eyes of 17 participants (mean [SD] age, 35.1 [6.0] years; 9 [53%] female; and 9 [53%] of white race/ethnicity) with sufficient image quality were included in this analysis. The single unaveraged images demonstrated a granular appearance, and the vascular pattern was difficult to discern. After averaging, en face choriocapillaris images showed a meshwork appearance. The mean (SD) diameter of the vessels was 22.8 (5.8) µm (range, 9.6-40.2 µm). Compared with the single unaveraged images, the averaged images showed more flow voids (1423 flow voids [95% CI, 967-1909] vs 1254 flow voids [95% CI, 825-1683], P < .001), smaller average flow-void size (911 [95% CI, 301-1521] µm² vs 1364 [95% CI, 645-2083] µm², P < .001), and greater vessel density (70.7% [95% CI, 61.9%-79.5%] vs 61.9% [95% CI, 56.0%-67.8%], P < .001). The distribution of flow-void counts versus sizes was skewed in both unaveraged and averaged images; a linear log-log plot of the distribution showed a more homogeneous distribution in the averaged images than in the unaveraged images. Multiple en face averaging can improve visualization of the choriocapillaris on OCTA images, transforming them from a granular appearance to a level where the intervascular spaces can be resolved in healthy volunteers.
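The averaging and quantification pipeline reduces to a short sketch, assuming registration has already been applied to the stack (as in the study's workflow); Otsu thresholding here is an assumed stand-in for the study's binarization step, and the function names are illustrative.

```python
# Minimal sketch: average co-registered en face slabs, binarize, and compute
# vessel density plus flow-void statistics on the result.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def quantify_choriocapillaris(registered_slabs: np.ndarray) -> dict:
    """registered_slabs: (n, H, W) stack of co-registered en face images."""
    averaged = registered_slabs.mean(axis=0)
    vessels = averaged > threshold_otsu(averaged)
    voids = label(~vessels)                      # connected flow-void regions
    sizes = [r.area for r in regionprops(voids)]
    return {'vessel_density': float(vessels.mean()),
            'n_flow_voids': len(sizes),
            'mean_void_area_px': float(np.mean(sizes)) if sizes else 0.0}
```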
ERIC Educational Resources Information Center
Hendley, Tom
1995-01-01
Discussion of digital document image processing focuses on issues and options associated with greyscale and color image processing. Topics include speed; size of original document; scanning resolution; markets for different categories of scanners, including photographic libraries, publishing, and office applications; hybrid systems; data…
Effect of the image resolution on the statistical descriptors of heterogeneous media.
Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime
2018-02-01
The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of image-size reduction due to progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented, and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function follows the trend of the original image (the reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function diverges from the reference function. Moreover, the equally weighted sum of the average of the squared difference between the discrete correlation functions of the decimated images and the reference functions leads to a definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique while maintaining the statistical quality of the digitized sample.
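As an illustration of one such descriptor, the two-point correlation function of a binarized image can be computed via FFT autocorrelation. Periodic boundaries and the radial-averaging scheme below are simplifying assumptions, not the authors' exact formulation.

```python
# Sketch: discrete two-point correlation S2(r) of a binary image via FFT
# autocorrelation, radially averaged. Assumes periodic boundary conditions.
import numpy as np

def two_point_correlation(binary: np.ndarray, r_max: int = 64) -> np.ndarray:
    """binary: 2D {0,1} image of the phase of interest.
    Returns S2(r), the probability that two points a distance r apart
    both fall in the phase, for r = 0..r_max."""
    f = np.fft.fft2(binary.astype(np.float64))
    auto = np.fft.ifft2(f * np.conj(f)).real / binary.size  # S2 on the grid
    auto = np.fft.fftshift(auto)
    cy, cx = np.array(auto.shape) // 2
    y, x = np.indices(auto.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    return np.array([auto[r == k].mean() for k in range(r_max + 1)])
```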
ERIC Educational Resources Information Center
Ding, Daniel D.
2000-01-01
Presents historical roots of page design principles, arguing that current theories and practices of document design have their roots in gender-related theories of images. Claims visual design should be evaluated regarding the rhetorical situation in which the design is used. Focuses on visual images of documents in professional communication,…
3D Printing of Preoperative Simulation Models of a Splenic Artery Aneurysm: Precision and Accuracy.
Takao, Hidemasa; Amemiya, Shiori; Shibata, Eisuke; Ohtomo, Kuni
2017-05-01
Three-dimensional (3D) printing is attracting increasing attention in the medical field. This study aimed to apply 3D printing to the production of hollow splenic artery aneurysm models for use in the simulation of endovascular treatment, and to evaluate the precision and accuracy of the simulation model. From 3D computed tomography (CT) angiography data of a splenic artery aneurysm, 10 hollow models reproducing the vascular lumen were created using a fused deposition modeling-type desktop 3D printer. After filling with water, each model was scanned using T2-weighted magnetic resonance imaging for the evaluation of the lumen. All images were coregistered, binarized, and then combined to create an overlap map. The cross-sectional area of the splenic artery aneurysm and its standard deviation (SD) were calculated perpendicular to the x- and y-axes. Most voxels overlapped among the models. The cross-sectional areas were similar among the models, with SDs <0.05 cm². The mean cross-sectional areas of the splenic artery aneurysm were slightly smaller than those calculated from the original mask images. The maximum mean cross-sectional areas calculated perpendicular to the x- and y-axes were 3.90 cm² (SD, 0.02) and 4.33 cm² (SD, 0.02), whereas those calculated from the original mask images were 4.14 cm² and 4.66 cm², respectively. The mean cross-sectional areas of the afferent artery were, however, almost the same as those calculated from the original mask images. The results suggest that 3D simulation modeling of a visceral artery aneurysm using a fused deposition modeling-type desktop 3D printer and computed tomography angiography data is highly precise and accurate. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Kihara, Takanori; Kashitani, Kosuke; Miyake, Jun
2017-07-14
Cell proliferation is a key characteristic of eukaryotic cells. During cell proliferation, cells interact with each other. In this study, we developed a cellular automata model to estimate cell-cell interactions using experimentally obtained images of cultured cells. We used four types of cells: HeLa cells, human osteosarcoma (HOS) cells, rat mesenchymal stem cells (MSCs), and rat smooth muscle A7r5 cells. These cells were cultured and stained daily. The obtained cell images were binarized and clipped into squares containing about 10^4 cells. These cells showed characteristic cell proliferation patterns. The growth curves of these cells were generated from the cell proliferation images, and we determined the doubling time of these cells from the growth curves. We developed a simple cellular automata system with an easily accessible graphical user interface. This system has five variable parameters, namely, initial cell number, doubling time, motility, cell-cell adhesion, and cell-cell contact inhibition (of proliferation). Of these parameters, we obtained initial cell numbers and doubling times experimentally. We set the motility at a constant value because its effect on our simulation was limited. Therefore, we simulated cell proliferation behavior with cell-cell adhesion and cell-cell contact inhibition as variables. By comparing growth curves and proliferation cell images, we succeeded in determining the cell-cell interaction properties of each cell. Simulated HeLa and HOS cells exhibited low cell-cell adhesion and weak cell-cell contact inhibition. Simulated MSCs exhibited high cell-cell adhesion and positive cell-cell contact inhibition. Simulated A7r5 cells exhibited low cell-cell adhesion and strong cell-cell contact inhibition. These simulated results correlated with the experimental growth curves and proliferation images. Our simulation approach is an easy method for evaluating the cell-cell interaction properties of cells.
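A toy version of such a five-parameter automaton is sketched below; the update rules, parameter scales, and names are illustrative assumptions rather than the authors' model.

```python
# Toy cellular automaton with division, contact inhibition, and adhesion.
import numpy as np

rng = np.random.default_rng(0)

def step(grid: np.ndarray, p_divide: float, adhesion: float,
         contact_inhibition: float) -> np.ndarray:
    """grid: 2D {0,1} occupancy map. One division attempt per occupied cell."""
    new = grid.copy()
    for y, x in np.argwhere(grid == 1):
        nbrs = grid[max(y-1, 0):y+2, max(x-1, 0):x+2]
        crowding = nbrs.sum() - 1          # occupied neighbors
        # Contact inhibition lowers the division probability with crowding.
        if rng.random() > p_divide * (1.0 - contact_inhibition * crowding / 8.0):
            continue
        free = [(yy, xx) for yy in range(max(y-1, 0), min(y+2, grid.shape[0]))
                for xx in range(max(x-1, 0), min(x+2, grid.shape[1]))
                if new[yy, xx] == 0]
        if free:
            # Adhesion biases daughter placement toward occupied neighborhoods.
            weights = np.array([1.0 + adhesion * new[max(yy-1, 0):yy+2,
                                                     max(xx-1, 0):xx+2].sum()
                                for yy, xx in free])
            yy, xx = free[rng.choice(len(free), p=weights / weights.sum())]
            new[yy, xx] = 1
    return new
```

Iterating `step` from a sparse seed grid and counting occupied cells per iteration yields a simulated growth curve that can be compared against the experimentally measured ones, mirroring the fitting procedure the abstract describes.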
NASA Astrophysics Data System (ADS)
Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.
2016-03-01
Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to present new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position, using a Kalman filter to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
Goal-oriented rectification of camera-based document images.
Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J
2011-04-01
Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image, while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that considers OCR accuracy together with a newly introduced measure based on a semi-automatic procedure.
Web-based document image processing
NASA Astrophysics Data System (ADS)
Walker, Frank L.; Thoma, George R.
1999-12-01
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions implemented on it.
Globular cluster chemistry in fast-rotating dwarf stars belonging to intermediate-age open clusters
NASA Astrophysics Data System (ADS)
Pancino, Elena
2018-06-01
The peculiar chemistry observed in multiple populations of Galactic globular clusters is not generally found in other systems such as dwarf galaxies and open clusters, and no model can currently fully explain it. Exploring the boundaries of the multiple-population phenomenon and the variation of its extent in the space of cluster mass, age, metallicity, and compactness has proven to be a fruitful line of investigation. In the framework of a larger project to search for multiple populations in open clusters that is based on literature and survey data, I found peculiar chemical abundance patterns in a sample of intermediate-age open clusters with publicly available data. More specifically, fast-rotating dwarf stars (v sin i ≥ 50 km s⁻¹) that belong to four clusters (Pleiades, Ursa Major, Coma Berenices, and Hyades) display a bimodality in either [Na/Fe] or [O/Fe], or both, with the low-Na and high-O peak more populated than the high-Na and low-O peak. Additionally, two clusters show a Na-O anti-correlation in the fast-rotating stars, and one cluster shows a large [Mg/Fe] variation in stars with high [Na/Fe], reaching the extreme Mg depletion observed in NGC 2808. Even considering that the sample sizes are small, these patterns call for attention in the light of a possible connection with the multiple population phenomenon of globular clusters. The specific chemistry observed in these fast-rotating dwarf stars is thought to be produced by a complex interplay of different diffusion and mixing mechanisms, such as rotational mixing and mass loss, which in turn are influenced by metallicity, binarity, mass, age, variability, and so on. However, with the sample in hand, it was not possible to identify which stellar parameters cause the observed Na and O bimodality and Na-O anti-correlation. This suggests that other stellar properties might be important in addition to stellar rotation. Stellar binarity might influence the rotational properties and enhance rotational mixing and mass loss of stars in a dense environment like that of clusters (especially globulars). In conclusion, rotation and binarity appear as a promising research avenue for better understanding multiple stellar populations in globular clusters; this is certainly worth exploring further.
Composition of a dewarped and enhanced document image from two view images.
Koo, Hyung Il; Kim, Jinho; Cho, Nam Ik
2009-07-01
In this paper, we propose an algorithm to compose a geometrically dewarped and visually enhanced image from two document images taken by a digital camera at different angles. Unlike conventional works that require special equipment, assumptions on the contents of books, or complicated image acquisition steps, we estimate the unfolded book or document surface from the corresponding points between two images. For this purpose, the surface and camera matrices are estimated using structure reconstruction, 3-D projection analysis, and random sample consensus-based curve fitting with the cylindrical surface model. Because we do not need any assumption on the contents of books, the proposed method can be applied not only to optical character recognition (OCR), but also to the high-quality digitization of pictures in documents. In addition to the dewarping for a structurally better image, image mosaicking is also performed to further improve the visual quality. By finding the better parts of the images (with less out-of-focus blur and/or without specular reflections) from either view, we compose a better image by stitching and blending them. These processes are formulated as energy minimization problems that can be solved using a graph cut method. Experiments on many kinds of book and document images show that the proposed algorithm works robustly and yields visually pleasing results. Also, the OCR rate of the resulting image is comparable to that of document images from a flatbed scanner.
Ernst, E J; Speck, Patricia M; Fitzpatrick, Joyce J
2011-12-01
With the patient's consent, physical injuries sustained in a sexual assault are evaluated and treated by the sexual assault nurse examiner (SANE) and documented on preprinted traumagrams and with photographs. Digital imaging is now available to the SANE for documentation of sexual assault injuries, but studies of the image quality of forensic digital imaging of female genital injuries after sexual assault were not found in the literature. The Photo Documentation Image Quality Scoring System (PDIQSS) was developed to rate the image quality of digital photo documentation of female genital injuries after sexual assault. Three expert observers performed evaluations on 30 separate images at two points in time. An image quality score, the sum of eight integral technical and anatomical attributes on the PDIQSS, was obtained for each image. Individual image quality ratings, obtained by rating each of the eight attributes separately, were also determined. The results demonstrated a high level of image quality and agreement when measured in all dimensions. For the SANE in clinical practice, the results of this study indicate that a high degree of agreement exists between expert observers when using the PDIQSS to rate the image quality of individual digital photographs of female genital injuries after sexual assault. © 2011 International Association of Forensic Nurses.
Duplicate document detection in DocBrowse
NASA Astrophysics Data System (ADS)
Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien
1998-04-01
Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second algorithm is based on wavelet features extracted from the document image itself, and the third algorithm is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, which is currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. In general, however, the text-based method performs better when the document contains enough high-quality machine-printed text, while the image-based method performs better when the document contains little or no quality machine-readable text.
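As a rough illustration of the text-based route, duplicate candidates can be ranked by cosine similarity over term-frequency vectors of the OCR'd text; this generic sketch is not DocBrowse's actual feature set, and the threshold is an assumption.

```python
# Toy text-based duplicate detector: rank document pairs by cosine similarity
# of TF-IDF vectors computed from OCR output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_duplicates(texts: list, threshold: float = 0.9) -> list:
    """texts: OCR output, one string per document image.
    Returns (i, j, similarity) for every pair above the threshold."""
    tfidf = TfidfVectorizer().fit_transform(texts)
    sim = cosine_similarity(tfidf)
    return [(i, j, float(sim[i, j]))
            for i in range(len(texts)) for j in range(i + 1, len(texts))
            if sim[i, j] >= threshold]
```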
Imaging the Elusive H-poor Gas in the High adf Planetary Nebula NGC 6778
NASA Astrophysics Data System (ADS)
García-Rojas, Jorge; Corradi, Romano L. M.; Monteiro, Hektor; Jones, David; Rodríguez-Gil, Pablo; Cabrera-Lavers, Antonio
2016-06-01
We present the first direct image of the high-metallicity gas component in a planetary nebula (NGC 6778), taken with the OSIRIS Blue Tunable Filter centered on the O II λ4649+50 Å optical recombination lines (ORLs) at the 10.4 m Gran Telescopio Canarias. We show that the emission of these faint O II ORLs is concentrated in the central parts of the planetary nebula and is not spatially coincident either with emission coming from the bright [O III] λ5007 Å collisionally excited line (CEL) or the bright Hα recombination line. From monochromatic emission line maps taken with VIMOS at the 8.2 m Very Large Telescope, we find that the spatial distribution of the emission from the auroral [O III] λ4363 line resembles that of the O II ORLs but differs from nebular [O III] λ5007 CEL distribution, implying a temperature gradient inside the planetary nebula. The centrally peaked distribution of the O II emission and the differences with the [O III] and H I emission profiles are consistent with the presence of an H-poor gas whose origin may be linked to the binarity of the central star. However, determination of the spatial distribution of the ORLs and CELs in other PNe and a comparison of their dynamics are needed to further constrain the geometry and ejection mechanism of the metal-rich (H-poor) component and hence, understand the origin of the abundance discrepancy problem in PNe.
Debris Discs: Modeling/theory review
NASA Astrophysics Data System (ADS)
Thébault, P.
2012-03-01
An impressive amount of photometric, spectroscopic and imaging observations of circumstellar debris discs has been accumulated over the past 3 decades, revealing that they come in all shapes and flavours, from young post-planet-formation systems like Beta Pic to much older ones like Vega. What we see in these systems are small grains, which are probably only the tip of the iceberg of a vast population of larger (undetectable) collisionally eroding bodies left over from the planet-formation process. Understanding the spatial structure, physical properties, origin and evolution of this dust is of crucial importance, as it is our only window into what is going on in these systems. Dust can be used as a tracer of the distribution of its collisional progenitors and of possible hidden massive perturbers, but it also allows us to derive valuable information about the disc's total mass, size distribution or chemical composition. I will review the state of the art in numerical models of debris discs, and present some important issues that are explored by current modelling efforts: planet-disc interactions, the link between cold (i.e. Herschel-observed) and hot discs, the effect of binarity, transient versus continuous processes, etc. I will finally present some possible perspectives for the development of future models.
How binarity affects the abundance discrepancy in planetary nebulae
NASA Astrophysics Data System (ADS)
García-Rojas, J.; Monteiro, H.; Jones, D.; Boffin, H.; Wesson, R.; Corradi, R.; Rodríguez-Gil, P.
2017-11-01
The discrepancy between chemical abundances computed using optical recombination lines (ORLs) and collisionally excited lines (CELs) is a major unresolved problem in nebular astrophysics, with significant implications for the determination of chemical abundances throughout the Universe. In planetary nebulae (PNe), a common explanation of this discrepancy is that two different gas phases coexist: a hot component with standard metallicity, and a much cooler plasma with a highly enhanced content of heavy elements. This dual nature is not predicted by mass-loss theories, and observational support for it is still weak. We present recent findings which show that the largest abundance discrepancies (ADs) are reached in PNe with close binary central stars. Our recent long-slit spectroscopic studies, as well as direct imaging of the gas in the faint O II ORLs and high-spatial-resolution IFU spectroscopy, support the idea that two different gas phases probably coexist in these nebulae and that high ADs should be explained in a framework of binary evolution. Although the exact scenario is still not understood, a promising proposal is that nova-like ejecta play a crucial role in the strong ORL emission in these objects.
NASA Astrophysics Data System (ADS)
Polverino, Arianna; Longo, Angela; Donizetti, Aldo; Drongitis, Denise; Frucci, Maria; Schiavo, Loredana; Carotenuto, Gianfranco; Nicolais, Luigi; Piscopo, Marina; Vitale, Emilia; Fucci, Laura
2014-07-01
While nanomedicine has an enormous potential to improve the precision of specific therapy, the ability to efficiently deliver these materials to regions of disease in vivo remains limited. In this study, we describe analyses of (AuNPs)-mmi cellular uptake via fluorescence microscopy and its effects on H3K4 and H3K9 histone dimethylation. Specifically, we studied the level of H3K4 dimethylation, serving the role of an epigenetic marker of euchromatin, and of H3K9 dimethylation as a marker of transcriptional repression, in four different cell lines. We analyzed histone di-methyl-H3K4 and di-methyl-H3K9 using either variable concentrations of nanoparticles or variable time points after cellular uptake. The observed methylation effects decreased consistently with decreasing (AuNPs)-mmi concentrations. Fluorescence microscopy and a binarization algorithm based on a thresholding process with RGB input images demonstrated the continued presence of (AuNPs)-mmi in cells at the lowest concentration used. Furthermore, our results show that the treated cell line used is able to rescue the untreated cell phenotype.
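The binarization step described above can be illustrated with a minimal thresholding sketch for RGB input; the luma weights and the bright-outlier threshold rule are assumptions, not the authors' algorithm.

```python
# Hedged sketch: threshold-based binarization of an RGB fluorescence image.
import numpy as np

def binarize_rgb(rgb: np.ndarray, threshold=None) -> np.ndarray:
    """rgb: (H, W, 3) uint8 fluorescence image. Returns a boolean mask of
    bright (particle-associated) pixels."""
    # Collapse RGB to intensity with standard luma weights (an assumption).
    gray = rgb.astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    if threshold is None:
        threshold = gray.mean() + 2.0 * gray.std()  # assumed bright-outlier rule
    return gray > threshold
```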
Detection of Aspens Using High Resolution Aerial Laser Scanning Data and Digital Aerial Images
Säynäjoki, Raita; Packalén, Petteri; Maltamo, Matti; Vehmas, Mikko; Eerikäinen, Kalle
2008-01-01
The aim was to use high resolution Aerial Laser Scanning (ALS) data and aerial images to detect European aspen (Populus tremula L.) from among other deciduous trees. The field data consisted of 14 sample plots of 30 m × 30 m size located in the Koli National Park in North Karelia, Eastern Finland. A Canopy Height Model (CHM) was interpolated from the ALS data with a pulse density of 3.86/m², low-pass filtered using Height-Based Filtering (HBF) and binarized to create the mask needed to separate the ground pixels from the canopy pixels within individual areas. Watershed segmentation was applied to the low-pass filtered CHM in order to create preliminary canopy segments, from which the non-canopy elements were extracted to obtain the final canopy segmentation, i.e. the ground mask was analysed against the canopy mask. A manual classification of aerial images was employed to separate the canopy segments of deciduous trees from those of coniferous trees. Finally, linear discriminant analysis was applied to the correctly classified canopy segments of deciduous trees to classify them into segments belonging to aspen and those belonging to other deciduous trees. The independent variables used in the classification were obtained from the first-pulse ALS point data. The accuracy of discrimination between aspen and other deciduous trees was 78.6%. The independent variables in the classification function were the proportion of vegetation hits, the standard deviation of pulse heights, the accumulated intensity at the 90th percentile, and the proportion of laser points reflected at the 60th height percentile. The accuracy of classification corresponded to the validation results of earlier ALS-based studies on the classification of individual deciduous trees into tree species. PMID:27873799
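The discrimination step maps naturally onto a short sketch using scikit-learn's linear discriminant analysis as a stand-in for the paper's classification function; feature extraction and the training data are assumed to exist upstream, and the names are illustrative.

```python
# Sketch: LDA over the four first-pulse ALS features named in the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_deciduous_segments(X_train: np.ndarray, y_train: np.ndarray,
                                X_test: np.ndarray) -> np.ndarray:
    """Columns of X (assumed): proportion of vegetation hits, std of pulse
    heights, accumulated intensity at the 90th percentile, proportion of
    points reflected at the 60th height percentile.
    y: 1 = aspen, 0 = other deciduous."""
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    return lda.predict(X_test)
```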
Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features
NASA Astrophysics Data System (ADS)
Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija
2017-04-01
We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.
Classification of document page images based on visual similarity of layout structures
NASA Astrophysics Data System (ADS)
Shin, Christian K.; Doermann, David S.
1999-12-01
Searching for documents by their type or genre is a natural way to enhance the effectiveness of document retrieval. The layout of a document contains a significant amount of information that can be used to classify a document's type in the absence of domain-specific models. A document type or genre can be defined by the user based primarily on layout structure. Our classification approach is based on 'visual similarity' of the layout structure, building a supervised classifier given examples of the class. We use image features such as the percentages of text and non-text (graphics, image, table, and ruling) content regions, column structures, variations in the point size of fonts, the density of content area, and various statistics on features of connected components, which can be derived from class samples without class knowledge. In order to obtain class labels for training samples, we conducted a user relevance test where subjects ranked UW-I document images with respect to the 12 representative images. We implemented our classification scheme using OC1, a decision tree classifier, and report our findings.
Acoustooptic Processing of Two Dimensional Signals Using Temporal and Spatial Integration
1989-05-12
Demetri Psaltis, John Hong, Scott Hudson, Jeff Yu, Fai Mok, Mark Neifeld, Nabeel Riza, and Dave Brady. Grant AFOSR-85-0332, submitted to Dr. Lee Giles, Air Force Office of... (Most of this record is report front matter; only fragments of the abstract are recoverable.) In addition, we examine the capacity when the filter is binarized. Vector-matrix multipliers are fundamental components of many signal processing systems.
IHE cross-enterprise document sharing for imaging: design challenges
NASA Astrophysics Data System (ADS)
Noumeir, Rita
2006-03-01
Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Record (EHR). This profile proposes an architecture based on a central Registry that holds metadata information describing published Documents residing in one or multiple Documents Repositories. As medical images constitute important information of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation effort of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.
Imaging Systems: What, When, How.
ERIC Educational Resources Information Center
Lunin, Lois F.; And Others
1992-01-01
The three articles in this special section on document image files discuss intelligent character recognition, including comparison with optical character recognition; selection of displays for document image processing, focusing on paperlike displays; and imaging hardware, software, and vendors, including guidelines for system selection. (MES)
Digitization of medical documents: an X-Windows application for fast scanning.
Muñoz, A; Salvador, C H; Gonzalez, M A; Dueñas, A
1992-01-01
This paper deals with the digitization, using a commercial scanner, of medical documents as still images for introduction into a computer-based information system. Document management involves storage, editing and transmission. This task has usually been approached from the perspective of the difficulties posed by radiologic images because of their indisputable qualitative and quantitative significance. However, healthcare activities require the management of many other types of documents and involve the requirements of numerous users. One key to document management will be the availability of a digitizer able to deal with the greatest possible number of different types of documents. This paper describes the relevant aspects of documents and the technical specifications that digitizers must fulfill. The concept of document type is introduced as the ideal set of digitizing parameters for a given document. The use of document type parameters can drastically reduce the time the user spends in scanning sessions. An application based on Unix, X-Windows and OSF/Motif, with a GPIB interface, implemented around the document type concept, is presented. Finally, the results of the evaluation of the application are presented, focusing on the user interface, as well as on the viewing of color images in an X-Windows environment and the use of lossy algorithms in the compression of medical images.
Medication order communication using fax and document-imaging technologies.
Simonian, Armen I
2008-03-15
The implementation of fax and document-imaging technology to electronically communicate medication orders from nursing stations to the pharmacy is described. The evaluation of a commercially available pharmacy order imaging system to improve order communication and to make document retrieval more efficient led to the selection and customization of a system already licensed and used in seven affiliated hospitals. The system consisted of existing fax machines and document-imaging software that would capture images of written orders and send them from nursing stations to a central database server. Pharmacists would then retrieve the images and enter the orders in an electronic medical record system. The pharmacy representatives from all seven hospitals agreed on the configuration and functionality of the custom application. A 30-day trial of the order imaging system was successfully conducted at one of the larger institutions. The new system was then implemented at the remaining six hospitals over a period of 60 days. The transition from a paper-order system to electronic communication via a standardized pharmacy document management application tailored to the specific needs of this health system was accomplished. A health system with seven affiliated hospitals successfully implemented electronic communication and the management of inpatient paper-chart orders by using faxes and document-imaging technology. This standardized application eliminated the problems associated with the hand delivery of paper orders, the use of the pneumatic tube system, and the printing of traditional faxes.
Web-based document and content management with off-the-shelf software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuster, J
1999-03-18
This, then, is the current status of the project: since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe PhotoShop and Illustrator files in their native formats.
Girelli, Carlos Magno Alves
2016-05-01
Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared between themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Several image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal and tonal reversal. A discussion of the detection of lateral reversals is presented in this article, as well as a suggestion to reduce errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in the three levels of detail of the ACE-V methodology, some visual features of the fingerprint images can help to identify sources of forgeries and modus operandi, such as: limits and image contours, flaws in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Therefore, fingerprints have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2017-04-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; the evaluation methods usually used in academic studies do not scale to large datasets. The method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated with new image data and with improved algorithm outcomes. The method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination
NASA Astrophysics Data System (ADS)
Zhong, Xin; Wang, Xinwei; Zhou, Yan
2018-01-01
A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with the Video Spectral Comparator (VSC), which can examine fluorescence intensity images only, TFLI can detect falsification or alteration in questioned documents. The TFLI system can enhance weak signals by an accumulation method. Two fluorescence intensity images separated by a delay time tg are acquired by the ICCD and fitted into a fluorescence lifetime image. The lifetimes of the fluorescent substances are represented by different colors, which makes it easy to detect the fluorescent substances and the sequence of handwriting. This proves that TFLI is a powerful tool for forensic document examination. Further advantages of the TFLI system are ns-scale timing precision and a powerful capture capability.
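Under the common rapid-lifetime-determination reading of such a two-gate scheme, a mono-exponential decay sampled at two gate delays gives tau = delta_t / ln(I1/I2) per pixel. The sketch below is a minimal Python illustration of that assumption only; the function name, gate delays, and test values are hypothetical and not taken from the paper.

```python
import numpy as np

def two_gate_lifetime(i1, i2, delay_ns, eps=1e-6):
    """Rapid lifetime determination from two gated intensity images.

    Assumes a mono-exponential decay and equal gate widths, so that
    I1 / I2 = exp(delay / tau)  =>  tau = delay / ln(I1 / I2).
    i1, i2 : gated intensity images (i1 at the earlier delay)
    delay_ns : separation between the two ICCD gate delays, in ns
    """
    ratio = np.clip(i1.astype(float) / np.maximum(i2, eps), 1.0 + eps, None)
    return delay_ns / np.log(ratio)

# Pixels with similar brightness but different lifetime separate cleanly:
rng = np.random.default_rng(0)
tau_true = np.where(rng.random((64, 64)) > 0.5, 2.0, 6.0)  # two "inks", in ns
i1 = np.exp(-1.0 / tau_true)                               # gate delayed 1 ns
i2 = np.exp(-4.0 / tau_true)                               # gate delayed 4 ns
tau_map = two_gate_lifetime(i1, i2, delay_ns=3.0)          # recovers 2 and 6 ns
```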
Scalable ranked retrieval using document images
NASA Astrophysics Data System (ADS)
Jain, Rajiv; Oard, Douglas W.; Doermann, David
2013-12-01
Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real-world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content-based image retrieval finds a substantial number of documents that text retrieval misses, and that, when used as a basis for relevance feedback, it can yield improvements in retrieval effectiveness.
Global and Local Features Based Classification for Bleed-Through Removal
NASA Astrophysics Data System (ADS)
Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin
2016-12-01
The text on one side of a historical document often seeps through and appears on the other side, so bleed-through is a common problem in historical document images. It makes the document images hard to read and the text difficult to recognize. To improve image quality and readability, the bleed-through has to be removed. This paper proposes a bleed-through removal method based on global and local feature extraction. A Gaussian mixture model is used to obtain the global features of the images. Local features are extracted from the patch around each pixel. Then, an extreme learning machine classifier is used to classify the scanned images into the foreground text and the bleed-through component. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
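As a rough illustration of the pixel-classification stage, the sketch below implements patch-based local features and a minimal extreme learning machine (random hidden layer plus a closed-form ridge readout). The GMM global features are omitted for brevity, and all names and parameter choices are illustrative assumptions, not the authors' code.

```python
import numpy as np

def extract_patch_features(img, r=2):
    """Local features: the raw (2r+1) x (2r+1) grey-level patch per pixel."""
    padded = np.pad(img.astype(float), r, mode="reflect")
    h, w = img.shape
    feats = np.stack([padded[y:y + h, x:x + w]
                      for y in range(2 * r + 1) for x in range(2 * r + 1)],
                     axis=-1)
    return feats.reshape(-1, (2 * r + 1) ** 2)

class ELM:
    """Minimal extreme learning machine: random hidden layer + ridge readout."""
    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form ridge regression for the output weights.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ y)
        return self

    def predict(self, X):
        # 1 = foreground text, 0 = bleed-through/background (binary labels).
        return (self._hidden(X) @ self.beta > 0.5).astype(int)
```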
New public dataset for spotting patterns in medieval document images
NASA Astrophysics Data System (ADS)
En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent
2017-01-01
With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools that allow us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide first results obtained with our baseline system on this new dataset; they show that there is room for improvement, which should encourage researchers in the document image analysis community to design new systems and submit improved results.
Establishing binarity amongst Galactic RV Tauri stars with a disc⋆
NASA Astrophysics Data System (ADS)
Manick, Rajeev; Van Winckel, Hans; Kamath, Devika; Hillen, Michel; Escorza, Ana
2017-01-01
Context. Over the last few decades it has become increasingly evident that binarity is a prevalent phenomenon amongst RV Tauri stars with a disc. This study contributes to understanding the role of binarity in the late stages of stellar evolution. Aims: In this paper we determine the binary status of six Galactic RV Tauri stars, namely DY Ori, EP Lyr, HP Lyr, IRAS 17038-4815, IRAS 09144-4933, and TW Cam, which are surrounded by a dusty disc. The radial velocities are contaminated by high-amplitude pulsations. We disentangle the pulsations from the orbital signal in order to determine accurate orbital parameters. We also place the stars on the HR diagram, thereby establishing their evolutionary nature. Methods: We used high-resolution spectroscopic time series obtained with the HERMES and CORALIE spectrographs mounted on the Flemish Mercator and Swiss Leonhard Euler telescopes, respectively. An updated ASAS/AAVSO photometric time series is analysed to complement the spectroscopic pulsation search and to clean the radial velocities of the pulsations. The pulsation-cleaned orbits are fitted with a Keplerian model to determine the spectroscopic orbital parameters. We also calibrated a PLC relationship using type II Cepheids in the LMC and applied the relation to our Galactic sample to obtain accurate distances and hence luminosities. Results: All six of the Galactic RV Tauri stars included in this study are binaries, with orbital periods ranging between 650 and 1700 days and eccentricities between 0.2 and 0.6. The mass functions range from 0.08 to 0.55 M⊙, which points to an unevolved low-mass companion. In the photometric time series we detect a long-term variation on the timescale of the orbital period for IRAS 17038-4815, IRAS 09144-4933, and TW Cam. Our derived stellar luminosities indicate that all except DY Ori and EP Lyr are post-AGB stars. DY Ori and EP Lyr are likely examples of the recently discovered dusty post-RGB stars. Conclusions: The orbital parameters strongly suggest that the evolution of these stars was interrupted by a strong phase of binary interaction during or even prior to the AGB. The observed eccentricities and long orbital periods among these stars provide a challenge to the standard theory of binary evolution. Based on observations made with the Flemish Mercator Telescope and the Swiss Leonhard Euler Telescope. Radial velocity tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/597/A129
NASA Astrophysics Data System (ADS)
Zorec, J.; Frémat, Y.; Domiciano de Souza, A.; Royer, F.; Cidale, L.; Hubert, A.-M.; Semaan, T.; Martayan, C.; Cochetti, Y. R.; Arias, M. L.; Aidelman, Y.; Stee, P.
2016-11-01
Context. Among intermediate-mass and massive stars, Be stars are the fastest rotators on the main sequence (MS) and, as such, are a cornerstone for validating models of the structure and evolution of rotating stars. Several phenomena, however, induce under- or overestimations of either their apparent Vsini or their true velocity V. Aims: In the present contribution we aim at obtaining distributions of true rotational velocities corrected for systematic effects induced by the rapid rotation itself, macroturbulent velocities, and binarity. Methods: We study a set of 233 Be stars by assuming they have inclination angles distributed at random. We critically discuss the methods of Cranmer and of Lucy-Richardson, which enable us to transform a distribution of projected velocities into a distribution of true rotational velocities, where the gravitational darkening effect on the Vsini parameter is considered in different ways. We conclude that the iterative Lucy-Richardson algorithm best suits the purposes of the present work, but it requires a thorough determination of the stellar fundamental parameters. Results: We conclude that once the mode of the ratios of the true velocities of Be stars attains the value V/Vc ≃ 0.77 in the main-sequence (MS) evolutionary phase, it remains unchanged up to the end of the MS lifespan. The statistical corrections applied to the distribution of ratios V/Vc for overestimations of Vsini due to macroturbulent motions and binarity produce a shift of this distribution toward lower values of V/Vc when Be stars in all MS evolutionary stages are considered together. The mode of the final distribution is at V/Vc ≃ 0.65. The final distribution is nearly symmetric and shows that the Be phenomenon is characterized by a wide range of true velocity ratios, 0.3 ≲ V/Vc ≲ 0.95. It thus suggests that the probability that Be stars are critical rotators is extremely low. Conclusions: The corrections attempted in the present work represent an initial step toward inferring indications about the nature of Be-star surface rotation, which will be studied in the second paper of this series. Full Tables 1 and 4 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/A132
Old document image segmentation using the autocorrelation function and multiresolution analysis
NASA Astrophysics Data System (ADS)
Mehri, Maroua; Gomez-Krämer, Petra; Héroux, Pierre; Mullot, Rémy
2013-01-01
Recent progress in the digitization of heterogeneous collections of ancient documents has raised new challenges in information retrieval in digital libraries and document layout analysis. Therefore, in order to control the quality of historical document image digitization and to meet the need for a characterization of their content using intermediate-level metadata (between image and document structure), we propose a fast automatic layout segmentation of old document images based on five descriptors. Those descriptors, based on the autocorrelation function, are obtained by multiresolution analysis and used afterwards in a specific clustering method. The method proposed in this article has the advantage that it is performed without any hypothesis on the document structure, either about the document model (physical structure) or the typographical parameters (logical structure). It is also parameter-free, since it automatically adapts to the image content. In this paper, firstly, we detail our proposal to characterize the content of old documents by extracting autocorrelation features in the different areas of a page and at several resolutions. Then, we show that it is possible to automatically find the homogeneous regions defined by similar indices of autocorrelation, without knowledge of the number of clusters, using adapted hierarchical ascendant classification and consensus clustering approaches. To assess our method, we apply our algorithm to 316 old document images spanning six centuries (1200-1900) of French history, in order to demonstrate the performance of our proposal in terms of segmentation and characterization of heterogeneous corpus content. Moreover, we define a new evaluation metric, the homogeneity measure, which aims at evaluating the segmentation and characterization accuracy of our methodology. We obtain a mean homogeneity accuracy of 85%. These results help to represent a document by a hierarchy of layout structure and content, and to define one or more signatures for each page, on the basis of a hierarchical representation of homogeneous blocks and their topology.
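A minimal sketch of the feature-extraction-plus-clustering idea follows: each window's 2D autocorrelation is computed via the Wiener-Khinchin theorem and summarized by a few lag values at two resolutions, then the windows are grouped by agglomerative clustering. The descriptor choices, window size, and fixed cluster count are illustrative simplifications; the paper's actual five descriptors and its parameter-free consensus clustering are not reproduced here.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def autocorr_descriptors(window):
    # Normalized 2D autocorrelation via FFT (Wiener-Khinchin); peak at zero lag.
    w = window - window.mean()
    ac = np.fft.ifft2(np.abs(np.fft.fft2(w)) ** 2).real
    peak = ac[0, 0] + 1e-12
    ac = np.fft.fftshift(ac) / peak
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    # A handful of directional correlation values as a tiny descriptor vector.
    return np.array([ac[cy, cx + 1], ac[cy + 1, cx], ac[cy + 1, cx + 1],
                     ac[cy, cx + 4], ac[cy + 4, cx]])

def segment_page(page, win=64, n_clusters=3):
    feats, coords = [], []
    for y in range(0, page.shape[0] - win + 1, win):
        for x in range(0, page.shape[1] - win + 1, win):
            block = page[y:y + win, x:x + win].astype(float)
            # Two resolutions: the block itself and a 2x-downsampled copy.
            feats.append(np.concatenate([autocorr_descriptors(block),
                                         autocorr_descriptors(block[::2, ::2])]))
            coords.append((y, x))
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(np.array(feats))
    return coords, labels   # per-window positions and region labels
```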
Word spotting for handwritten documents using Chamfer Distance and Dynamic Time Warping
NASA Astrophysics Data System (ADS)
Saabni, Raid M.; El-Sana, Jihad A.
2011-01-01
A large number of handwritten historical documents are held in libraries around the world. The desire to access, search, and explore these documents paves the way for a new age of knowledge sharing and promotes collaboration and understanding between human societies. Currently, the indexes for these documents are generated manually, which is very tedious and time consuming. Results produced by state-of-the-art techniques for converting complete images of handwritten documents into textual representations are not yet sufficient. Therefore, word-spotting methods have been developed to archive and index images of handwritten documents in order to enable efficient searching within documents. In this paper, we present a new matching algorithm to be used in word-spotting tasks for historical Arabic documents. We present a novel algorithm based on the Chamfer Distance to compute the similarity between shapes of word-parts. Matching results are used to cluster images of Arabic word-parts into different classes using the Nearest Neighbor rule. To compute the distance between two word-part images, the algorithm subdivides each image into equal-sized slices (windows). A modified version of the Chamfer Distance, incorporating geometric gradient features and distance transform data, is used as a similarity distance between the different slices. Finally, the Dynamic Time Warping (DTW) algorithm is used to measure the distance between two images of word-parts. By using DTW we enable our system to cluster similar word-parts even though they are transformed non-linearly due to the nature of handwriting. We tested our implementation of the presented methods on various documents in different writing styles, taken from the Juma'a Al Majid Center in Dubai, and obtained encouraging results.
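The slice-then-warp matching described above can be sketched in a few lines: a symmetric chamfer-style distance between binary slices (a simplification of the paper's modified Chamfer Distance, which also incorporates gradient features) plugged into a standard DTW recursion. Function names and the length normalization are illustrative choices.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_slice_dist(slice_a, slice_b):
    """Symmetric chamfer-style distance between two binary (ink=True) slices:
    mean distance from each ink pixel of one slice to the nearest ink pixel
    of the other."""
    dt_b = distance_transform_edt(~slice_b)   # distance to nearest ink in b
    dt_a = distance_transform_edt(~slice_a)
    d_ab = dt_b[slice_a].mean() if slice_a.any() else 0.0
    d_ba = dt_a[slice_b].mean() if slice_b.any() else 0.0
    return 0.5 * (d_ab + d_ba)

def dtw_distance(slices_a, slices_b, slice_dist=chamfer_slice_dist):
    """DTW over two sequences of vertical slices of word-part images."""
    n, m = len(slices_a), len(slices_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = slice_dist(slices_a[i - 1], slices_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalized warping cost
```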
Ensemble methods with simple features for document zone classification
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing
2012-01-01
Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via feature extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classification of these blocks. We present a comparative evaluation of three ensemble-based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images randomly selected from the tobacco legacy document collection. The results verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
Caetano dos Santos, Florentino Luciano; Skottman, Heli; Juuti-Uusitalo, Kati; Hyttinen, Jari
2016-01-01
Aims A fast, non-invasive and observer-independent method to analyze the homogeneity and maturity of human pluripotent stem cell (hPSC) derived retinal pigment epithelial (RPE) cells is warranted to assess the suitability of hPSC-RPE cells for implantation or in vitro use. The aim of this work was to develop and validate methods to create ensembles of state-of-the-art texture descriptors and to provide a robust classification tool to separate three different maturation stages of RPE cells by using phase contrast microscopy images. The same methods were also validated on a wide variety of biological image classification problems, such as histological or virus image classification. Methods For image classification we used different texture descriptors, descriptor ensembles and preprocessing techniques. Also, three new methods were tested. The first approach was an ensemble of preprocessing methods, to create an additional set of images. The second was the region-based approach, where saliency detection and wavelet decomposition divide each image in two different regions, from which features were extracted through different descriptors. The third method was an ensemble of Binarized Statistical Image Features, based on different sizes and thresholds. A Support Vector Machine (SVM) was trained for each descriptor histogram and the set of SVMs combined by sum rule. The accuracy of the computer vision tool was verified in classifying the hPSC-RPE cell maturation level. Dataset and Results The RPE dataset contains 1862 subwindows from 195 phase contrast images. The final descriptor ensemble outperformed the most recent stand-alone texture descriptors, obtaining, for the RPE dataset, an area under ROC curve (AUC) of 86.49% with the 10-fold cross validation and 91.98% with the leave-one-image-out protocol. The generality of the three proposed approaches was ascertained with 10 more biological image datasets, obtaining an average AUC greater than 97%. Conclusions Here we showed that the developed ensembles of texture descriptors are able to classify the RPE cell maturation stage. Moreover, we proved that preprocessing and region-based decomposition improves many descriptors’ accuracy in biological dataset classification. Finally, we built the first public dataset of stem cell-derived RPE cells, which is publicly available to the scientific community for classification studies. The proposed tool is available at https://www.dei.unipd.it/node/2357 and the RPE dataset at http://www.biomeditech.fi/data/RPE_dataset/. Both are available at https://figshare.com/s/d6fb591f1beb4f8efa6f. PMID:26895509
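The fusion step of the ensemble above, one SVM per descriptor histogram combined by the sum rule, can be illustrated as follows. Summing calibrated class probabilities is one common reading of the sum rule; all names here are assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

def fit_descriptor_ensemble(histograms_per_descriptor, labels):
    """Train one SVM per texture descriptor; histograms_per_descriptor is a
    list of (n_samples, n_bins) arrays, one entry per descriptor."""
    return [SVC(kernel="rbf", probability=True).fit(H, labels)
            for H in histograms_per_descriptor]

def predict_sum_rule(svms, histograms_per_descriptor):
    """Combine the per-descriptor SVMs by the sum rule over class scores."""
    scores = sum(svm.predict_proba(H)
                 for svm, H in zip(svms, histograms_per_descriptor))
    return np.argmax(scores, axis=1)   # fused maturation-stage prediction
```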
Optical joint transform correlation on the DMD. [deformable mirror device
NASA Technical Reports Server (NTRS)
Knopp, Jerome; Juday, Richard D.
1989-01-01
An initial experimental investigation of the deformable mirror device (DMD) in a joint optical transform correlation is reported. The inverted cloverleaf version of the DMD, in which form the DMD is phase-mostly but of limited phase range, is used. Binarized joint Fourier transforms were calculated for similar and dissimilar objects and written onto the DMD. The inverse Fourier transform was performed in a diffraction order for which the DMD shows phase-mostly modulation. Matched test objects produced sharp correlation peaks; distinct objects did not. Further studies are warranted and are outlined.
VizieR Online Data Catalog: Binarity in planetary nebula central stars (De Marco+ 2013)
NASA Astrophysics Data System (ADS)
De Marco O.; Passy, J.-C.; Frew, D. J.; Moe, M.; Jacoby, G. H.
2014-01-01
The sample presented here consists of 30 central stars of PNe, selected solely on the basis of their low PN surface brightness (the radius of the PN is larger than ~25 arcsec in most cases) as well as the faint V magnitudes of their central stars. The observations were acquired during eight nights between 2007 October 30 and November 6 at the 2.1-m telescope at the Kitt Peak National Observatory. However, the data from nights 2 and 8 were not photometric. (5 data files).
NASA Astrophysics Data System (ADS)
Skarka, M.; Liška, J.; Dřevěný, R.; Sódor, Á.; Barnes, T.; Kolenberg, K.
2018-04-01
We comment on short- and long-term pulsation period variations of Z CVn, a classical RR Lyrae star with the Blazhko effect. Z CVn shows a cyclic-like O-C diagram that could be interpreted as a consequence of binarity through the light-travel-time effect. We show that this hypothesis is false and that the observed long-term period variations must be caused by some effect intrinsic to the star. We also show that the Blazhko period is not simply anti-correlated with the long-term period variations, as was suggested by previous authors.
Imaging the elusive H-poor gas in the high-ADF planetary nebula NGC 6778
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Rojas, Jorge; Corradi, Romano L. M.; Jones, David
We present the first direct image of the high-metallicity gas component in a planetary nebula (NGC 6778), taken with the OSIRIS Blue Tunable Filter centered on the O ii λ 4649+50 Å optical recombination lines (ORLs) at the 10.4 m Gran Telescopio Canarias. We show that the emission of these faint O ii ORLs is concentrated in the central parts of the planetary nebula and is not spatially coincident either with emission coming from the bright [O iii] λ 5007 Å collisionally excited line (CEL) or with the bright H α recombination line. From monochromatic emission line maps taken with VIMOS at the 8.2 m Very Large Telescope, we find that the spatial distribution of the emission from the auroral [O iii] λ 4363 line resembles that of the O ii ORLs but differs from the nebular [O iii] λ 5007 CEL distribution, implying a temperature gradient inside the planetary nebula. The centrally peaked distribution of the O ii emission and the differences with the [O iii] and H i emission profiles are consistent with the presence of an H-poor gas whose origin may be linked to the binarity of the central star. However, determination of the spatial distribution of the ORLs and CELs in other PNe and a comparison of their dynamics are needed to further constrain the geometry and ejection mechanism of the metal-rich (H-poor) component and hence understand the origin of the abundance discrepancy problem in PNe.
Ng, Danny Siu-Chun; Bakthavatsalam, Malini; Lai, Frank Hiu-Ping; Cheung, Carol Yim-Lui; Cheung, Gemmy Chu-Ming; Tang, Fang Yao; Tsang, Chi Wai; Lai, Timothy Yuk-Yau; Wong, Tien Yin; Brelén, Mårten Erik
2017-02-01
The purpose of this study was to classify exudative maculopathy by the presence of pachyvessels on en face swept-source optical coherence tomography (SSOCT). Consecutive patients with signs of exudative maculopathy underwent SSOCT, fluorescein and indocyanine green angiography (ICGA), ultra-widefield fundus color photography, and autofluorescence examinations. Images were analyzed in a masked fashion by two sets of four examiners in different sessions for (1) the presence of pachyvessels in en face OCT and (2) features of exudative maculopathy in conventional imaging modalities. Quantitative data obtained were subfoveal choroidal thickness (SFCT) and choroidal vascularity index (CVI), which was the ratio of choroidal vessel lumen area to a specified choroidal area from binarized cross-sectional OCT scans. Pachyvessels were observed in 38 (52.1%) of 73 eyes. The pachyvessels group was associated with younger age (69.1 ± 9.4 years, odds ratio [OR] = 0.95, 95% confidence interval [95% CI] = 0.90-0.97, P = 0.04), presence of polypoidal lesions (OR = 3.27, 95% CI = 1.24-8.62, P = 0.01), increased SFCT (OR = 1.08, 95% CI = 1.02-1.14, P < 0.01), and increased CVI (65.4 ± 5.3, OR = 1.12, 95% CI = 1.02-1.23, P = 0.01). In multivariate regression, CVI significantly correlated with pachyvessels (OR = 1.24, 95% CI = 1.03-1.55, P = 0.04). Exudative maculopathy could be classified based on differences in choroidal vasculature morphology. The current results imply that choroidal hemodynamics may be relevant to the variable natural history and treatment response in neovascular AMD and polypoidal choroidal vasculopathy.
Mase, Tomoko; Ishibazawa, Akihiro; Nagaoka, Taiji; Yokota, Harumasa; Yoshida, Akitoshi
2016-07-01
We quantitatively analyzed the features of a radial peripapillary capillary (RPC) network visualized using wide-field montage optical coherence tomography (OCT) angiography in healthy human eyes. Twenty eyes of 20 healthy subjects were recruited. En face 3 × 3-mm OCT angiograms of multiple locations in the posterior pole were acquired using the RTVue XR Avanti, and wide-field montage images of the RPC were created. To evaluate the RPC density, the montage images were binarized and skeletonized. The correlation between the RPC density and the retinal nerve fiber layer (RNFL) thickness measured by an OCT circle scan was investigated. The RPC at the temporal retina was detected as far as 7.6 ± 0.7 mm from the edge of the optic disc but not around the perifoveal area within 0.9 ± 0.1 mm of the fovea. Capillary-free zones beside the first branches of the arterioles were significantly (P < 0.0001) narrower than those beside the second ones. The RPC densities at 0.5, 2.5, and 5 mm from the optic disc edge were 13.6 ± 0.8, 11.9 ± 0.9, and 10.4 ± 0.9 mm⁻¹. The RPC density also was correlated significantly (r = 0.64, P < 0.0001) with the RNFL thickness, with the greatest density in the inferotemporal region. Montage OCT angiograms can visualize expansion of the RPC network. The RPC is present in the superficial peripapillary retina in proportion to the RNFL thickness, supporting the idea that the RPC may be the vascular network primarily responsible for RNFL nourishment.
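The binarize-skeletonize-density step lends itself to a compact sketch. Otsu thresholding and a one-pixel-per-skeleton-point length estimate are simplifying assumptions here, not the authors' exact processing.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def capillary_density(angiogram, mm_per_pixel):
    """Skeleton-based vessel density in mm^-1: total skeleton length divided
    by the field-of-view area, after Otsu binarization of the en face image."""
    binary = angiogram > threshold_otsu(angiogram)
    skeleton = skeletonize(binary)
    length_mm = skeleton.sum() * mm_per_pixel       # ~1 pixel per skeleton step
    area_mm2 = angiogram.size * mm_per_pixel ** 2
    return length_mm / area_mm2                     # mm / mm^2 = mm^-1
```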
Almasi, Sepideh; Xu, Xiaoyin; Ben-Zvi, Ayal; Lacoste, Baptiste; Gu, Chenghua; Miller, Eric L
2015-02-01
A novel approach is proposed to determine the global topological structure of a microvasculature network from noisy and low-resolution fluorescence microscopy data without requiring detailed segmentation of the vessel structure. The method is most appropriate for problems where the tortuosity of the network is relatively low, and proceeds by directly computing a piecewise linear approximation to the vasculature skeleton through the construction of a graph in three dimensions whose edges represent the skeletal approximation and whose vertices are located at Critical Points (CPs) on the microvasculature. The CPs are defined as vessel junctions or locations of relatively large curvature along the centerline of a vessel. Our method consists of two phases. First, we provide a CP detection technique that, for junctions in particular, does not require any a priori geometric information such as direction or degree. Second, connectivity between detected nodes is determined via the solution of a Binary Integer Program (BIP) whose variables determine whether a potential edge between nodes is or is not included in the final graph. The utility function in this problem reflects both intensity-based and structural information along the path connecting the two nodes. Qualitative and quantitative results confirm the usefulness and accuracy of this method. This approach provides a means of correctly capturing the connectivity patterns in vessels that are missed by more traditional segmentation and binarization schemes because of imperfections in the images, which manifest as dim or broken vessels.
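The edge-selection BIP can be illustrated with an off-the-shelf solver. In the sketch below, the degree bounds and the utility dictionary are hypothetical stand-ins for the paper's intensity- and structure-based utility function, and are not its actual constraints.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

def connect_critical_points(nodes, candidate_edges, utility):
    """Choose which candidate edges join the critical points by solving a
    binary integer program: maximize total edge utility subject to simple
    degree bounds (illustrative constraints only)."""
    prob = LpProblem("vessel_topology", LpMaximize)
    x = {e: LpVariable("edge_%d" % i, cat="Binary")
         for i, e in enumerate(candidate_edges)}
    prob += lpSum(utility[e] * x[e] for e in candidate_edges)
    for n in nodes:
        incident = [x[e] for e in candidate_edges if n in e]
        prob += lpSum(incident) >= 1   # every critical point stays connected
        prob += lpSum(incident) <= 3   # cap degree to limit spurious branches
    prob.solve()
    return [e for e in candidate_edges if x[e].value() == 1]

# Toy example: nodes 0..3, candidate edges, utilities from image evidence.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
util = {(0, 1): 5.0, (1, 2): 4.0, (2, 3): 4.5, (0, 2): 1.0, (1, 3): 0.5}
print(connect_critical_points(range(4), edges, util))
```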
Page segmentation using script identification vectors: A first look
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Cannon, M.; Kelly, P.
1997-07-01
Document images in which different scripts, such as Chinese and Roman, appear on a single page pose a problem for optical character recognition (OCR) systems. This paper explores the use of script identification vectors in the analysis of multilingual document images. A script identification vector is calculated for each connected component in a document. The vector expresses the closest distance between the component and templates developed for each of thirteen scripts, including Arabic, Chinese, Cyrillic, and Roman. The authors calculate the first three principal components within the resulting thirteen-dimensional space for each image. By mapping these components to red, green, and blue, they can visualize the information contained in the script identification vectors. The visualization of several multilingual images suggests that the script identification vectors can be used to segment images into script-specific regions as large as several paragraphs or as small as a few characters. The visualized vectors also reveal distinctions within scripts, such as font in Roman documents, and kanji vs. kana in Japanese. Results are best for documents containing highly dissimilar scripts, such as Roman and Japanese. Documents containing similar scripts, such as Roman and Cyrillic, will require further investigation.
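The PCA-to-RGB visualization step translates directly into code. The sketch below assumes the 13-dimensional template distances per connected component are already computed; the patch painting and channel scaling are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def visualize_script_vectors(vectors, coords, shape):
    """Map 13-dimensional script identification vectors to RGB.

    vectors : (n_components, 13) distances to the 13 script templates
    coords  : (n_components, 2) centroid (row, col) of each connected component
    shape   : (height, width) of the output visualization image
    """
    rgb3 = PCA(n_components=3).fit_transform(vectors)
    rgb3 -= rgb3.min(axis=0)
    rgb3 /= rgb3.max(axis=0) + 1e-12        # scale each channel into [0, 1]
    image = np.ones(shape + (3,))           # white page background
    for (r, c), color in zip(coords.astype(int), rgb3):
        image[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3] = color
    return image                            # small colored patch per component
```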
Restoring warped document images through 3D shape modeling.
Tan, Chew Lim; Zhang, Li; Zhang, Zheng; Xia, Tao
2006-02-01
Scanning a document page from a thick bound volume often results in two kinds of distortions in the scanned image, i.e., shade along the "spine" of the book and warping in the shade area. In this paper, we propose an efficient restoration method based on the discovery of the 3D shape of a book surface from the shading information in a scanned document image. From a technical point of view, this shape from shading (SFS) problem in real-world environments is characterized by 1) a proximal and moving light source, 2) Lambertian reflection, 3) nonuniform albedo distribution, and 4) document skew. Taking all these factors into account, we first build practical models (consisting of a 3D geometric model and a 3D optical model) for the practical scanning conditions to reconstruct the 3D shape of the book surface. We next restore the scanned document image using this shape based on deshading and dewarping models. Finally, we evaluate the restoration results by comparing our estimated surface shape with the real shape as well as the OCR performance on original and restored document images. The results show that the geometric and photometric distortions are mostly removed and the OCR results are improved markedly.
Requirements for a documentation of the image manipulation processes within PACS
NASA Astrophysics Data System (ADS)
Retter, Klaus; Rienhoff, Otto; Karsten, Ch.; Prince, Hazel E.
1990-08-01
This paper discusses the extent to which manipulation functions applied to images handled in PACS should be documented. After postulating an increasing amount of postprocessing features on PACS consoles, legal, educational and medical reasons for a documentation of image manipulation processes are presented. Besides legal necessities, aspects of storage capacity, response time, and potential uses determine the extent of this documentation. Is there a specific kind of manipulation function that has to be documented generally? Should the physician decide which parts of the various pathways he tries are recorded by the system? Distinguishing, for example, between reversible and irreversible functions, or between interactive and non-interactive functions, is one step towards a solution. Another step is to establish definitions for terms like "raw" and "final" image. The paper systematizes these questions and offers strategic help. The answers will have an important impact on PACS design and functionality.
Combining local scaling and global methods to detect soil pore space
NASA Astrophysics Data System (ADS)
Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.
2017-04-01
The characterization of the spatial distribution of soil pore structures is essential to obtain parameters that influence several models related to water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed for the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) in the CT soil images by applying a threshold. CT-scan images often show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, has been presented. This method creates singularity maps to measure the GV concentration at each point. The C-A method was combined with the singularity map approach (the Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods, such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly due to its ability to amplify anomalous concentrations. However, it also delineated many small pores incorrectly. In this work, we present an improved version of the Singularity-CA method that avoids this problem, essentially by combining it with the classical global methods. References: Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. Local 3D segmentation of soil pore space based on fractal properties using singularity maps. Geoderma, http://dx.doi.org/10.1016/j.geoderma.2016.11.029. Torre, Iván G., Juan C. Losada and A.M. Tarquis. Multiscaling properties of soil images. Biosystems Engineering, http://dx.doi.org/10.1016/j.biosystemseng.2016.11.006.
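For context, the two global baselines named above are easy to reproduce: Otsu's threshold ships with scikit-image, and Kapur's maximum-entropy threshold is a short loop. The sketch below is a plain-vanilla version under the assumption that pores are the dark phase of the CT slice; it does not implement the singularity-map method itself.

```python
import numpy as np
from skimage.filters import threshold_otsu

def kapur_threshold(img, nbins=256):
    """Kapur's maximum-entropy threshold: choose t maximizing the sum of the
    entropies of the grey-level distributions below and above t."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 < 1e-12 or p1 < 1e-12:
            continue
        q0 = p[:t][p[:t] > 0] / p0
        q1 = p[t:][p[t:] > 0] / p1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

def global_pore_mask(ct_slice):
    # Pores assumed darker than the solid matrix: keep voxels below both thresholds.
    return ct_slice < min(threshold_otsu(ct_slice), kapur_threshold(ct_slice))
```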
Method and apparatus for imaging and documenting fingerprints
Fernandez, Salvador M.
2002-01-01
The invention relates to a method and apparatus for imaging and documenting fingerprints. A fluorescent dye brought in intimate proximity with the lipid residues of a latent fingerprint is caused to fluoresce on exposure to light energy. The resulting fluorescing image may be recorded photographically.
NASA Astrophysics Data System (ADS)
Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu
2016-01-01
We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge coupled device (ICCD) to the deciphering of obliterated documents in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. The method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Signature detection and matching for document image retrieval.
Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2009-11-01
As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from a cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
Main image file tape description
Warriner, Howard W.
1980-01-01
This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.
A versatile entropic measure of grey level inhomogeneity
NASA Astrophysics Data System (ADS)
Piasecki, Ryszard
2009-06-01
An entropic measure for the analysis of grey level inhomogeneity (GLI) is proposed as a function of length scale. It allows us to quantify the statistical dissimilarity between the actual macrostate and the entropy-maximizing reference one. The maxima (minima) of the measure indicate the scales at which higher (lower) average grey level inhomogeneity appears compared with neighbouring scales. Even a deeply hidden statistical grey level periodicity can be detected by the equally spaced minima of the measure. The striking effect of multiple intersecting curves (MICs) of the measure has been revealed for pairs of simulated patterns that differ only in shades of grey or in symmetry properties. In turn, for evolving photosphere granulation patterns, the position of the first peak has been found to be stable in time. Interestingly, the third peak is dominant at the initial steps of the evolution. This indicates a temporary grouping of granules at a length scale that may belong to the mesogranulation phenomenon. This behaviour has similarities with that reported by Consolini, Berrilli et al. [G. Consolini, F. Berrilli, A. Florio, E. Pietropaolo, L.A. Smaldone, Astron. Astrophys. 402 (2003) 1115; F. Berrilli, D. Del Moro, S. Russo, G. Consolini, Th. Straus, Astrophys. J. 632 (2005) 677] for binarized granulation images of a different data set.
NASA Astrophysics Data System (ADS)
Jensen-Clem, Rebecca; Duev, Dmitry A.; Riddle, Reed; Salama, Maïssa; Baranec, Christoph; Law, Nicholas M.; Kulkarni, S. R.; Ramprakash, A. N.
2018-01-01
Robo-AO is an autonomous laser guide star adaptive optics (AO) system recently commissioned at the Kitt Peak 2.1 m telescope. With the ability to observe every clear night, Robo-AO at the 2.1 m telescope is the first dedicated AO observatory. This paper presents the imaging performance of the AO system in its first 18 months of operations. For a median seeing value of 1.″44, the average Strehl ratio is 4% in the i′ band. After post-processing, the contrast ratio under sub-arcsecond seeing for a 2 ≤ i′ ≤ 16 primary star is five and seven magnitudes at radial offsets of 0.″5 and 1.″0, respectively. The data processing and archiving pipelines run automatically at the end of each night. The first stage of the processing pipeline shifts and adds the rapid frame rate data using techniques optimized for different signal-to-noise ratios. The second "high-contrast" stage of the pipeline is eponymously well suited to finding faint stellar companions. Currently, a range of scientific programs, including the synthetic tracking of near-Earth asteroids, the binarity of stars in young clusters, and weather on solar system planets, are being undertaken with Robo-AO.
NASA Astrophysics Data System (ADS)
Vasilev, Aleksandr S.; Konyakhin, Igor A.; Timofeev, Alexander N.; Lashmanov, Oleg U.; Molev, Fedor V.
2015-05-01
The paper analyzes the design and metrological parameters of an electrooptic converter for monitoring linear displacements of large structures of buildings and facilities. The converter includes a base module, a processing module and a set of reference marks. The base module is the main unit of the system; it includes the receiving optical system and a CMOS photodetector array, which realizes the instrument coordinate system in which the mark coordinates are tracked in space. The algorithm is based on frame-to-frame differencing, adaptive threshold filtering, binarization, and connected-component search to detect the marks against an arbitrary contrasting background. The entire algorithm is performed within a single image-readout stage and is implemented on an FPGA. The developed and manufactured experimental model of the converter was tested in laboratory conditions on a metrological bench at a distance of 50 ± 0.2 m between the base module and the mark. During the experiment, the static characteristic was recorded by displacing the reference mark in 5 mm steps in the horizontal and vertical directions over a 400 mm range. The experiment showed a converter error not exceeding ±0.5 mm.
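The mark-detection chain named above (frame difference, adaptive threshold, binarization, connected-component search) is easy to prototype offline. The SciPy sketch below uses a mean-plus-3-sigma threshold and a minimum blob area as stand-in parameters; it is not the FPGA implementation.

```python
import numpy as np
from scipy import ndimage

def detect_marks(prev_frame, curr_frame, min_area=20):
    """Detect reference marks via frame-to-frame differencing, an adaptive
    threshold, binarization, and connected-component search."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    thresh = diff.mean() + 3.0 * diff.std()     # simple adaptive threshold
    binary = diff > thresh
    labels, n = ndimage.label(binary)           # connected components
    centroids = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    areas = ndimage.sum(binary, labels, range(1, n + 1))
    # Keep only blobs large enough to be a reference mark.
    return [c for c, a in zip(centroids, areas) if a >= min_area]
```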
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or a replay attack) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into multiple multiblocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms the other state-of-the-art methods in different media and quality metrics.
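A minimal version of the frame-difference-plus-texture pipeline is sketched below with LBP only (the paper also uses LPQ and BSIF and several multiblock levels). The grid size, histogram pooling, and final SVM settings are illustrative choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def frame_diff_lbp_features(frames, grid=(3, 3), P=8, R=1):
    """One feature vector per video: each successive-frame difference is split
    into a grid of blocks, an LBP histogram is taken per block, and the
    per-difference vectors are averaged."""
    feats = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(int) - prev.astype(int)).astype(np.uint8)
        lbp = local_binary_pattern(diff, P, R, method="uniform")
        h, w = lbp.shape
        hists = []
        for by in range(grid[0]):
            for bx in range(grid[1]):
                block = lbp[by * h // grid[0]:(by + 1) * h // grid[0],
                            bx * w // grid[1]:(bx + 1) * w // grid[1]]
                hists.append(np.histogram(block, bins=P + 2, range=(0, P + 2))[0])
        feats.append(np.concatenate(hists))
    return np.mean(feats, axis=0)

# Training on labeled videos (1 = real, 0 = attack), names hypothetical:
# X = np.stack([frame_diff_lbp_features(v) for v in videos])
# clf = SVC(kernel="rbf").fit(X, labels)
```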
Statistical Techniques for Efficient Indexing and Retrieval of Document Images
ERIC Educational Resources Information Center
Bhardwaj, Anurag
2010-01-01
We have developed statistical techniques to improve the performance of document image search systems where the intermediate step of OCR based transcription is not used. Previous research in this area has largely focused on challenges pertaining to generation of small lexicons for processing handwritten documents and enhancement of poor quality…
Restoring 2D content from distorted documents.
Brown, Michael S; Sun, Mingxuan; Yang, Ruigang; Yun, Lin; Seales, W Brent
2007-11-01
This paper presents a framework to restore the 2D content printed on documents in the presence of geometric distortion and non-uniform illumination. Compared with text-based document imaging approaches that correct distortion to a level necessary to obtain sufficiently readable text or to facilitate optical character recognition (OCR), our work targets nontextual documents where the original printed content is desired. To achieve this goal, our framework acquires a 3D scan of the document's surface together with a high-resolution image. Conformal mapping is used to rectify geometric distortion by mapping the 3D surface back to a plane while minimizing angular distortion. This conformal "deskewing" assumes no parametric model of the document's surface and is suitable for arbitrary distortions. Illumination correction is performed by using the 3D shape to distinguish content gradient edges from illumination gradient edges in the high-resolution image. Integration is performed using only the content edges to obtain a reflectance image with significantly fewer illumination artifacts. This approach makes no assumptions about light sources and their positions. The results from the geometric and photometric correction are combined to produce the final output.
One-click scanning of large-size documents using mobile phone camera
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Yang, Yuanjie
2016-07-01
Current mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of the document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU based image stitching method is adopted to generate a complete document image with high detail. No extra manual intervention is needed in the process, and experimental results show that our app performs well, demonstrating its convenience and practicability for daily use.
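One plausible reading of the key-frame step is to accumulate optical-flow displacement and cut a key frame whenever a motion budget is exceeded. The OpenCV sketch below does exactly that with Farneback flow; the budget value and the stitching call are assumptions rather than the authors' parameters.

```python
import cv2
import numpy as np

def select_key_frames(gray_frames, motion_budget=40.0):
    """Pick a new key frame whenever the accumulated median optical-flow
    displacement since the last key frame exceeds a pixel budget."""
    keys, accumulated = [0], 0.0
    for i in range(1, len(gray_frames)):
        flow = cv2.calcOpticalFlowFarneback(
            gray_frames[i - 1], gray_frames[i], None,
            0.5, 3, 15, 3, 5, 1.2, 0)
        accumulated += np.median(np.linalg.norm(flow, axis=2))
        if accumulated >= motion_budget:
            keys.append(i)
            accumulated = 0.0
    return keys

# Stitching the selected frames into one page (SCANS mode suits documents):
# status, pano = cv2.Stitcher_create(cv2.Stitcher_SCANS).stitch(
#     [color_frames[k] for k in select_key_frames(gray_frames)])
```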
NASA Astrophysics Data System (ADS)
David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro
2015-05-01
The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents, such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery, using a feature set based on the geometry and appearance of images of documents, achieves a 60% greater F1-score than a baseline random classifier.
Text extraction method for historical Tibetan document images based on block projections
NASA Astrophysics Data System (ADS)
Duan, Li-juan; Zhang, Xi-qun; Ma, Long-long; Wu, Jian
2017-11-01
Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is treated as a text-area detection and localization problem. The images are divided equally into blocks, and the blocks are filtered using connected-component category information and corner-point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
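A bare-bones block-projection pass might look like the following: the binarized page is cut into equal-width strips, and each strip's horizontal projection profile is thresholded to bound the rows that contain ink. The block count and ink-ratio threshold are illustrative, and the connected-component and corner-density filters are omitted.

```python
import numpy as np

def text_rows_from_projections(binary_page, n_blocks=8, min_ink_ratio=0.02):
    """Per vertical strip, return the (top, bottom) row bounds of the text
    area, or None if the strip carries no ink. binary_page: bool, ink=True."""
    h, w = binary_page.shape
    block_w = w // n_blocks
    text_rows = []
    for b in range(n_blocks):
        strip = binary_page[:, b * block_w:(b + 1) * block_w]
        profile = strip.sum(axis=1)                      # ink pixels per row
        rows = np.flatnonzero(profile > min_ink_ratio * block_w)
        text_rows.append((rows.min(), rows.max()) if rows.size else None)
    return text_rows
```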
A New Photometric Study of Ap and Am Stars in the Infrared
NASA Astrophysics Data System (ADS)
Chen, P. S.; Liu, J. Y.; Shan, H. G.
2017-05-01
In this paper, 426 well-known confirmed Ap and Am stars are photometrically studied in the infrared. The 2MASS, Wide-field Infrared Survey Explorer (WISE), and IRAS data are employed in the analyses. The results show that in the 1-3 μm region over 90% of Ap and Am stars have no or little infrared excess, and the near-infrared radiation from these stars is probably dominated by free-free emission. It is also shown that in the 3-12 μm region the majority of Ap and Am stars behave very similarly: in the W1-W2 (3.4-4.6 μm) region, over half of the Ap and Am stars have clear infrared excesses, possibly due to binarity, multiplicity, and/or a debris disk, but in the W2-W3 (4.6-12 μm) region they have no or little infrared excess. In the 12-22 μm region, some Ap and Am stars show infrared excesses, and the infrared radiation of these stars is probably due to free-free emission. Moreover, the probability of binarity, multiplicity, and/or a debris disk is much higher for Am stars than for Ap stars. Finally, no relations can be found between infrared colors and spectral types, either for Ap stars or for Am stars.
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
Path Searching Based Crease Detection for Large Scale Scanned Document Images
NASA Astrophysics Data System (ADS)
Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun
2017-12-01
Since large documents are usually folded for preservation, creases often appear in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. Owing to the imaging process of contactless scanners, the shading on the two sides of a crease usually differs considerably. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, the candidate crease path is obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in large document images.
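The final step can be sketched as a minimum-cost Dijkstra search for a top-to-bottom path through a cost image, where low cost would correspond to a strong response of the vertical filter on the shading image; the synthetic cost construction below is an illustrative stand-in, not the paper's pipeline.

```python
import heapq
import numpy as np

def min_cost_vertical_path(cost):
    """Dijkstra search for the cheapest top-to-bottom path through a cost
    image; low cost marks likely crease pixels (e.g. the inverted response
    of the vertical filter on the shading image). Moves go one row down,
    shifting at most one column per step."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[0, :] = cost[0, :]
    pq = [(cost[0, x], 0, x) for x in range(w)]
    heapq.heapify(pq)
    while pq:
        d, y, x = heapq.heappop(pq)
        if d > dist[y, x]:
            continue
        if y == h - 1:                       # reached bottom row: trace back
            path = [(y, x)]
            while (y, x) in prev:
                y, x = prev[(y, x)]
                path.append((y, x))
            return path[::-1]
        for dx in (-1, 0, 1):                # expand to the next row down
            nx = x + dx
            if 0 <= nx < w and d + cost[y+1, nx] < dist[y+1, nx]:
                dist[y+1, nx] = d + cost[y+1, nx]
                prev[(y+1, nx)] = (y, x)
                heapq.heappush(pq, (dist[y+1, nx], y+1, nx))

cost = np.ones((50, 40))
cost[:, 17] = 0.05                           # synthetic low-cost crease column
print(min_cost_vertical_path(cost)[:3])
```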
Ernst, E J; Speck, P M; Fitzpatrick, J J
2012-01-01
Digital photography is a valuable adjunct to documenting physical injuries after sexual assault. For a digital photograph to have high image quality, it must exhibit a high level of naturalness. Digital photo documentation has varying degrees of naturalness; for a photograph to be natural, specific technical elements must be satisfied for the viewer. No tool was previously available to rate the naturalness of digital photo documentation of female genital injuries after sexual assault. The Photo Documentation Image Quality Scoring System (PDIQSS) tool was developed to rate the technical elements of naturalness. Using this tool, experts evaluated randomly selected digital photographs of female genital injuries captured following sexual assault, and naturalness was demonstrated in all measured dimensions.
NASA Astrophysics Data System (ADS)
Bardon, Tiphaine; May, Robert K.; Jackson, J. Bianca; Beentjes, Gabriëlle; de Bruin, Gerrit; Taday, Philip F.; Strlič, Matija
2017-04-01
This study aims to objectively inform curators when terahertz time-domain (TD) imaging set in reflection mode is likely to give well-contrasted images of inscriptions in a complex archival document and is a useful non-invasive alternative to current digitisation processes. To this end, the dispersive refractive indices and absorption coefficients of various archival materials are assessed and their influence on contrast in terahertz images of historical documents is explored. Sepia ink and inks produced with bistre or verdigris mixed with a solution of Arabic gum or rabbit skin glue are unlikely to lead to well-contrasted images. However, dispersions of bone black, ivory black, iron gall ink, malachite, lapis lazuli, minium and vermilion are likely to lead to well-contrasted images. Inscriptions written with lamp black, carbon black and graphite give the best imaging results. The characteristic spectral signatures of iron gall ink, minium and vermilion pellets between 5 and 100 cm⁻¹ relate to a ringing effect at late collection times in TD waveforms transmitted through these pellets. The same ringing effect can be probed in waveforms reflected from iron gall, minium and vermilion ink deposits at the surface of a document. Since TD waveforms collected for each scanning pixel can be Fourier-transformed into spectral information, terahertz TD imaging in reflection mode can serve as a hyperspectral imaging tool. However, chemical recognition and mapping of the ink is currently limited by the fact that the morphology of the document influences the terahertz spectral response more strongly than the resonant behaviour of the ink does.
Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang
2013-09-01
Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), operated in either a gasless, solvent-free, or methanol vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting made using four types of ink on four kinds of paper surfaces was tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-minute-old sample from two 4-hour-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contours of the DAPCI images, which were attributed to the writing strength characteristic of each writer. Distinguishing the order of writing/stamping on documents and detecting illegal printings were accomplished with a spatial resolution of about 140 µm. A Matlab® program was developed to facilitate the visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and thus can be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.
[The procedure for documentation of digital images in forensic medical histology].
Putintsev, V A; Bogomolov, D V; Fedulova, M V; Gribunov, Iu P; Kul'bitskiĭ, B N
2012-01-01
This paper is devoted to novel computer technologies employed in the study of histological preparations. These technologies make it possible to visualize digital images, structure the data obtained, and store the results in computer memory. The authors emphasize the necessity of properly documenting digital images obtained during forensic histological studies and propose a procedure for the formulation of electronic documents in conformity with the relevant technical and legal requirements. It is concluded that the use of digital images as a new study object makes it possible to obviate the drawbacks inherent in working with traditional preparations and to pass from descriptive microscopy to quantitative analysis.
EXors and the stellar birthline
NASA Astrophysics Data System (ADS)
Moody, Mackenzie S. L.; Stahler, Steven W.
2017-04-01
We assess the evolutionary status of EXors. These low-mass, pre-main-sequence stars repeatedly undergo sharp luminosity increases, each a year or so in duration. We place into the HR diagram all EXors that have documented quiescent luminosities and effective temperatures, and thus determine their masses and ages. Two alternate sets of pre-main-sequence tracks are used and yield similar results. Roughly half of EXors are embedded objects, i.e., they appear observationally as Class I or flat-spectrum infrared sources. We find that these are relatively young and are located close to the stellar birthline in the HR diagram. Optically visible EXors, on the other hand, are situated well below the birthline. They have ages of several Myr, typical of classical T Tauri stars. Judging from the limited data at hand, we find no evidence that binary companions trigger EXor eruptions; this issue merits further investigation. We draw several general conclusions. First, repetitive luminosity outbursts do not occur in all pre-main-sequence stars, and are not in themselves a sign of extreme youth. They persist, along with other signs of activity, in a relatively small subset of these objects. Second, the very existence of embedded EXors demonstrates that at least some Class I infrared sources are not true protostars, but very young pre-main-sequence objects still enshrouded in dusty gas. Finally, we believe that the embedded pre-main-sequence phase is of observational and theoretical significance, and should be included in a more complete account of early stellar evolution.
Electronic Imaging in Admissions, Records & Financial Aid Offices.
ERIC Educational Resources Information Center
Perkins, Helen L.
Over the years, efforts have been made to work more efficiently with the ever increasing number of records and paper documents that cross workers' desks. Filing records on optical disk through electronic imaging is an alternative that many feel is the answer to successful document management. The pioneering efforts in electronic imaging in…
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman or Arabic. Typically, one has to deal with compound structures consisting of a group of letters, so the matching criterion must be defined over those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method, the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to linear subband decomposition, we also used nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.
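A minimal sketch of the subband-decomposition step, assuming the PyWavelets library (our choice, not the paper's): matching compound structures against the low-resolution approximation subband is what shrinks the search space, and the matching itself is omitted here.

```python
import numpy as np
import pywt  # PyWavelets; the library choice is ours, not the paper's

def low_res_subband(gray, wavelet="haar", levels=2):
    """Linear subband decomposition: keep only the approximation subband,
    shrinking the search space for compound-structure matching as the
    paper describes (the matching itself is omitted here)."""
    coeffs = gray.astype(float)
    for _ in range(levels):
        coeffs, _ = pywt.dwt2(coeffs, wavelet)
    return coeffs

page = np.random.rand(256, 256)      # stand-in for a gray-tone Ottoman page
sub = low_res_subband(page)
print(page.shape, "->", sub.shape)   # (256, 256) -> (64, 64)
```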
Low-Resolution Radial-Velocity Monitoring of Pulsating sdBs in the Kepler Field
NASA Astrophysics Data System (ADS)
Telting, J.; Östensen, R.; Reed, M.; Kiæerad, F.; Farris, L.; Baran, A.; Oreiro, R.; O'Toole, S.
2014-04-01
We present preliminary results from an ongoing spectroscopic campaign to uncover the binary status of the 18 known pulsating subdwarf B stars and the one pulsating BHB star observed with the Kepler spacecraft. During the 2010-2012 observing seasons, we have used the KP4m Mayall, NOT, and WHT telescopes to obtain low-resolution (R ∼ 2000-2500) Balmer-line spectroscopy of our sample stars. We applied a standard cross-correlation technique to derive radial velocities, and find clear evidence for binarity in several of the pulsators, some of which were not previously known to be binaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokovinin, Andrei; Horch, Elliott P., E-mail: atokovinin@ctio.noao.edu, E-mail: horche2@southernct.edu
Statistical characterization of secondary subsystems in binaries helps to distinguish between various scenarios of multiple-star formation. The Differential Speckle Survey Instrument was used at the Gemini-N telescope for several hours in 2015 July to probe the binarity of 25 secondary components in nearby solar-type binaries. Six new subsystems were resolved, with meaningful detection limits for the remaining targets. The large incidence of secondary subsystems agrees with other similar studies. The newly resolved subsystem HIP 115417 Ba,Bb causes deviations in the observed motion of the outer binary, from which an astrometric orbit of Ba,Bb with a period of 117 years is deduced.
Author name recognition in degraded journal images
NASA Astrophysics Data System (ADS)
de Bodard de la Jacopière, Aliette; Likforman-Sulem, Laurence
2006-01-01
A method for extracting names from degraded documents is presented in this article. The documents targeted are images of photocopied scientific journals from various scientific domains. Due to the degradation, OCR accuracy is poor, and pieces of other articles appear at the sides of the image. The proposed approach relies on the combination of a low-level textual analysis and an image-based analysis. The textual analysis extracts robust typographic features, while the image analysis selects image regions of interest through anchor components. We report results on the University of Washington benchmark database.
Degraded document image enhancement
NASA Astrophysics Data System (ADS)
Agam, G.; Bal, G.; Frieder, G.; Frieder, O.
2007-01-01
Poor-quality documents are obtained in various situations such as historical document collections, legal archives, security investigations, and documents found in clandestine locations. Such documents are often scanned for automated analysis, further processing, and archiving. Due to the nature of such documents, degraded document images are often hard to read, have low contrast, and are corrupted by various artifacts. We describe a novel approach to the enhancement of such documents, based on probabilistic models, which increases the contrast, and thus the readability, of such documents under various degradations. The enhancement produced by the proposed approach can be viewed under different viewing conditions if desired. The proposed approach was evaluated qualitatively and compared to standard enhancement techniques on a subset of historical documents obtained from the Yad Vashem Holocaust museum. In addition, quantitative performance was evaluated on synthetically generated data corrupted under various degradation models. Preliminary results demonstrate the effectiveness of the proposed approach.
DOT National Transportation Integrated Search
2010-02-01
Over time, the Department of Transportation has accumulated image collections, which document important aspects of the transportation infrastructure in the Pacific Northwest, project status, and construction details. These images range from paper ...
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Detection of text strings from mixed text/graphics images
NASA Astrophysics Data System (ADS)
Tsai, Chien-Hua; Papachristou, Christos A.
2000-12-01
A robust system for separating text strings from mixed text/graphics images is presented. Based on a union-find (region-growing) strategy, the algorithm classifies text apart from graphics and adapts to changes in document type, language category (e.g., English, Chinese, and Japanese), text font style and size, and text string orientation within digital images. In addition, it tolerates the skew that commonly occurs in documents, without requiring skew correction prior to discrimination, a condition for which methods such as projection profiles or run-length coding are not always suitable. The method has been tested with a variety of printed documents from different origins using one common set of parameters, and the performance of the algorithm, including its computational efficiency, is demonstrated on several test images from the evaluation.
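A minimal union-find (region-growing) labeling sketch on a binary image; the component-size test at the end is only an illustrative stand-in for the paper's text/graphics classification features.

```python
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def connected_components(binary):
    """Label 4-connected foreground regions with union-find (region growing).
    Text/graphics separation would then inspect each component's size and
    shape; the size test below is only illustrative."""
    h, w = binary.shape
    uf = UnionFind(h * w)
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            if x > 0 and binary[y, x-1]:
                uf.union(y*w + x - 1, y*w + x)
            if y > 0 and binary[y-1, x]:
                uf.union((y-1)*w + x, y*w + x)
    labels = {}
    for y in range(h):
        for x in range(w):
            if binary[y, x]:
                labels.setdefault(uf.find(y*w + x), []).append((y, x))
    return list(labels.values())

img = np.zeros((20, 20), bool)
img[2:4, 2:10] = True          # a small "text-like" component
img[8:18, 8:18] = True         # a large "graphics-like" component
comps = connected_components(img)
print([("text" if len(c) < 50 else "graphics", len(c)) for c in comps])
```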
The Young L Dwarf 2MASS J11193254-1137466 Is a Planetary-mass Binary
NASA Astrophysics Data System (ADS)
Best, William M. J.; Liu, Michael C.; Dupuy, Trent J.; Magnier, Eugene A.
2017-07-01
We have discovered that the extremely red, low-gravity L7 dwarf 2MASS J11193254-1137466 is a 0.″14 (3.6 au) binary using Keck laser guide star adaptive optics imaging. 2MASS J11193254-1137466 has previously been identified as a likely member of the TW Hydrae Association (TWA). Using our updated photometric distance and proper motion, a kinematic analysis based on the BANYAN II model gives an 82% probability of TWA membership. At TWA's 10 ± 3 Myr age and using hot-start evolutionary models, 2MASS J11193254-1137466AB is a pair of 3.7 (+1.2/−0.9) M_Jup brown dwarfs, making it the lowest-mass binary discovered to date. We estimate an orbital period of 90 (+80/−50) years. One component is marginally brighter in the K band but fainter in the J band, making this a probable flux-reversal binary, the first discovered at such a young age. We also imaged the spectrally similar TWA L7 dwarf WISEA J114724.10-204021.3 with Keck and found no sign of binarity. Our evolutionary model-derived T_eff estimate for WISEA J114724.10-204021.3 is ≈230 K higher than for 2MASS J11193254-1137466AB, at odds with the spectral similarity of the two objects. This discrepancy suggests that WISEA J114724.10-204021.3 may actually be a tight binary with masses and temperatures very similar to 2MASS J11193254-1137466AB, or it further supports the idea that near-infrared spectra of young ultracool dwarfs are shaped by factors other than temperature and gravity. 2MASS J11193254-1137466AB will be an essential benchmark for testing evolutionary and atmospheric models in the young planetary-mass regime.
NASA Astrophysics Data System (ADS)
Chen, Cheng; Jin, Dakai; Liu, Yinxiao; Wehrli, Felix W.; Chang, Gregory; Snyder, Peter J.; Regatte, Ravinder R.; Saha, Punam K.
2016-09-01
Osteoporosis is associated with increased risk of fractures and is clinically defined by low bone mineral density. Increasing evidence suggests that trabecular bone (TB) micro-architecture is an important determinant of bone strength and fracture risk. We present an improved volumetric topological analysis (VTA) algorithm based on fuzzy skeletonization, results of its application to in vivo MR imaging, and a comparison of its performance with digital topological analysis. The new VTA method eliminates data loss in the binarization step and yields accurate and robust measures of local plate width for individual trabeculae, which allows classification of TB structures on the continuum between perfect plates and rods. The repeat-scan reproducibility of the method was evaluated on in vivo MRI of the distal femur and distal radius, and high intra-class correlation coefficients between 0.93 and 0.97 were observed. The method's ability to detect treatment effects on TB micro-architecture was examined in a two-year testosterone study of hypogonadal men. The experimental results show that average plate width and plate-to-rod ratio improved significantly after 6 months, and the improvement continued at 12 and 24 months. The bone density of plate-like trabeculae was found to increase by 6.5% (p = 0.06), 7.2% (p = 0.07), and 16.2% (p = 0.003) at 6, 12, and 24 months, respectively, while the density of rod-like trabeculae did not change significantly, even at 24 months. A comparative study showed that VTA has an enhanced ability to detect treatment effects on TB micro-architecture, in terms of both percent change and effect size, as compared to the conventional digital topological analysis method for plate/rod characterization.
Okamoto, Masahiro; Yamashita, Mariko; Ogata, Nahoko
2018-05-01
To determine the effects of an intravitreal injection of ranibizumab (IVR) on the choroidal structure and blood flow in eyes with diabetic macular edema (DME). Twenty-eight consecutive patients with DME who received an IVR and 20 non-diabetic, age-matched controls were followed for 1 month. The eyes with DME were divided into those with prior panretinal photocoagulation (PRP, n = 16) and those without prior PRP (no-PRP, n = 12). Enhanced depth imaging optical coherence tomography (EDI-OCT) scans and Niblack image binarization were performed to determine the choroidal structure. The choroidal blood flow was determined by laser speckle flowgraphy. The subfoveal choroidal thickness at baseline was significantly thicker in the no-PRP group than in the PRP-treated group. After IVR, the best-corrected visual acuity (BCVA) and central retinal thickness in eyes with DME were significantly improved compared to the baseline values. There were significant differences in the choroidal thickness, total choroidal area, and choroidal vascularity index between the groups after IVR. The choroidal vascularity index and choroidal blood flow were significantly reduced only in the no-PRP group and not in the PRP-treated group. In addition, the correlation between the central retinal thickness and the choroidal blood flow was significant in the no-PRP group (r = 0.47, P < 0.05). A single IVR will reduce the central retinal thickness and improve the BCVA in eyes with DME in both the no-PRP and PRP-treated groups. IVR affected the choroidal vasculature and blood flow significantly, and a significant correlation was found between the central retinal thickness and the choroidal blood flow in eyes without PRP.
What's in "Your" File Cabinet? Leveraging Technology for Document Imaging and Storage
ERIC Educational Resources Information Center
Flaherty, William
2011-01-01
Spotsylvania County Public Schools (SCPS) in Virginia uses a document-imaging solution that leverages the features of a multifunction printer (MFP). An MFP is a printer, scanner, fax machine, and copier all rolled into one. It can scan a document and email it all in one easy step. Software is available that allows the MFP to scan bubble sheets and…
Multifractal analysis of 2D gray soil images
NASA Astrophysics Data System (ADS)
González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.
2015-04-01
Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and interpretation of their morphological parameters can be estimated from thin sections or 3D computed tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, the singularity (α) and f(α) spectra, to quantify it without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, yielding a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner at 45 μm pixel⁻¹ resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References: Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D Soil Images. Nonlinear Processes in Geophysics, 15, 881-891, 2008. Tarquis, A.M., R.J. Heck, D. Andina, A. Alvarez and J.M. Antón. Multifractal analysis and thresholding of 3D soil images. Ecological Complexity, 6, 230-239, 2009. Tarquis, A.M., D. Giménez, A. Saa, M.C. Díaz and J.M. Gascó. Scaling and Multiscaling of Soil Pore Systems Determined by Image Analysis. In: Scaling Methods in Soil Systems, Pachepsky, Radcliffe and Selim (Eds.), 19-33, 2003. CRC Press, Boca Raton, Florida. Acknowledgements: The first author acknowledges the financial support obtained from the Soil Imaging Laboratory (University of Guelph, Canada) in 2014.
Segmentation of singularity maps in the context of soil porosity
NASA Astrophysics Data System (ADS)
Martin-Sotoca, Juan J.; Saa-Requejo, Antonio; Grau, Juan; Tarquis, Ana M.
2016-04-01
Geochemical exploration has found increasing interest in, and benefit from, fractal (power-law) models to characterize geochemical distributions, including the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012) and the concentration-volume (C-V) model (Afzal et al., 2011), to name a few examples. These methods are based on singularity maps of a measure that at each point define areas with self-similar properties, revealed as power-law relationships in concentration-area plots (the C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. Recently, the "Singularity-CA" method has been applied to binarize 2D grayscale computed tomography (CT) soil images (Martín-Sotoca et al., 2015). Unlike image segmentation based on global thresholding methods, the "Singularity-CA" method makes it possible to quantify the local scaling property of the grayscale value map in the space domain and to determine the intensity of local singularities. It can be used as a high-pass-filter technique to enhance the high-frequency patterns usually regarded as anomalies when applied to maps. In this work we pay special attention to how the singularity thresholds are selected in the C-A plot to segment the image. We compare two methods: 1) the cross point of linear regressions and 2) Wavelet Transform Modulus Maxima (WTMM) singularity function detection. References: Cheng, Q., Agterberg, F.P. and Ballantyne, S.B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Afzal, P., Fadakar Alghalandis, Y., Khakzad, A., Moarefvand, P. and Rashidnejad Omran, N. (2011). Delineation of mineralization zones in porphyry Cu deposits by fractal concentration-volume modeling. Journal of Geochemical Exploration, 108, 220-232. Martín-Sotoca, J.J., Tarquis, A.M., Saa-Requejo, A. and Grau, J.B. (2015). Pore detection in Computed Tomography (CT) soil images through singularity map analysis. Oral presentation at the PedoFract VIII Congress (June, La Coruña, Spain).
Fast frequency domain method to detect skew in a document image
NASA Astrophysics Data System (ADS)
Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee
2015-12-01
In this paper, a new fast frequency domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was tested on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method works more efficiently than the existing methods, and it handles typed and picture documents with different fonts and resolutions. It overcomes the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
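A hedged sketch of the DWT-plus-FFT pipeline: one level of Haar DWT shrinks the image, and the skew is read off the orientation at which the FFT magnitude spectrum concentrates its energy. The radial line-sampling score and the Haar wavelet are our own illustrative choices, not the paper's exact method.

```python
import numpy as np
import pywt

def estimate_skew(gray, angle_range=(-45, 45), step=0.5):
    """Sketch of DWT+FFT skew estimation: shrink the image with one level
    of Haar DWT, then find the angle at which the FFT magnitude spectrum
    concentrates its energy (text lines produce a dominant direction).
    The line-sampling scoring below is our illustrative choice."""
    approx, _ = pywt.dwt2(gray.astype(float), "haar")   # size reduction
    spec = np.abs(np.fft.fftshift(np.fft.fft2(approx - approx.mean())))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    radii = np.arange(5, min(cy, cx))
    best, best_angle = -1.0, 0.0
    for angle in np.arange(*angle_range, step):
        t = np.deg2rad(angle + 90)        # energy ridge is normal to text lines
        ys = (cy + radii * np.sin(t)).astype(int)
        xs = (cx + radii * np.cos(t)).astype(int)
        score = spec[ys, xs].sum()
        if score > best:
            best, best_angle = score, angle
    return best_angle

# Synthetic page: horizontal stripes as text lines; expect a skew near 0.
page = np.zeros((256, 256))
page[::16] = 1.0
print(estimate_skew(page))
```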
The Compliance Assurance and Enforcement Division Document Repository (CAEDDOCRESP) provides internal and external access to Inspection Records, Enforcement Actions, and National Environmental Protection Act (NEPA) documents for all CAED staff. The repository will also include supporting documents, images, etc.
ERIC Educational Resources Information Center
McConnell, Pamela Jean
1993-01-01
This third in a series of articles on EDIS (Electronic Document Imaging System) technology focuses on organizational issues. Highlights include computer platforms; management information systems; computer-based skills of staff; new technology and change; time factors; financial considerations; document conversion costs; the benefits of EDIS…
Handwritten text line segmentation by spectral clustering
NASA Astrophysics Data System (ADS)
Han, Xuecheng; Yao, Hui; Zhong, Guoqiang
2017-02-01
Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images is still a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten documents database (HIT-MW) demonstrate the effectiveness of the proposed method.
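A minimal sketch using scikit-learn's SpectralClustering over an RBF affinity between foreground pixel coordinates; the library's built-in k-means label assignment stands in for the paper's orthogonal k-means, and the y-axis weighting is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_text_lines(binary, n_lines):
    """Group foreground pixels into text lines with spectral clustering on
    an RBF affinity between pixel coordinates. scikit-learn's built-in
    k-means label assignment stands in for the paper's orthogonal k-means."""
    ys, xs = np.nonzero(binary)
    pts = np.column_stack([ys * 3.0, xs])   # weight y more: lines are horizontal
    sc = SpectralClustering(n_clusters=n_lines, affinity="rbf",
                            gamma=0.05, random_state=0)
    return pts, sc.fit_predict(pts)

img = np.zeros((60, 120), bool)
img[10:14, 5:110] = True                    # text line 1
img[30:34, 5:110] = True                    # text line 2, well separated
pts, labels = segment_text_lines(img, n_lines=2)
print(np.unique(labels, return_counts=True))
```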
Document image archive transfer from DOS to UNIX
NASA Technical Reports Server (NTRS)
Hauser, Susan E.; Gill, Michael J.; Thoma, George R.
1994-01-01
An R&D division of the National Library of Medicine has developed a prototype system for automated document image delivery as an adjunct to the labor-intensive manual interlibrary loan service of the library. The document image archive is implemented by a PC controlled bank of optical disk drives which use 12 inch WORM platters containing bitmapped images of over 200,000 pages of medical journals. Following three years of routine operation which resulted in serving patrons with articles both by mail and fax, an effort is underway to relocate the storage environment from the DOS-based system to a UNIX-based jukebox whose magneto-optical erasable 5 1/4 inch platters hold the images. This paper describes the deficiencies of the current storage system, the design issues of modifying several modules in the system, the alternatives proposed and the tradeoffs involved.
NASA Astrophysics Data System (ADS)
Chu, Devin S.; Do, Tuan; Hees, Aurelien; Ghez, Andrea; Naoz, Smadar; Witzel, Gunther; Sakai, Shoko; Chappell, Samantha; Gautam, Abhimat K.; Lu, Jessica R.; Matthews, Keith
2018-02-01
The star S0-2, which orbits the supermassive black hole (SMBH) in our Galaxy with a period of 16 years, provides the strongest constraint on both the mass of the SMBH and the distance to the Galactic center. S0-2 will soon provide the first measurement of relativistic effects near a SMBH. We report the first limits on the binarity of S0-2 from radial velocity (RV) monitoring, which has implications both for understanding its origin and for its robustness as a probe of the central gravitational field. With 87 RV measurements, which include 12 new observations that we present, we have the requisite data set to look for RV variations from S0-2's orbital model. Using a Lomb-Scargle analysis and orbit-fitting for potential binaries, we detect no RV variation beyond S0-2's orbital motion and do not find any significant periodic signal. The lack of a binary companion does not currently distinguish between different formation scenarios for S0-2. The mass of a companion star (M_comp) still allowed by our results has a median upper limit of M_comp sin i ≤ 1.6 M_⊙ for periods between 1 and 150 days, the longest period that avoids tidal break-up of the binary. We also investigate the impact of the remaining allowed binary systems on the measurement of the relativistic redshift at S0-2's closest approach in 2018. While binary star systems are important to consider for this experiment, we find that plausible binaries for S0-2 will not alter a 5σ detection of the relativistic redshift.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chengyuan; De Grijs, Richard; Deng, Licai, E-mail: joshuali@pku.edu.cn, E-mail: grijs@pku.edu.cn
2014-04-01
Using a combination of high-resolution Hubble Space Telescope/Wide-Field and Planetary Camera-2 observations, we explore the physical properties of the stellar populations in two intermediate-age star clusters, NGC 1831 and NGC 1868, in the Large Magellanic Cloud based on their color-magnitude diagrams. We show that both clusters exhibit extended main-sequence turn-offs. To explain the observations, we consider variations in helium abundance, binarity, age dispersions, and the fast rotation of the clusters' member stars. The observed narrow main sequence excludes significant variations in helium abundance in both clusters. We first establish the clusters' main-sequence binary fractions using the bulk of the clusters' main-sequence stellar populations ≳ 1 mag below their turn-offs. The extent of the turn-off regions in color-magnitude space, corrected for the effects of binarity, implies that age spreads of order 300 Myr may be inferred for both clusters if the stellar distributions in color-magnitude space were entirely due to the presence of multiple populations characterized by an age range. Invoking rapid rotation of the population of cluster members characterized by a single age also allows us to match the observed data in detail. However, when taking into account the extent of the red clump in color-magnitude space, we encounter an apparent conflict for NGC 1831 between the age dispersion derived from the extent of the main-sequence turn-off and that implied by the compact red clump. We therefore conclude that, for this cluster, variations in stellar rotation rate are preferred over an age dispersion. For NGC 1868, both models perform equally well.
NASA Astrophysics Data System (ADS)
Kesseli, Aurora Y.; Muirhead, Philip S.; Mann, Andrew W.; Mace, Greg
2018-06-01
Main-sequence, fully convective M dwarfs in eclipsing binaries are observed to be larger than stellar evolutionary models predict, by as much as 10%-15%. A proposed explanation for this discrepancy involves effects from strong magnetic fields, induced by rapid rotation via the dynamo process. However, a handful of single, slowly rotating M dwarfs with radius measurements from interferometry also appear to be larger than models predict, suggesting that rotation or binarity specifically may not be the sole cause of the discrepancy. We test whether single, rapidly rotating, fully convective stars are also larger than expected by measuring their R sin i distribution. We combine photometric rotation periods from the literature with rotational broadening (v sin i) measurements reported in this work for a sample of 88 rapidly rotating M dwarf stars. Using a Bayesian framework, we find that stellar evolutionary models underestimate the radii by 10%-15% (+3/-2.5), but that at higher masses (0.18 < M < 0.4 M_⊙) the discrepancy is only about 6% and comparable to results from interferometry and eclipsing binaries. At the lowest masses (0.08 < M < 0.18 M_⊙), we find that the discrepancy between observations and theory is 13%-18%, and we argue that the discrepancy is unlikely to be due to effects from age. Furthermore, we find no statistically significant radius discrepancy between our sample and the handful of M dwarfs with interferometric radii. We conclude that neither rotation nor binarity is responsible for the inflated radii of fully convective M dwarfs, and that all fully convective M dwarfs are larger than models predict.
Nonlinear filtering for character recognition in low quality document images
NASA Astrophysics Data System (ADS)
Diaz-Escobar, Julia; Kober, Vitaly
2014-09-01
Optical character recognition in scanned printed documents is a well-studied task in which the capture conditions, such as sheet position, illumination, contrast, and resolution, are controlled. Nowadays, it is often more practical to capture documents with a mobile device than with a scanner. As a consequence, the quality of document images is often poor owing to the presence of geometric distortions, nonhomogeneous illumination, low resolution, etc. In this work we propose to use multiple adaptive nonlinear composite filters for the detection and classification of characters. Computer simulation results obtained with the proposed system are presented and discussed.
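As a simplified stand-in for the adaptive nonlinear composite filters described above, the sketch below builds a plain linear composite correlation filter from character templates and detects a character by its frequency-domain correlation peak; the nonlinear and adaptive aspects of the paper's filters are omitted.

```python
import numpy as np

def composite_filter(templates):
    """Build a simple composite correlation filter as the average of the
    conjugate spectra of the training templates; a linear simplification
    of the adaptive nonlinear composite filters described above."""
    spectra = [np.conj(np.fft.fft2(t - t.mean())) for t in templates]
    return np.mean(spectra, axis=0)

def detect(scene, filt):
    """Correlate the scene with the filter in the frequency domain and
    return the location and height of the correlation peak."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(scene - scene.mean()) * filt))
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    return idx, corr[idx]

rng = np.random.default_rng(0)
char = rng.random((16, 16))
scene = rng.random((64, 64)) * 0.2
scene[20:36, 30:46] += char                 # embed the character at (20, 30)
filt = composite_filter([np.pad(char, ((0, 48), (0, 48)))])
print(detect(scene, filt))                  # peak location should be ~(20, 30)
```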
Standard Health Level Seven for Odontological Digital Imaging.
Abril-Gonzalez, Mauricio; Portilla, Fernando A; Jaramillo-Mejia, Marta C
2017-01-01
A guide for the implementation of dental digital imaging reports was developed and validated through the International Standard of Health Informatics, Health Level Seven (HL7), achieving interoperability with an electronic system that keeps dental records. Digital imaging benefits patients, who can view previous close-ups of dental examinations; providers, because of greater efficiency in managing information; and insurers, because of improved accessibility, patient monitoring, and more efficient cost management. Finally, imaging is beneficial for the dentist, who can be more agile in the diagnosis and treatment of patients using this tool. The guide was developed under the parameters of an HL7 standard. It was necessary to create a group of dentists and three experts in information and communication technologies from different institutions. Diagnostic images scanned with conventional radiology or from a radiovisiograph can be converted to Digital Imaging and Communications in Medicine (DICOM) format, while also retaining patient information. The guide shows how the information in the health record of the patient and the information in the dental image can be standardized in a clinical dental record using the international informatics standard HL7 V3 CDA document (dental document Level 2). Since it is a standardized informatics document, it can be sent, stored, or displayed using different devices (personal computers or mobile devices), independent of the platform used. Interoperability using dental images and dental record systems reduces adverse events, increases security for the patient, and makes more efficient use of resources. This article makes a contribution to the field of telemedicine in dental informatics. In addition, the results could serve as a reference for projects on electronic medical records in which dental documents are included.
Digital authentication with copy-detection patterns
NASA Astrophysics Data System (ADS)
Picard, Justin
2004-06-01
Technologies for making high-quality copies of documents are becoming more available, cheaper, and more efficient. As a result, the counterfeiting business engenders huge losses, ranging from 5% to 8% of worldwide sales of brand products, and endangers the reputation and value of the brands themselves. Moreover, the growth of the Internet drives the business of counterfeited documents (fake IDs, university diplomas, checks, and so on), which can be bought easily and anonymously from hundreds of companies on the Web. The incredible progress of digital imaging equipment has put in question the very possibility of verifying the authenticity of documents: how can we discern genuine documents from seemingly "perfect" copies? This paper proposes a solution based on creating digital images with specific properties, called copy-detection patterns (CDPs), that are printed on arbitrary documents, packages, etc. CDPs make optimal use of an "information loss principle": every time an image is printed or scanned, some information is lost about the original digital image. That principle applies even to the highest quality scanning, digital imaging, printing, or photocopying equipment today, and will likely remain true tomorrow. By measuring the amount of information contained in a scanned CDP, the CDP detector can make a decision on the authenticity of the document.
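The information-loss principle can be sketched numerically: simulate a print-scan cycle as blur plus noise, and score a scanned pattern against the stored digital CDP by normalized cross-correlation. The degradation model, the scoring function, and the random dense pattern are all illustrative assumptions, not the paper's detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def print_scan(img, blur=1.2, noise=0.15):
    """Crude model of one print-and-scan cycle: low-pass blur plus sensor
    noise, i.e. the irreversible information loss the CDP relies on."""
    return gaussian_filter(img, blur) + rng.normal(0, noise, img.shape)

def cdp_score(original, scanned):
    """Normalized cross-correlation between the stored digital CDP and a
    scanned candidate; genuine prints retain more of the original detail."""
    a = original - original.mean()
    b = scanned - scanned.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

cdp = rng.integers(0, 2, (128, 128)).astype(float)   # random dense pattern
genuine = print_scan(cdp)                             # printed and scanned once
fake = print_scan(genuine)                            # counterfeit: copied again
print("genuine score:", round(cdp_score(cdp, genuine), 3))
print("fake score:   ", round(cdp_score(cdp, fake), 3))   # lower: extra loss
```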
Spotting words in handwritten Arabic documents
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Srinivasan, Harish; Babu, Pavithra; Bhole, Chetan
2006-01-01
The design and performance of a system for spotting handwritten Arabic words in scanned document images is presented. The three main components of the system are a word segmenter, a shape-based matcher for words, and a search interface. The user types a query in English within a search window; the system finds the equivalent Arabic word, e.g., by dictionary look-up, and locates word images in an indexed (segmented) set of documents. A two-step approach is employed in performing the search: (1) prototype selection: the query is used to obtain a set of handwritten samples of that word from a known set of writers (these are the prototypes), and (2) word matching: the prototypes are used to spot each occurrence of those words in the indexed document database. A ranking is performed on the entire set of test word images, where the ranking criterion is a similarity score between each prototype word and the candidate words based on global word shape features. A database of 20,000 word images contained in 100 scanned handwritten Arabic documents written by 10 different writers was used to study retrieval performance. Using five writers for providing prototypes and the other five for testing, on manually segmented documents, 55% precision is obtained at 50% recall. Performance increases as more writers are used for training.
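A minimal sketch of the word-matching step: rank candidate word images by feature distance to the query's prototypes. The particular global shape features used here (aspect ratio, ink density, a coarse projection profile) are illustrative stand-ins for the paper's features.

```python
import numpy as np

def shape_features(word_img):
    """Global word-shape features: aspect ratio, ink density, and a coarse
    vertical projection profile. Illustrative stand-ins for the paper's
    global shape features."""
    h, w = word_img.shape
    profile = word_img.sum(axis=0) / max(h, 1)
    coarse = np.interp(np.linspace(0, 1, 8), np.linspace(0, 1, w), profile)
    return np.concatenate([[w / h, word_img.mean()], coarse])

def spot(prototypes, candidates):
    """Rank candidate word images by best similarity (negative Euclidean
    feature distance) to any prototype of the query word."""
    proto_feats = [shape_features(p) for p in prototypes]
    scores = []
    for i, c in enumerate(candidates):
        f = shape_features(c)
        scores.append((max(-np.linalg.norm(f - pf) for pf in proto_feats), i))
    return sorted(scores, reverse=True)      # best match first

rng = np.random.default_rng(0)
query = (rng.random((20, 60)) > 0.5).astype(float)
near_dup = np.abs(query - (rng.random(query.shape) > 0.95))   # ~5% flipped
unrelated = (rng.random((20, 80)) > 0.5).astype(float)
print(spot([query], [near_dup, unrelated]))   # near-duplicate ranks first
```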
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease the time to completion for the document tasks. In this approach, each parallel pipeline generally performs a different task. Parallel processing by image region allows a larger imaging task to be subdivided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
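A minimal sketch of category (2), parallel processing by image region: split one image into strips, map the same analysis over a process pool, and reduce the partial results. The per-tile ink count is an illustrative stand-in for any real per-region task such as skew detection or face detection.

```python
import numpy as np
from multiprocessing import Pool

def count_ink(region):
    """Per-region worker: any per-tile analysis goes here (the ink count
    stands in for, e.g., face detection or skew estimation on that tile)."""
    return int(region.sum())

def parallel_by_region(binary, n_strips=4, workers=4):
    """Split one image into horizontal strips, map the same task over a
    process pool, then reduce the partial results (map-reduce style)."""
    strips = np.array_split(binary, n_strips, axis=0)
    with Pool(workers) as pool:
        partial = pool.map(count_ink, strips)
    return sum(partial)

if __name__ == "__main__":
    page = (np.random.default_rng(0).random((1000, 800)) > 0.9).astype(np.uint8)
    print(parallel_by_region(page))
```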
Comparison of approaches for mobile document image analysis using server supported smartphones
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcoming these limitations is performing the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured with mobile phones. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote-server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics, provided the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed with acceptable correct-recognition metrics.
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
More and more colleges and universities today have discovered electronic record-keeping and record-sharing, made possible by document imaging technology. Across the country, schools such as Monmouth University (New Jersey), Washington State University, the University of Idaho, and Towson University (Maryland) are embracing document imaging. Yet…
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets with documented segmentations, for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for six different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
NASA Technical Reports Server (NTRS)
1988-01-01
The Charters of Freedom Monitoring System will periodically assess the physical condition of the U.S. Constitution, Declaration of Independence, and Bill of Rights. Although protected in helium-filled glass cases, the documents are subject to damage from light, vibration, and humidity. The photometer is a CCD detector used as the electronic film for the system's scanning camera, which mechanically scans each document line by line and acquires a series of images, each representing a one-square-inch portion of the document. Perkin-Elmer Corporation's photometer is capable of detecting changes in contrast, shape, or other indicators of degradation with 5 to 10 times the sensitivity of the human eye. A Vicom image processing computer receives the data from the photometer, stores it, and manipulates it, allowing comparison of electronic images over time to detect changes.
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Matsuo, Toshihiko; Gochi, Akira; Hirakawa, Tsuyoshi; Ito, Tadashi; Kohno, Yoshihisa
2010-10-01
General electronic medical records systems remain insufficient for ophthalmology outpatient clinics from the viewpoint of dealing with the many ophthalmic examinations and images of a large number of patients. Filing systems for documents and images based on Yahgee Document View (Yahgee, Inc.) were introduced on the platform of a general electronic medical records system (Fujitsu, Inc.), and an outpatient flow management system and an electronic medical records system for ophthalmology were constructed. All images from ophthalmic appliances were transported to Yahgee Image by the MaxFile gateway system (P4 Medic, Inc.). The flow of outpatients going through examinations such as visual acuity testing was monitored by the "Ophthalmology Outpatients List" in Yahgee Workflow, in addition to the "Patients Reception List" in Fujitsu. Patients' identification numbers were scanned with bar code readers attached to the ophthalmic appliances. Dual monitors were placed in the doctors' rooms to show the Fujitsu medical records on the left-hand monitor and the ophthalmic charts of Yahgee Document on the right-hand monitor. The data from manually inputted visual acuity and automatically exported autorefractometry and non-contact tonometry on a new template, MaxFile ED, were again automatically transported to designated boxes on the ophthalmic charts of Yahgee Document. Images such as fundus photographs, fluorescein angiograms, optical coherence tomographic scans, and ultrasound scans were viewed in Yahgee Image and copied and pasted to assigned boxes on the ophthalmic charts. Ordering, such as appointments, drug prescriptions, fees and diagnoses input, central laboratory tests, and surgical theater and ward room reservations, was placed through functions of the Fujitsu electronic medical records system. The combination of the Fujitsu electronic medical records and Yahgee Document View systems enabled the University Hospital to examine the same number of outpatients as prior to the implementation of the computerized filing system.
Large-Scale Document Automation: The Systems Integration Issue.
ERIC Educational Resources Information Center
Kalthoff, Robert J.
1985-01-01
Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…
Patient-generated Digital Images after Pediatric Ambulatory Surgery.
Miller, Matthew W; Ross, Rachael K; Voight, Christina; Brouwer, Heather; Karavite, Dean J; Gerber, Jeffrey S; Grundmeier, Robert W; Coffin, Susan E
2016-07-06
To describe the use of digital images captured by parents or guardians and sent to clinicians for assessment of wounds after pediatric ambulatory surgery. Subjects with digital images of post-operative wounds were identified as part of an on-going cohort study of infections after ambulatory surgery within a large pediatric healthcare system. We performed a structured review of the electronic health record (EHR) to determine how digital images were documented in the EHR and used in clinical care. We identified 166 patients whose parent or guardian reported sending a digital image of the wound to the clinician after surgery. A corresponding digital image was located in the EHR in only 121 of these encounters. A change in clinical management was documented in 20% of these encounters, including referral for in-person evaluation of the wound and antibiotic prescription. Clinical teams have developed ad hoc workflows to use digital images to evaluate post-operative pediatric surgical patients. Because the use of digital images to support follow-up care after ambulatory surgery is likely to increase, it is important that high-quality images are captured and documented appropriately in the EHR to ensure privacy, security, and a high-level of care.
Fast words boundaries localization in text fields for low quality document images
NASA Astrophysics Data System (ADS)
Ilin, Dmitry; Novikov, Dmitriy; Polevoy, Dmitry; Nikolaev, Dmitry
2018-04-01
The paper examines the problem of precise localization of word boundaries in document text zones. Document processing on a mobile device consists of document localization, perspective correction, localization of individual fields, finding words in separate zones, segmentation, and recognition. While capturing an image with a mobile digital camera under uncontrolled conditions, digital noise, perspective distortions, or glares may occur. Further document processing is complicated by the specifics of documents: layout elements, complex backgrounds, static text, document security elements, and a variety of text fonts. Moreover, the problem of word boundary localization has to be solved at runtime on a mobile CPU with limited computing capabilities. At the moment, there are several groups of methods optimized for different conditions. Methods for scanned printed text are quick but limited to images of high quality. Methods for text in the wild have an excessively high computational complexity and thus are hardly suitable for running on mobile devices as part of a mobile document recognition system. The method presented in this paper solves a more specialized problem than the task of finding text in natural images. It uses local features, a sliding window, and a lightweight neural network in order to achieve an optimal speed-precision ratio. The running time of the algorithm is 12 ms per field on an ARM processor of a mobile device. The error rate for boundary localization on a test sample of 8000 fields is 0.3
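A hedged sketch of the general task (not the authors' sliding-window neural network): a projection-profile baseline that localizes word gaps in a binarized field image. The threshold and minimum-gap parameters are illustrative assumptions.

    import numpy as np

    def word_boundaries(binary_field, min_gap=6):
        """Locate word boundaries in a binarized text field (ink = 1).

        A simplified projection-profile stand-in for the paper's
        sliding-window neural classifier; `min_gap` (pixels) is an
        illustrative assumption, not a value from the paper.
        """
        ink_per_column = binary_field.sum(axis=0)        # vertical projection
        is_gap = ink_per_column == 0                     # columns with no ink
        boundaries, start = [], None
        for x, gap in enumerate(is_gap):
            if gap and start is None:
                start = x                                # gap run begins
            elif not gap and start is not None:
                if x - start >= min_gap:                 # wide enough = word gap
                    boundaries.append((start + x) // 2)  # split at gap center
                start = None
        return boundaries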
The paper crisis: from hospitals to medical practices.
Park, Gregory; Neaveill, Rodney S
2009-01-01
Hospitals, not unlike physician practices, are faced with an increasing burden of managing piles of hard copy documents including insurance forms, requests for information, and advance directives. Healthcare organizations are moving to transform paper-based forms and documents into digitized files in order to save time and money and to have those documents available at a moment's notice. The cost of these document management/imaging systems can be easily justified with the significant savings of resources realized from the implementation of these systems. This article illustrates the enormity of the "paper problem" in healthcare and outlines just a few of the required processes that could be improved with the use of automated document management/imaging systems.
Storing and Viewing Electronic Documents.
ERIC Educational Resources Information Center
Falk, Howard
1999-01-01
Discusses the conversion of fragile library materials to computer storage and retrieval to extend the life of the items and to improve accessibility through the World Wide Web. Highlights include entering the images, including scanning; optical character recognition; full text and manual indexing; and available document- and image-management…
Document Indexing for Image-Based Optical Information Systems.
ERIC Educational Resources Information Center
Thiel, Thomas J.; And Others
1991-01-01
Discussion of image-based information retrieval systems focuses on indexing. Highlights include computerized information retrieval; multimedia optical systems; optical mass storage and personal computers; and a case study that describes an optical disk system which was developed to preserve, access, and disseminate military documents. (19…
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new generation computer-aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match between the query document and each reference document stored in the database is defined for each attribute (an image feature or a metadata item). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer-aided diagnosis. Precision at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, was obtained for these two databases, which is very promising.
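A minimal sketch of the degree-of-match fusion idea (the paper's Bayesian-network and Dezert-Smarandache fusions are more elaborate): a confidence-weighted geometric mean over per-attribute match scores that simply skips missing attributes. The weights, score layout, and example values below are illustrative assumptions.

    import math

    def fuse_degrees_of_match(match_scores, confidences):
        """Relevance score from per-attribute degrees of match.

        match_scores: {attribute: score in [0, 1], or None if missing}
        confidences:  {attribute: weight expressing trust in that source}
        A confidence-weighted geometric mean; a simplified stand-in for
        the paper's Bayesian-network / Dezert-Smarandache fusion.
        """
        log_sum, weight_sum = 0.0, 0.0
        for attr, score in match_scores.items():
            if score is None:        # missing attribute: skip rather than guess
                continue
            w = confidences.get(attr, 1.0)
            log_sum += w * math.log(max(score, 1e-9))
            weight_sum += w
        return math.exp(log_sum / weight_sum) if weight_sum else 0.0

    # Rank reference documents by decreasing fused relevance for a query.
    docs = {"doc1": {"image_feat": 0.8, "age": 0.5},
            "doc2": {"image_feat": 0.9, "age": None}}
    conf = {"image_feat": 2.0, "age": 1.0}
    ranking = sorted(docs, key=lambda d: fuse_degrees_of_match(docs[d], conf),
                     reverse=True)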
Choosing a Scanner: Points To Consider before Buying a Scanner.
ERIC Educational Resources Information Center
Raby, Chris
1998-01-01
Outlines ten factors to consider before buying a scanner: size of document; type of document; color; speed and volume; resolution; image enhancement; image compression; optical character recognition; scanning subsystem; and the option to use a commercial bureau service. The importance of careful analysis of requirements is emphasized. (AEF)
Illinois Occupational Skill Standards: Imaging/Pre-Press Cluster.
ERIC Educational Resources Information Center
Illinois Occupational Skill Standards and Credentialing Council, Carbondale.
This document, which is intended as a guide for work force preparation program providers, details the Illinois occupational skill standards for programs preparing students for employment in occupations in the imaging/pre-press cluster. The document begins with a brief overview of the Illinois perspective on occupational skill standards and…
iPhone 4s and iPhone 5s Imaging of the Eye.
Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L
2017-01-01
To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.
Analysis of line structure in handwritten documents using the Hough transform
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin
2010-01-01
In the analysis of handwriting in documents a central task is that of determining line structure of the text, e.g., number of text lines, location of their starting and end-points, line-width, etc. While simple methods can handle ideal images, real world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, noisy or degraded images etc. This paper explores the application of the Hough transform method to handwritten documents with the goal of automatically determining global document line structure in a top-down manner which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
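A hedged sketch of the core idea, assuming OpenCV: binarize the page, smear ink horizontally so each text line forms a long quasi-linear blob, and read the dominant near-horizontal angle from the Hough accumulator as a global skew estimate. The dilation width, Hough threshold, and angle window are illustrative assumptions.

    import cv2
    import numpy as np

    # Load a handwritten page and binarize it (ink = white for OpenCV ops).
    gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Horizontal dilation accentuates text lines as near-linear structures.
    smeared = cv2.dilate(binary, np.ones((1, 31), np.uint8))

    # Standard Hough transform; each result encodes a line as (rho, theta).
    lines = cv2.HoughLines(smeared, rho=1, theta=np.pi / 360, threshold=400)

    # Keep near-horizontal lines (theta close to 90 degrees) and take their
    # median angle as the document's global skew.
    skew_deg = 0.0
    if lines is not None:
        thetas = [float(l[0][1]) for l in lines
                  if abs(l[0][1] - np.pi / 2) < np.pi / 12]
        if thetas:
            skew_deg = np.degrees(np.median(thetas) - np.pi / 2)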
Text-image alignment for historical handwritten documents
NASA Astrophysics Data System (ADS)
Zinger, S.; Nerbonne, J.; Schomaker, L.
2009-01-01
We describe our work on text-image alignment in context of building a historical document retrieval system. We aim at aligning images of words in handwritten lines with their text transcriptions. The images of handwritten lines are automatically segmented from the scanned pages of historical documents and then manually transcribed. To train automatic routines to detect words in an image of handwritten text, we need a training set - images of words with their transcriptions. We present our results on aligning words from the images of handwritten lines and their corresponding text transcriptions. Alignment based on the longest spaces between portions of handwriting is a baseline. We then show that relative lengths, i.e. proportions of words in their lines, can be used to improve the alignment results considerably. To take into account the relative word length, we define the expressions for the cost function that has to be minimized for aligning text words with their images. We apply right to left alignment as well as alignment based on exhaustive search. The quality assessment of these alignments shows correct results for 69% of words from 100 lines, or 90% of partially correct and correct alignments combined.
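A minimal sketch of the relative-length idea (not the paper's exact cost function): predict word boundary positions by distributing the line image's pixel width in proportion to the character counts of the transcription words. The inter-word spacing constant is an illustrative assumption.

    def predicted_boundaries(transcript_words, line_width_px, inter_word_px=12):
        """Map relative word lengths onto pixel positions along a line image.

        A simplified stand-in for the paper's cost-function alignment:
        each word gets a share of the ink width proportional to its
        character count; `inter_word_px` is an illustrative assumption.
        """
        n = len(transcript_words)
        ink_width = line_width_px - inter_word_px * (n - 1)
        total_chars = sum(len(w) for w in transcript_words)
        bounds, x = [], 0.0
        for word in transcript_words:
            w_px = ink_width * len(word) / total_chars   # proportional share
            bounds.append((round(x), round(x + w_px)))   # (start, end) pixels
            x += w_px + inter_word_px
        return bounds

    # e.g. predicted_boundaries(["anno", "domini", "1650"], 600)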
TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, K; Mutic, S
2014-06-15
AAPM Task Group 132 was charged with reviewing current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input to another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission were under review by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
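A hedged sketch of the two-threshold foreground extraction described above, assuming NumPy: the gamma value and both thresholds are illustrative, and the filled-edge step is approximated with a simple gradient-magnitude test rather than the patent's exact construction.

    import numpy as np

    def extract_foreground(gray, gamma=0.8, t1=100, t2=60):
        """Two-color foreground extraction from a filled-in form image.

        gray: 2-D uint8 array. gamma, t1, t2 are illustrative values.
        Returns a binary image (0 = black user strokes, 255 = background).
        """
        # Gamma correction to enhance contrast.
        corrected = (255.0 * (gray / 255.0) ** gamma).astype(np.uint8)

        # First two-color image: global threshold on the corrected image.
        dark = corrected < t1

        # Crude stand-in for the filled-edge image: mark pixels lying on
        # strong edges via the gradient magnitude.
        gy, gx = np.gradient(corrected.astype(np.float32))
        edges = np.hypot(gx, gy) > t2

        # Combine the two two-color images: keep dark pixels on strong edges.
        foreground = dark & edges
        return np.where(foreground, 0, 255).astype(np.uint8)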
Novel computer-based endoscopic camera
NASA Astrophysics Data System (ADS)
Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia
1995-05-01
We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host medium via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.
Towards Mobile OCR: How To Take a Good Picture of a Document Without Sight
Cutter, Michael; Manduchi, Roberto
2015-01-01
The advent of mobile OCR (optical character recognition) applications on regular smartphones holds great promise for enabling blind people to access printed information. Unfortunately, these systems suffer from a problem: in order for OCR output to be meaningful, a well-framed image of the document needs to be taken, something that is difficult to do without sight. This contribution presents an experimental investigation of how blind people position and orient a camera phone while acquiring document images. We developed experimental software to investigate whether verbal guidance aids in the acquisition of OCR-readable images without sight. We report on our participants' feedback and performance before and after assistance from our software. PMID:26677461
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-10
... motor carrier of a scanned image of the original record; the driver would retain the original while the carrier maintains the scanned electronic image along with any supporting documents. ... plans to implement a new approach for receiving and processing RODS. Its drivers would complete their...
Multispectral image restoration of historical documents based on LAAMs and mathematical morphology
NASA Astrophysics Data System (ADS)
Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo
2014-09-01
This research introduces an automatic technique designed for the digital restoration of the damaged parts in historical documents. For this purpose an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels registered from the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distribution of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by the application of a region filling algorithm, based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and real data acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
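A minimal sketch of the region-filling step, assuming SciPy and OpenCV: classic iterative conditional dilation recovers the hole regions from interior seeds, and OpenCV inpainting stands in for the paper's color interpolation. The structuring element and inpaint radius are illustrative assumptions.

    import cv2
    import numpy as np
    from scipy import ndimage

    def fill_damaged_regions(image_bgr, hole_mask):
        """Fill cracks/holes flagged by `hole_mask` (uint8, 255 = damaged).

        Morphological region filling (X_k = dilate(X_{k-1}) ∩ mask) grows
        interior seeds to cover each hole region (components too thin to
        contain a seed are dropped as noise); cv2.inpaint then interpolates
        color. A sketch with illustrative parameters, not the paper's code.
        """
        mask = hole_mask > 0
        seeds = ndimage.binary_erosion(mask)     # start inside each region
        filled = seeds.copy()
        while True:                              # conditional dilation loop
            grown = ndimage.binary_dilation(filled) & mask
            if np.array_equal(grown, filled):
                break                            # converged: region filled
            filled = grown
        # Color interpolation over the filled region (radius illustrative).
        return cv2.inpaint(image_bgr, filled.astype(np.uint8) * 255, 3,
                           cv2.INPAINT_TELEA)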
[Development of an ophthalmological clinical information system for inpatient eye clinics].
Kortüm, K U; Müller, M; Babenko, A; Kampik, A; Kreutzer, T C
2015-12-01
In times of increasing digitalization in healthcare, departments of ophthalmology are faced with the challenge of introducing electronic health records (EHR); however, specialized software for ophthalmology is not available with most major EHR systems. The aim of this project was to create specific ophthalmological user interfaces for large inpatient eye care providers within a hospital-wide EHR. Additionally, the integration of ophthalmic imaging systems, scheduling, and surgical documentation should be achieved. The existing EHR i.s.h.med (Siemens, Germany) was modified using the Advanced Business Application Programming (ABAP) language to create specific ophthalmological user interfaces that reproduce and, moreover, optimize the clinical workflow. A user interface for documentation of ambulatory patients with eight tabs was designed. From June 2013 to October 2014 a total of 61,551 patient contacts were documented. For surgical documentation, a separate user interface was set up. User interfaces for digital clinical orders covering the registration and scheduling of operations were also set up. Direct integration of ophthalmic imaging modalities could be established. An ophthalmologist-oriented EHR for outpatient and surgical documentation in inpatient clinics was created and successfully implemented. By incorporating imaging procedures, the foundation for future smart/big data analyses was laid.
Commercial applications for optical data storage
NASA Astrophysics Data System (ADS)
Tas, Jeroen
1991-03-01
Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.
Correcting geometric and photometric distortion of document images on a smartphone
NASA Astrophysics Data System (ADS)
Simon, Christian; Williem; Park, In Kyu
2015-01-01
A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
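A hedged sketch of the geometric step, assuming OpenCV: rather than estimating vanishing points as the paper does, this simplified variant maps four detected page corners to a rectangle, which likewise sends both vanishing points to infinity. The output size is an illustrative A4-at-150dpi assumption.

    import cv2
    import numpy as np

    def rectify_document(image, corners, out_w=1240, out_h=1754):
        """Remove skew and perspective distortion given the four page corners.

        corners: float32 array of (x, y) points ordered top-left, top-right,
        bottom-right, bottom-left (assumed found by a contour detector).
        """
        target = np.float32([[0, 0], [out_w - 1, 0],
                             [out_w - 1, out_h - 1], [0, out_h - 1]])
        # Homography mapping the quadrilateral page onto a rectangle.
        H = cv2.getPerspectiveTransform(np.float32(corners), target)
        return cv2.warpPerspective(image, H, (out_w, out_h))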
Boost OCR accuracy using iVector based system combination approach
NASA Astrophysics Data System (ADS)
Peng, Xujun; Cao, Huaigu; Natarajan, Prem
2015-01-01
Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noise, and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combining diverse recognition systems using iVector-based features, a method newly developed in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system; an iVector is derived from a high-dimensional supervector of each text line and used to predict OCR accuracy. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using the word error rate (WER) metric.
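A minimal sketch of the hypothesis-merging rule described above: text-line hypotheses from multiple OCR systems whose bounding boxes overlap sufficiently compete, and the transcript with the highest predicted score survives. The overlap threshold and data layout are illustrative assumptions.

    def overlap_ratio(a, b):
        """Intersection-over-union of two (x0, y0, x1, y1) line boxes."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    def combine_systems(hypotheses, min_overlap=0.5):
        """hypotheses: list of (box, text, predicted_score) from all systems.

        Greedy merge: hypotheses covering the same line (IoU >= min_overlap)
        compete, and the one with the best predicted OCR score survives.
        """
        merged = []
        for box, text, score in sorted(hypotheses, key=lambda h: -h[2]):
            if all(overlap_ratio(box, m[0]) < min_overlap for m in merged):
                merged.append((box, text, score))
        return merged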
10 CFR 2.1013 - Use of the electronic docket during the proceeding.
Code of Federal Regulations, 2010 CFR
2010-01-01
... bi-tonal documents. (v) Electronic submissions must be generated in the appropriate PDF output format by using: (A) PDF—Formatted Text and Graphics for textual documents converted from native applications; (B) PDF—Searchable Image (Exact) for textual documents converted from scanned documents; and (C...
Content Recognition and Context Modeling for Document Analysis and Retrieval
ERIC Educational Resources Information Center
Zhu, Guangyu
2009-01-01
The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…
New concept high-speed and high-resolution color scanner
NASA Astrophysics Data System (ADS)
Nakashima, Keisuke; Shinoda, Shin'ichi; Konishi, Yoshiharu; Sugiyama, Kenji; Hori, Tetsuya
2003-05-01
We have developed a new-concept high-speed and high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. This high-resolution imaging capability allows various uses such as OCR, color document reading, and document camera applications. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed face up on its scan stage, without any special illumination. Using Blinkscan, a high-resolution color document can easily be input into a PC at high speed, so a paperless system can be built easily. The device is small, and since its footprint is also small, it can be placed on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing. Several hundred Blinkscan units have shipped, mainly for receptionist operations in banks and securities firms. We present the high-speed and high-resolution architecture of Blinkscan, clarify its advantage by comparing operation time with conventional image capture devices, and evaluate image quality under a variety of environmental conditions, such as geometric distortions and non-uniform brightness.
Texture for script identification.
Busch, Andrew; Boles, Wageeh W; Sridharan, Sridha
2005-11-01
The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
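A hedged sketch of the texture idea, assuming NumPy: a gray-level co-occurrence matrix (GLCM) computed over a binarized text block yields simple texture features (contrast, energy) that a classifier could use for script identification. The offset and feature pair are illustrative, not the paper's exact choices.

    import numpy as np

    def glcm_features(binary_block, dx=1, dy=0):
        """Texture features from a gray-level co-occurrence matrix (GLCM).

        binary_block: 2-D array of 0/1 (binarized text block).
        Counts co-occurrences of pixel values at offset (dy, dx), then
        returns (contrast, energy) -- an illustrative feature pair.
        """
        h, w = binary_block.shape
        a = binary_block[:h - dy, :w - dx]           # reference pixels
        b = binary_block[dy:, dx:]                   # offset neighbors
        counts = np.zeros((2, 2), dtype=np.float64)
        for i in range(2):
            for j in range(2):
                counts[i, j] = np.sum((a == i) & (b == j))
        p = counts / counts.sum()                    # normalized GLCM
        idx_i, idx_j = np.indices(p.shape)
        contrast = np.sum(p * (idx_i - idx_j) ** 2)  # local variation
        energy = np.sum(p ** 2)                      # texture uniformity
        return contrast, energy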
VizieR Online Data Catalog: 5yr radial velocity measurements of 19 Cepheids (Anderson+, 2016)
NASA Astrophysics Data System (ADS)
Anderson, R. I.; Casertano, S.; Riess, A. G.; Melis, C.; Holl, B.; Semaan, T.; Papics, P. I.; Blanco-Cuaresma, S.; Eyer, L.; Mowlavi, N.; Palaversa, L.; Roelens, M.
2016-11-01
We here present a detailed investigation of spectroscopic binarity of the 19 Cepheids for which HST/WFC3 spatial scan parallaxes are being recorded (Riess+ 2014ApJ...785..161R; Casertano+ 2016ApJ...825...11C). We have secured time-series observations from three different high-resolution echelle spectrographs: Coralie (R~60000) at the Swiss 1.2m Euler telescope located at La Silla Observatory, Chile; Hermes (R~85000) at the Flemish 1.2m Mercator telescope located at the Roque de los Muchachos Observatory on La Palma, Canary Islands, Spain; Hamilton (R~60000) at the 3m Shane telescope located at Lick Observatory, California, USA. (8 data files).
Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang
2007-01-01
The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct workflow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed to support a "communication loop" based on the concept of a classic "wet-read" system. The proposed system represents an attempt to address many of the problems seen in current communication workflows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in the native Digital Imaging and Communications in Medicine (DICOM) format with a Java applet, which allows accurate presentation along with use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install the necessary client software on demand. Application logic consists of a set of PHP (hypertext preprocessor) modules that are accessible through an application programming interface. The system may be adapted to any clinician-specialist communication loop and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007
Adaptive optics imaging of geographic atrophy.
Gocho, Kiyoko; Sarda, Valérie; Falah, Sabrina; Sahel, José-Alain; Sennlaub, Florian; Benchaboune, Mustapha; Ullern, Martine; Paques, Michel
2013-05-01
To report the findings of en face adaptive optics (AO) near infrared (NIR) reflectance fundus flood imaging in eyes with geographic atrophy (GA). An observational clinical study of AO NIR fundus imaging was performed in 12 eyes of nine patients with GA, and in seven controls, using a flood illumination camera operating at 840 nm, in addition to routine clinical examination. To document short-term and midterm changes, AO imaging sessions were repeated in four patients (mean interval between sessions 21 days; median follow-up 6 months). As compared with scanning laser ophthalmoscope imaging, AO NIR imaging improved the resolution of the changes affecting the RPE. Multiple hyporeflective clumps were seen within and around GA areas. Time-lapse imaging revealed micrometric-scale details of the emergence and progression of areas of atrophy as well as the complex kinetics of some hyporeflective clumps. Such dynamic changes were observed within as well as outside atrophic areas. In eyes affected by GA, AO NIR imaging allows high resolution documentation of the extent of RPE damage. It also revealed that a complex, dynamic process of redistribution of hyporeflective clumps throughout the posterior pole precedes and accompanies the emergence and progression of atrophy; therefore, these clumps are probably also a biomarker of RPE damage. AO NIR imaging may, therefore, be of interest to detect the earliest stages of GA, to document the retinal pathology, and to monitor its progression. (ClinicalTrials.gov number, NCT01546181.)
Reading and Writing in the 21st Century.
ERIC Educational Resources Information Center
Soloway, Elliot; And Others
1993-01-01
Describes MediaText, a multimedia document processor developed at the University of Michigan that allows the incorporation of video, music, sound, animations, still images, and text into one document. Interactive documents are discussed, and the need for users to be able to write documents as well as read them is emphasized. (four references) (LRW)
Embedding the shapes of regions of interest into a Clinical Document Architecture document.
Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet
2015-03-01
Sharing a medical image visually annotated with a region of interest with a remotely located specialist for consultation is good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view the images, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and Extensible Stylesheet Language Transformations (XSLT) standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with coordinate values of regions of interest to recipients. Recipients can easily view the documents and display embedded regions of interest by rendering them in their web browser of choice. © The Author(s) 2014.
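A hedged sketch of the embedding idea, using only Python's standard library: ROI coordinates are stored as an element inside a CDA-like XML observation. The element names and attribute layout are illustrative assumptions, not the exact HL7-specified vocabulary; a paired XSLT style sheet would render the polygon over the image in a web browser.

    import xml.etree.ElementTree as ET

    def embed_roi(image_ref, points):
        """Build a CDA-style observation carrying an ROI polygon.

        image_ref: reference to the image entry; points: [(x, y), ...].
        Element and attribute names here are illustrative, not the exact
        HL7 CDA schema.
        """
        obs = ET.Element("observation", classCode="ROIBND")
        ET.SubElement(obs, "reference", value=image_ref)
        value = ET.SubElement(obs, "value", representation="polygon")
        value.text = " ".join(f"{x},{y}" for x, y in points)
        return ET.tostring(obs, encoding="unicode")

    # e.g. embed_roi("image_001.dcm", [(120, 80), (180, 80), (150, 140)])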
Girolami, Antonio; Napolitano, Fabio; Faraone, Daniela; Di Bello, Gerardo; Braghieri, Ada
2014-01-01
The object of the investigation was the appearance of Lucanian dry sausage, understood as color and visible fat ratio. The study was carried out on dry sausages produced in 10 different salami factories and seasoned for 18 days on average. We studied the effect of the origin of the raw material (5 producers used meat bought from the market and the other 5 used meat from pigs bred on their own farms) and of the salami factory or brand on meat color, fat color, and visible fat ratio in dry sausages. The sausage slices were photographed and the images were analyzed with a computer vision system to measure the changes in the colorimetric characteristics L*, a*, b*, hue, and chroma, and in the visible fat area ratio. The last parameter was assessed on the slice surface using image binarization. A consumer test was conducted to determine the relationship between the perception of visible fat on the sausage slice surface and the acceptability and preference of this product. The consumers were asked to look carefully at the 6 sausage slices in a photo, minding the presence of fat, and to identify (a) the slices they considered unacceptable for consumption and (b) the slice they preferred. The results show that the color of the sausage lean part varies in relation to the raw material employed and to the producer or brand (P<0.001). Moreover, the sausage meat color is not uniform in some salami factories (P<0.05-0.001). In all salami factories the sausages show high uniformity in fat color. The visible fat ratio of the sausage slices is higher (P<0.001) in the product from salami factories without a pig-breeding farm. The fat percentage is highly variable (P<0.001) among the sausages of each salami factory. On the whole, the product the consumers consider acceptable and are inclined to eat has a low fat percentage (P<0.001). Our consumers (about 70%) prefer leaner slices (P<0.001). Women, in particular, show a higher preference for the leanest slices (P<0.001). © 2013.
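A minimal sketch of the visible-fat measurement, assuming OpenCV: Otsu binarization separates the bright fat from the darker lean part inside the slice region, and the fat ratio is the white-pixel fraction. The slice mask and threshold choice are illustrative assumptions, not the paper's exact procedure.

    import cv2
    import numpy as np

    def visible_fat_ratio(slice_bgr, slice_mask):
        """Fraction of a sausage slice surface classified as visible fat.

        slice_bgr: color photo of the slice; slice_mask: uint8, 255 inside
        the slice. Otsu thresholding on the grayscale image is an
        illustrative stand-in for the paper's binarization step.
        """
        gray = cv2.cvtColor(slice_bgr, cv2.COLOR_BGR2GRAY)
        # Otsu picks the threshold separating bright fat from darker lean.
        _, fat = cv2.threshold(gray, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        fat_pixels = np.count_nonzero((fat == 255) & (slice_mask == 255))
        slice_pixels = np.count_nonzero(slice_mask == 255)
        return fat_pixels / float(slice_pixels)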
Web Mining for Web Image Retrieval.
ERIC Educational Resources Information Center
Chen, Zheng; Wenyin, Liu; Zhang, Feng; Li, Mingjing; Zhang, Hongjiang
2001-01-01
Presents a prototype system for image retrieval from the Internet using Web mining. Discusses the architecture of the Web image retrieval prototype; document space modeling; user log mining; and image retrieval experiments to evaluate the proposed system. (AEF)
36 CFR § 1238.14 - What are the microfilming requirements for permanent and unscheduled records?
Code of Federal Regulations, 2013 CFR
2013-07-01
... processing procedures in ANSI/AIIM MS1 and ANSI/AIIM MS23 (both incorporated by reference, see § 1238.5). (d... reference, see § 1238.5). (2) Background density of images. Agencies must use the background ISO standard... densities for images of documents are as follows: Classification Description of document Background density...
ERIC Educational Resources Information Center
van Boxtel, Carla; van Drie, Jannet
2012-01-01
An important goal of history education is the development of a chronological frame of reference that can be used to interpret and date historical images and documents. Despite the importance of this contextualization goal, little is known about the knowledge and strategies that allow students to situate information historically. Two studies were…
ERIC Educational Resources Information Center
Schwartz, Stanley F.
This publication introduces electronic document imaging systems and provides guidance for local governments in New York in deciding whether such systems should be adopted for their own records and information management purposes. It advises local governments on how to develop plans for using such technology by discussing its advantages and…
Facing the Limitations of Electronic Document Handling.
ERIC Educational Resources Information Center
Moralee, Dennis
1985-01-01
This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)
Use of Image Based Modelling for Documentation of Intricately Shaped Objects
NASA Astrophysics Data System (ADS)
Marčiš, M.; Barták, P.; Valaška, D.; Fraštia, M.; Trhan, O.
2016-06-01
In the documentation of cultural heritage, we can encounter three-dimensional shapes and structures that are complicated to measure. Such objects are, for example, spiral staircases, timber roof trusses, historical furniture, or folk costumes, where it is nearly impossible to use traditional surveying or terrestrial laser scanning effectively due to the shape of the object, its dimensions, and the crowded environment. Current methods of digital photogrammetry can be very helpful in such cases, with the emphasis on automated processing of the extensive image data. The created high-resolution 3D models and 2D orthophotos are very important for the documentation of architectural elements, and they can serve as an ideal base for vectorization and 2D drawing documentation. This contribution describes various uses of image-based modelling in specific interior spaces and for specific objects. The advantages and disadvantages of the photogrammetric measurement of such objects in comparison to other surveying methods are reviewed.
Badano, Luigi P; Kolias, Theodore J; Muraru, Denisa; Abraham, Theodore P; Aurigemma, Gerard; Edvardsen, Thor; D'Hooge, Jan; Donal, Erwan; Fraser, Alan G; Marwick, Thomas; Mertens, Luc; Popescu, Bogdan A; Sengupta, Partho P; Lancellotti, Patrizio; Thomas, James D; Voigt, Jens-Uwe
2018-03-27
The EACVI/ASE/Industry Task Force to standardize deformation imaging prepared this consensus document to standardize definitions and techniques for using two-dimensional (2D) speckle tracking echocardiography (STE) to assess left atrial, right ventricular, and right atrial myocardial deformation. This document is intended for both the technical engineering community and the clinical community at large to provide guidance on selecting the functional parameters to measure and how to measure them using 2D STE. This document aims to represent a significant step forward in the collaboration between the scientific societies and industry, since technical specifications of the software packages designed to post-process echocardiographic datasets have been agreed and shared before their actual development. Hopefully, this will lead to more clinically oriented software packages that are better tailored to clinical needs and will allow industry to save time and resources in their development.
Facades structure detection by geometric moment
NASA Astrophysics Data System (ADS)
Jiang, Diqiong; Chen, Hui; Song, Rui; Meng, Lei
2017-06-01
This paper proposes a novel method for extracting facade structure from real-world pictures by using local geometric moments. Compared with existing methods, the proposed method has the advantages of being easy to implement, having low computational cost, and being robust to noise such as uneven illumination, shadow, and shade from other objects. Besides, our method is faster and has lower space complexity, making it feasible for mobile devices and situations where real-time data processing is required. Specifically, a facade structure model is first proposed to support our noise reduction method, which is based on a self-adapting local threshold with a Gaussian weighted average for image binarization and on the features of the facade structure. Next, we divide the picture of the building into many individual areas, each of which represents a door or a window in the picture. Subsequently we calculate the geometric moments and centroid for each individual area, identifying collinear areas based on their feature vectors; each group of collinear areas is thereafter replaced with a line. Finally, we comprehensively analyze all the geometric moments and centroids to find the facade structure of the building. We compare our results with other methods and especially report results from pictures taken in bad environmental conditions. Our system is designed for two applications: the reconstruction of facades based on high-resolution ground-based imagery, and positioning systems based on recognizing urban buildings.
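A hedged sketch of the early steps, assuming OpenCV: Gaussian-weighted adaptive thresholding binarizes the facade photo, connected components isolate window/door candidates, and image moments give each area's centroid. The block size and area limits are illustrative assumptions.

    import cv2
    import numpy as np

    # Binarize with a self-adapting local threshold (Gaussian weighted mean).
    gray = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, blockSize=31, C=10)

    # Split the picture into individual areas (window/door candidates).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

    centroids = []
    for i in range(1, n):  # label 0 is the background
        if not 200 < stats[i, cv2.CC_STAT_AREA] < 50000:
            continue       # illustrative size gate to drop noise blobs
        m = cv2.moments((labels == i).astype(np.uint8), binaryImage=True)
        if m["m00"] > 0:   # centroid from raw geometric moments
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # Collinear centroids (e.g. nearly equal y within a tolerance) indicate
    # one row of windows and can be replaced by a line, as in the paper.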
Szigeti, Krisztián; Szabó, Tibor; Korom, Csaba; Czibak, Ilona; Horváth, Ildikó; Veres, Dániel S; Gyöngyi, Zoltán; Karlinger, Kinga; Bergmann, Ralf; Pócsik, Márta; Budán, Ferenc; Máthé, Domokos
2016-02-11
Lung diseases (resulting from air pollution) require a widely accessible method for risk estimation and early diagnosis to ensure proper and responsive treatment. Radiomics-based fractal dimension analysis of X-ray computed tomography attenuation patterns in chest voxels of mice exposed to different air polluting agents was performed to model early stages of disease and establish differential diagnosis. To model different types of air pollution, BALBc/ByJ mouse groups were exposed to cigarette smoke combined with ozone or to sulphur dioxide gas, and a control group was established. Two weeks after exposure, the frequency distributions of image voxel attenuation data were evaluated. Specific cut-off ranges were defined to group voxels by attenuation. Cut-off ranges were binarized and their spatial pattern was associated with a calculated fractal dimension, then abstracted as a mathematical function of fractal dimension versus cut-off range. Nonparametric Kruskal-Wallis (KW) and Mann-Whitney post hoc (MWph) tests were used. Each plot of fractal dimension versus cut-off range was found to contain two distinctive Gaussian curves. The ratios of the Gaussian curve parameters are considerably different and statistically distinguishable among the three exposure groups. A new radiomics evaluation method was established based on analysis of the fractal dimension of chest X-ray computed tomography data segments. The specific attenuation patterns calculated using our method may help diagnose and monitor certain lung diseases, such as chronic obstructive pulmonary disease (COPD), asthma, tuberculosis, or lung carcinomas.
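A minimal sketch of the fractal-dimension step, assuming NumPy: box counting over a binarized attenuation mask estimates the fractal dimension as the slope of log(count) versus log(1/box size). The box sizes are illustrative assumptions.

    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a 2-D binary mask.

        mask: boolean array (True where attenuation falls in the cut-off
        range). For each box size s, count boxes containing any True
        pixel; the dimension is the slope of log(count) vs. log(1/s).
        """
        counts = []
        for s in sizes:
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(boxes.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                              np.log(np.array(counts)), 1)
        return slope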
Ancient administrative handwritten documents: X-ray analysis and imaging
Albertin, F.; Astolfo, A.; Stampanoni, M.; Peccenini, Eva; Hwu, Y.; Kaplan, F.; Margaritondo, G.
2015-01-01
Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project. PMID:25723946
Faxed document image restoration method based on local pixel patterns
NASA Astrophysics Data System (ADS)
Akiyama, Teruo; Miyamoto, Nobuo; Oguro, Masami; Ogura, Kenji
1998-04-01
A method for restoring degraded faxed document images using the patterns of pixels that make up small areas in a document is proposed. The method effectively restores faxed images that contain the halftone textures and/or high-density salt-and-pepper noise that degrade OCR system performance. In the halftone image restoration process, white-centered 3 × 3 pixel areas, in which black and white pixels alternate, are first identified as halftone textures using the distribution of the pixel values, and then the white center pixels are inverted to black. To remove high-density salt-and-pepper noise, it is assumed that the degradation is caused by ill-balanced bias and inappropriate thresholding of the sensor output, which results in the addition of random noise. The restored image can be estimated using an approximation that applies the inverse operation of the assumed original process. To process degraded faxed images, the algorithms mentioned above are combined. An experiment was conducted using 24 especially poor quality examples selected from data sets that exemplify what practical fax-based OCR systems cannot handle. The maximum recovery rate in terms of mean square error was 98.8 percent.
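A hedged sketch of the halftone step, assuming NumPy: find white pixels whose neighborhood alternates in the checkerboard sense (all four 4-neighbors black) and invert them to black. The strictness of the pattern test is an illustrative simplification of the paper's 3 × 3 analysis.

    import numpy as np

    def suppress_halftone(binary):
        """Invert white-centered checkerboard pixels in a fax image.

        binary: 2-D array, 1 = black ink, 0 = white. A pixel is treated
        as halftone texture when it is white and its four 4-neighbors are
        all black -- a simplified version of the paper's 3x3 pattern test.
        """
        b = np.pad(binary, 1, constant_values=0)
        center_white = b[1:-1, 1:-1] == 0
        neighbors_black = ((b[:-2, 1:-1] == 1) & (b[2:, 1:-1] == 1) &
                           (b[1:-1, :-2] == 1) & (b[1:-1, 2:] == 1))
        out = binary.copy()
        out[center_white & neighbors_black] = 1   # white center -> black
        return out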
Räber, Lorenz; Mintz, Gary S; Koskinas, Konstantinos C; Johnson, Thomas W; Holm, Niels R; Onuma, Yoshinubo; Radu, Maria D; Joner, Michael; Yu, Bo; Jia, Haibo; Menevau, Nicolas; de la Torre Hernandez, Jose M; Escaned, Javier; Hill, Jonathan; Prati, Francesco; Colombo, Antonio; di Mario, Carlo; Regar, Evelyn; Capodanno, Davide; Wijns, William; Byrne, Robert A; Guagliumi, Giulio
2018-05-22
This Consensus Document is the first of two reports summarizing the views of an expert panel organized by the European Association of Percutaneous Cardiovascular Interventions (EAPCI) on the clinical use of intracoronary imaging including intravascular ultrasound (IVUS) and optical coherence tomography (OCT). The first document appraises the role of intracoronary imaging to guide percutaneous coronary interventions (PCIs) in clinical practice. Current evidence regarding the impact of intracoronary imaging guidance on cardiovascular outcomes is summarized, and patients or lesions most likely to derive clinical benefit from an imaging-guided intervention are identified. The relevance of the use of IVUS or OCT prior to PCI for optimizing stent sizing (stent length and diameter) and planning the procedural strategy is discussed. Regarding post-implantation imaging, the consensus group recommends key parameters that characterize an optimal PCI result and provides cut-offs to guide corrective measures and optimize the stenting result. Moreover, routine performance of intracoronary imaging in patients with stent failure (restenosis or stent thrombosis) is recommended. Finally, strengths and limitations of IVUS and OCT for guiding PCI and assessing stent failures and areas that warrant further research are critically discussed.
Sub-word image clustering in Farsi printed books
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-02-01
Most OCR systems are designed for the recognition of a single page. In the case of unfamiliar typefaces, low-quality paper, and degraded prints, the performance of these products drops sharply. However, an OCR system can use the redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for applications dealing with large printed documents. We assume that the whole document is printed in a single unknown font with low print quality. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Due to the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on the Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. Then all centers of the created clusters are labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
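A minimal sketch of the incremental clustering, assuming NumPy: a cheap gate on the area-to-perimeter ratio rejects dissimilar candidates before the pixel-level comparison, then a normalized Hamming distance between size-normalized binary images decides membership. Both thresholds are illustrative assumptions.

    import numpy as np

    def hamming_distance(a, b):
        """Normalized Hamming distance between equal-size binary images."""
        return np.count_nonzero(a != b) / a.size

    def cluster_subwords(images, shapes, d_max=0.08, r_max=0.15):
        """Incrementally cluster sub-word images of a large printed book.

        images: binary arrays already size-normalized; shapes: their
        area-to-perimeter ratios. Each image joins the first cluster
        whose center is close enough, else it founds a new cluster.
        d_max / r_max are illustrative thresholds, not the paper's values.
        """
        centers, members = [], []      # cluster centers and index lists
        for idx, (img, ratio) in enumerate(zip(images, shapes)):
            for c, (center_img, center_ratio) in enumerate(centers):
                # Cheap shape gate before the pixel-level matching.
                if abs(ratio - center_ratio) / center_ratio > r_max:
                    continue
                if hamming_distance(img, center_img) <= d_max:
                    members[c].append(idx)
                    break
            else:                      # no cluster matched: start a new one
                centers.append((img, ratio))
                members.append([idx])
        return members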
IHE profiles applied to regional PACS.
Fernandez-Bayó, Josep
2011-05-01
PACS has been widely adopted as an image storage solution that perfectly fits the radiology department workflow and that can easily be extended to other hospital departments. Integration with other hospital systems, such as the Radiology Information System, the Hospital Information System, and the Electronic Patient Record, has been achieved but remains a challenging aim. PACS also creates the perfect environment for teleradiology and teleworking setups. One step further is the regional PACS concept, where different hospitals or healthcare enterprises share images in an integrated Electronic Patient Record. Among the different solutions available to share images between hospitals, the IHE (Integrating the Healthcare Enterprise) organization presents the Cross-Enterprise Document Sharing (XDS) profile, which allows sharing images from different hospitals even if they have different PACS vendors. Adopting XDS has multiple advantages: images do not need to be duplicated in a central archive to be shared among the different healthcare enterprises; they only need to be indexed and published in a central document registry. In the XDS profile, IHE defines the mechanisms to publish and index the images in the central document registry. It also defines the mechanisms that each hospital will use to retrieve those images regardless of the hospital PACS in which they are stored. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
A framework for biomedical figure segmentation towards image-based document retrieval
2013-01-01
The figures included in many of the biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that it is possible to integrate the information that is extracted from figures in classical document classification and retrieval tasks in order to improve their accuracy. One important observation about the figures included in biomedical publications is that they are often composed of multiple subfigures or panels, each describing different methodologies or results. The use of these multimodal figures is a common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for a better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (i.e., bar charts and line charts) with multiple layouts. Also, certain types of biomedical figures are text-heavy (e.g., DNA sequence and protein sequence images) and differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single-modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking into consideration the document layout allows us to correctly extract figures from the PDF document and associate their corresponding captions. We combine pixel-level representations of the extracted images with information gathered from their corresponding captions to estimate the number of panels in the figure. Thus, our approach simultaneously identifies the number of panels and the layout of figures. In order to evaluate the approach described here, we applied our system to documents containing protein-protein interactions (PPIs) and compared the results against a gold standard that was annotated by biologists. Experimental results showed that our automatic figure segmentation approach surpasses pure caption-based and image-based approaches, achieving 96.64% accuracy. To allow for efficient retrieval of information, as well as to provide the basis for integration into document classification and retrieval systems among others, we further developed a web-based interface that lets users easily retrieve panels containing the terms specified in the user queries. PMID:24565394
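A hedged sketch of one simple panel-splitting heuristic (the paper's combined caption-plus-pixel approach is more robust): cut the figure at full-height whitespace bands to separate side-by-side panels; the same idea applied row-wise handles vertically stacked panels. The whiteness threshold and minimum band width are illustrative assumptions.

    import numpy as np

    def split_columns(fig_gray, white=245, min_band=10):
        """Split a grayscale figure into side-by-side panels.

        Finds vertical whitespace bands (columns entirely near-white and
        at least `min_band` pixels wide) and cuts the figure there.
        """
        is_white_col = (fig_gray >= white).all(axis=0)
        panels, start, x = [], 0, 0
        while x < len(is_white_col):
            if is_white_col[x]:
                band_start = x
                while x < len(is_white_col) and is_white_col[x]:
                    x += 1
                # Interior band wide enough to separate two panels.
                if x - band_start >= min_band and band_start > 0:
                    panels.append(fig_gray[:, start:band_start])
                    start = x
            else:
                x += 1
        panels.append(fig_gray[:, start:])
        return [p for p in panels if p.shape[1] > 0]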
Rosas-Romero, Roberto; Martínez-Carballido, Jorge; Hernández-Capistrán, Jonathan; Uribe-Valencia, Laura J
2015-09-01
Diabetes increases the risk of developing deterioration in the blood vessels that supply the retina, an ailment known as Diabetic Retinopathy (DR). Since this disease is asymptomatic, it can only be diagnosed by an ophthalmologist. However, the number of ophthalmologists grows more slowly than the population with diabetes, so preventive and early diagnosis is difficult due to the lack of opportunity in terms of time and cost. Preliminary, affordable and accessible ophthalmological diagnosis would give the opportunity to perform routine preventive examinations, indicating the need to consult an ophthalmologist during the non-proliferative stage. During this stage, a lesion known as a microaneurysm (MA) appears on the retina; it is one of the first clinically observable lesions that indicate the disease. In recent years, different image processing algorithms that allow the detection of DR have been developed; however, the issue is still open, since acceptable levels of sensitivity and specificity have not yet been reached, preventing their use as a pre-diagnostic tool. Consequently, this work proposes a new approach for MA detection based on (1) reduction of non-uniform illumination; (2) normalization of image grayscale content to reduce the variability of images acquired in different contexts; (3) application of the bottom-hat transform to leave reddish regions intact while suppressing bright objects; (4) binarization of the image of interest, so that objects corresponding to MAs, blood vessels and other reddish objects (regions of interest, ROIs) are completely separated from the background; (5) application of the hit-or-miss transformation on the binary image to remove blood vessels from the ROIs; (6) extraction of a first feature that discriminates round-shaped candidates (MAs) from elongated ones (vessels) through application of Principal Component Analysis (PCA); and (7) extraction of a second feature that counts the number of times the Radon transform of a candidate ROI, evaluated at the set of discrete angle values {0°, 1°, 2°, …, 180°}, is characterized by a valley between two peaks; together, the two features distinguish real MAs from false positives (FPs). The proposed approach is tested on the public DiaretDB1 database and the Retinopathy Online Challenge (ROC) competition database. The proposed MA detection method achieves sensitivity, specificity and precision of 92.32%, 93.87% and 95.93% on the DiaretDB1 database and 88.06%, 97.47% and 92.19% on the ROC database. Theory, results, challenges and performance related to the proposed MA detection method are presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
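Steps (3) and (4) of the pipeline lend themselves to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes the green channel of a fundus image as input, and the structuring-element size and threshold are illustrative values.

```python
import numpy as np
from scipy import ndimage

def candidate_mask(green_channel: np.ndarray,
                   closing_size: int = 15,
                   threshold: float = 10.0) -> np.ndarray:
    """Bottom-hat transform followed by a fixed threshold (steps 3-4 above).
    MAs and vessels appear dark in the green channel of a fundus image, so
    the bottom-hat (grey closing minus the image) responds to these reddish
    structures while suppressing bright objects. `closing_size` and
    `threshold` are illustrative, not values from the paper."""
    img = green_channel.astype(np.float64)
    bottom_hat = ndimage.grey_closing(img, size=closing_size) - img
    return bottom_hat > threshold  # binary map of candidate ROIs (step 4)
```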
Enabling outsourcing XDS for imaging on the public cloud.
Ribeiro, Luís S; Rodrigues, Renato P; Costa, Carlos; Oliveira, José Luís
2013-01-01
Picture Archiving and Communication System (PACS) has been the main paradigm supporting medical imaging workflows over the last decades. Despite its consolidation, the appearance of Cross-Enterprise Document Sharing for imaging (XDS-I), within the IHE initiative, constitutes a great opportunity to readapt the PACS workflow for inter-institutional data exchange. XDS-I provides centralized discovery of medical imaging and associated reports. However, the centralized XDS-I actors (document registry and repository) must be deployed in a trustworthy node in order to safeguard patient privacy, data confidentiality and integrity. This paper presents XDS for Protected Imaging (XDS-p), a new approach to XDS-I that can be outsourced (e.g. to cloud computing) while preserving privacy, confidentiality and integrity and addressing legal concerns about patients' medical information.
Electronic Document Supply Systems.
ERIC Educational Resources Information Center
Cawkell, A. E.
1991-01-01
Describes electronic document delivery systems used by libraries and document image processing systems used for business purposes. Topics discussed include technical specifications; analogue read-only laser videodiscs; compact discs and CD-ROM; WORM; facsimile; ADONIS (Article Delivery over Network Information System); DOCDEL; and systems at the…
Mars Rover imaging systems and directional filtering
NASA Technical Reports Server (NTRS)
Wang, Paul P.
1989-01-01
Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose was to enhance the personal knowledge base on the technical problems of pattern recognition and image understanding that must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents were classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).
NASA Technical Reports Server (NTRS)
Mahy, L.; Martins, F.; Donati, J.-F.; Bouret, J.-C.
2011-01-01
We present an in-depth study of the two components of the binary system LZ Cep to constrain the effects of binarity on the evolution of massive stars. Methods. We analyzed a set of high-resolution, high signal-to-noise ratio optical spectra obtained over the orbital period of the system to perform a spectroscopic disentangling and derive an orbital solution. We subsequently determine the stellar properties of each component by means of an analysis with the CMFGEN atmosphere code. Finally, with the derived stellar parameters, we model the Hipparcos photometric light curve using the program NIGHTFALL to obtain the orbit inclination and the stellar masses. Results. LZ Cep is an O9III+ON9.7V binary. It is a semi-detached system in which either the primary or the secondary star almost fills up its Roche lobe. The dynamical masses are about 16.0 M⊙ (primary) and 6.5 M⊙ (secondary). The latter is lower than the typical mass of late-type O stars. The secondary component is chemically more evolved than the primary (which barely shows any sign of CNO processing), with strong helium and nitrogen enhancements as well as carbon and oxygen depletions. These properties (surface abundances and mass) are typical of Wolf-Rayet stars, although the spectral type is ON9.7V. The luminosity of the secondary is consistent with that of core He-burning objects. The preferred, tentative evolutionary scenario to explain the observed properties involves mass transfer from the secondary - which was initially more massive - towards the primary. The secondary is now almost a core He-burning object, probably with only a thin envelope of H-rich and CNO-processed material. A very inefficient mass transfer is necessary to explain the chemical appearance of the primary. Alternative scenarios are discussed, but they are affected by greater uncertainties.
NASA Astrophysics Data System (ADS)
Mahy, L.; Martins, F.; Machado, C.; Donati, J.-F.; Bouret, J.-C.
2011-09-01
Aims: We present an in-depth study of the two components of the binary system LZ Cep to constrain the effects of binarity on the evolution of massive stars. Methods: We analyzed a set of high-resolution, high signal-to-noise ratio optical spectra obtained over the orbital period of the system to perform a spectroscopic disentangling and derive an orbital solution. We subsequently determine the stellar properties of each component by means of an analysis with the CMFGEN atmosphere code. Finally, with the derived stellar parameters, we model the Hipparcos photometric light curve using the program NIGHTFALL to obtain the orbit inclination and the stellar masses. Results: LZ Cep is an O9III+ON9.7V binary. It is a semi-detached system in which either the primary or the secondary star almost fills up its Roche lobe. The dynamical masses are about 16.0 M⊙ (primary) and 6.5 M⊙ (secondary). The latter is lower than the typical mass of late-type O stars. The secondary component is chemically more evolved than the primary (which barely shows any sign of CNO processing), with strong helium and nitrogen enhancements as well as carbon and oxygen depletions. These properties (surface abundances and mass) are typical of Wolf-Rayet stars, although the spectral type is ON9.7V. The luminosity of the secondary is consistent with that of core He-burning objects. The preferred, tentative evolutionary scenario to explain the observed properties involves mass transfer from the secondary - which was initially more massive - towards the primary. The secondary is now almost a core He-burning object, probably with only a thin envelope of H-rich and CNO-processed material. A very inefficient mass transfer is necessary to explain the chemical appearance of the primary. Alternative scenarios are discussed, but they are affected by greater uncertainties.
Optimisation approaches for concurrent transmitted light imaging during confocal microscopy.
Collings, David A
2015-01-01
The transmitted light detectors present on most modern confocal microscopes are an under-utilised tool for the live imaging of plant cells. As the light forming the image in this detector is not passed through a pinhole, out-of-focus light is not removed. It is this extended focus that allows the transmitted light image to provide cellular and organismal context for fluorescence optical sections generated confocally. More importantly, the transmitted light detector provides images that have spatial and temporal registration with the fluorescence images, unlike images taken with a separately-mounted camera. Because plant samples, with their pigments and air pockets in leaves, often make transmitted light imaging difficult, this study documents several approaches to improving transmitted light images, beginning with ensuring that the light paths through the microscope are correctly aligned (Köhler illumination). Pigmented samples can be imaged in real colour using sequential scanning with red, green and blue lasers. The resulting transmitted light images can be optimised and merged in ImageJ to generate colour images that maintain registration with concurrent fluorescence images. For faster imaging of pigmented samples, transmitted light images can be formed with non-absorbed wavelengths. Transmitted light images of Arabidopsis leaves expressing GFP can be improved by concurrent illumination with green and blue light. If the blue light used for YFP excitation is blocked from the transmitted light detector with a cheap coloured glass filter, the non-absorbed green light will form an improved transmitted light image. Changes in sample colour can be quantified by transmitted light imaging. This has been documented in red onion epidermal cells, where changes in vacuolar pH triggered by the weak base methylamine result in measurable colour changes in the vacuolar anthocyanin. Many plant cells contain visible levels of pigment. The transmitted light detector provides a useful tool for documenting and measuring changes in these pigments while maintaining registration with confocal imaging.
Graph-based layout analysis for PDF documents
NASA Astrophysics Data System (ADS)
Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao
2013-03-01
To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digitally born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives, including text, image and path elements, are processed to produce text and non-text layers for separate analysis. The graph-based method is developed at the superpixel representation level, and page text elements corresponding to vertices are used to construct an undirected graph. The Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, on the other hand, are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
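As a rough illustration of the top-down cut described above, the sketch below builds a spanning tree over text-element centres and removes edges much longer than the median edge length, leaving connected components as candidate blocks. The cut factor and the use of scipy's MST routine (rather than the paper's exact Kruskal formulation and thresholds) are assumptions:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

def group_text_elements(centers, cut_factor=2.0):
    """Group page text elements (an (n, 2) array of centre coordinates) by
    building a spanning tree over them and cutting edges much longer than
    the median edge. `cut_factor` is an illustrative threshold."""
    dists = cdist(centers, centers)                  # Euclidean distances
    tree = minimum_spanning_tree(csr_matrix(dists)).toarray()
    edge_lengths = tree[tree > 0]
    cutoff = cut_factor * np.median(edge_lengths)    # top-down cut
    tree[tree > cutoff] = 0                          # drop improbable links
    _, labels = connected_components(csr_matrix(tree), directed=False)
    return labels                                    # block id per element
```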
Degraded character recognition based on gradient pattern
NASA Astrophysics Data System (ADS)
Babu, D. R. Ramesh; Ravishankar, M.; Kumar, Manish; Wadera, Kevin; Raj, Aakash
2010-02-01
Degraded character recognition is a challenging problem in the field of Optical Character Recognition (OCR). The performance of an OCR system depends upon the print quality of the input documents. Many OCR systems have been designed that correctly identify finely printed documents, but very little work has been reported on the recognition of degraded documents. The efficiency of an OCR system decreases if the input image is degraded. In this paper, a novel approach based on gradient patterns for recognizing degraded printed characters is proposed. The approach makes use of the gradient pattern of an individual character for recognition. Experiments were conducted on character images that were either digitally written or degraded characters extracted from historical documents, and the results are found to be satisfactory.
Digital-image processing and image analysis of glacier ice
Fitzpatrick, Joan J.
2013-01-01
This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
Nanotechnology-Enabled Optical Molecular Imaging of Breast Cancer
2008-07-01
KEY RESEARCH ACCOMPLISHMENTS: • Design of needle-based fiber optic imaging system completed and development of first… As described in the Statement of Work, Year 1 plans focused on design of this system and beginning initial construction.
StereoGene: rapid estimation of genome-wide correlation of continuous or interval feature data.
Stavrovskaya, Elena D; Niranjan, Tejasvi; Fertig, Elana J; Wheelan, Sarah J; Favorov, Alexander V; Mironov, Andrey A
2017-10-15
Genomic features with similar genome-wide distributions are generally hypothesized to be functionally related; for example, colocalization of histones and transcription start sites indicates chromatin regulation of transcription factor activity. Therefore, statistical algorithms that perform spatial, genome-wide correlation among genomic features are required. Here, we propose a method, StereoGene, that rapidly estimates genome-wide correlation among pairs of genomic features. These features may represent high-throughput data mapped to a reference genome or sets of genomic annotations in that reference genome. StereoGene enables correlation of continuous data directly, avoiding data binarization and the subsequent loss of information. Correlations are computed among neighboring genomic positions using kernel correlation. Representing the correlation as a function of genome position, StereoGene outputs the local correlation track as part of the analysis. StereoGene also accounts for confounders such as input DNA by partial correlation. We apply our method to numerous comparisons of ChIP-Seq datasets from the Human Epigenome Atlas and FANTOM CAGE to demonstrate its wide applicability. We observe changes in the correlation between epigenomic features across developmental trajectories of several tissue types consistent with known biology, and find a novel spatial correlation of CAGE clusters with donor splice sites and with poly(A) sites. These analyses provide examples of the broad applicability of StereoGene for regulatory genomics. The StereoGene C++ source code, program documentation, Galaxy integration scripts and examples are available from the project homepage http://stereogene.bioinf.fbb.msu.ru/. Contact: favorov@sensi.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
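A toy version of the kernel correlation idea (not StereoGene itself, whose estimator and defaults differ) can be written in a few lines: smooth both coverage tracks with a Gaussian kernel and report a windowed Pearson correlation along the genome, which plays the role of the local correlation track. The kernel width, window size, and step are illustrative values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def local_kernel_correlation(x, y, sigma=1000.0, step=100):
    """Kernel-smoothed local correlation between two per-base coverage
    tracks x and y: smooth both with a Gaussian, then compute a windowed
    Pearson correlation at regular steps along the genome."""
    xs = gaussian_filter1d(np.asarray(x, dtype=float), sigma)
    ys = gaussian_filter1d(np.asarray(y, dtype=float), sigma)
    win = int(4 * sigma)                 # window spanning the kernel support
    track = []
    for start in range(0, len(xs) - win, step):
        a = xs[start:start + win] - xs[start:start + win].mean()
        b = ys[start:start + win] - ys[start:start + win].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        track.append((a * b).sum() / denom if denom > 0 else 0.0)
    return np.array(track)               # the "local correlation track"
```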
Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula
2016-05-01
This study evaluated the feasibility of documenting patterned injuries in three dimensions and true colour without complex 3D surface documentation methods. The method is based on a 3D surface model generated from radiologic slice images (CT), while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of these deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (image projection onto CT, IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography into CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitute for photogrammetry and surface scanning, especially when the entire body surface is to be recorded in three dimensions including all external findings, and when precise data are required for comparing highly detailed injury features with the injury-inflicting tool.
NASA Astrophysics Data System (ADS)
McEvoy, C. M.; Dufton, P. L.; Evans, C. J.; Kalari, V. M.; Markova, N.; Simón-Díaz, S.; Vink, J. S.; Walborn, N. R.; Crowther, P. A.; de Koter, A.; de Mink, S. E.; Dunstall, P. R.; Hénault-Brunet, V.; Herrero, A.; Langer, N.; Lennon, D. J.; Maíz Apellániz, J.; Najarro, F.; Puls, J.; Sana, H.; Schneider, F. R. N.; Taylor, W. D.
2015-03-01
Context. Model atmosphere analyses have been previously undertaken for both Galactic and extragalactic B-type supergiants. By contrast, little attention has been given to a comparison of the properties of single supergiants and those that are members of multiple systems. Aims: Atmospheric parameters and nitrogen abundances have been estimated for all the B-type supergiants identified in the VLT-FLAMES Tarantula survey. These include both single targets and binary candidates. The results have been analysed to investigate the role of binarity in the evolutionary history of supergiants. Methods: tlusty non-local thermodynamic equilibrium (non-LTE) model atmosphere calculations have been used to determine atmospheric parameters and nitrogen abundances for 34 single and 18 binary supergiants. Effective temperatures were deduced using the silicon balance technique, complemented by the helium ionisation in the hotter spectra. Surface gravities were estimated using Balmer line profiles, and microturbulent velocities were deduced using the silicon spectrum. Nitrogen abundances or upper limits were estimated from the N ii spectrum. The effects of a flux contribution from an unseen secondary were considered for the binary sample. Results: We present the first systematic study of the incidence of binarity for a sample of B-type supergiants across the theoretical terminal age main sequence (TAMS). To account for the distribution of effective temperatures of the B-type supergiants, it may be necessary to extend the TAMS to lower temperatures. This is also consistent with the derived distribution of mass discrepancies, projected rotational velocities and nitrogen abundances, provided that stars cooler than this temperature are post-red supergiant objects. For all the supergiants in the Tarantula and in a previous FLAMES survey, the majority have small projected rotational velocities. The distribution peaks at about 50 km s-1, with 65% in the range 30 km s-1 ≤ v_e sin i ≤ 60 km s-1. About ten per cent have larger v_e sin i (≥ 100 km s-1), but surprisingly these show little or no nitrogen enhancement. All the cooler supergiants have low projected rotational velocities of ≤ 70 km s-1 and high nitrogen abundance estimates, implying that either bi-stability braking or evolution on a blue loop may be important. Additionally, there is a lack of cooler binaries, possibly reflecting the small sample sizes. Single-star evolutionary models, which include rotation, can account for all of the nitrogen enhancement in both the single and binary samples. The detailed distribution of nitrogen abundances in the single and binary samples may be different, possibly reflecting differences in their evolutionary history. Conclusions: The first comparative study of single and binary B-type supergiants has revealed that the main sequence may be significantly wider than previously assumed, extending to Teff = 20 000 K. Some marginal differences in single and binary atmospheric parameters and abundances have been identified, possibly implying non-standard evolution for some of the sample. This sample as a whole has implications for several aspects of our understanding of the evolutionary status of blue supergiants. Tables 1, 4, 7 are available in electronic form at http://www.aanda.org
Transcript mapping for handwritten English documents
NASA Astrophysics Data System (ADS)
Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu
2008-01-01
Transcript mapping, or text alignment with handwritten documents, is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications in fields ranging from machine learning, where large quantities of truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms. Interfaces that display scanned handwritten documents may use this alignment to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the true DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration of the dissimilarity scores from a word-model word recognizer and the Levenshtein distance between the recognized word and the lexicon word, used as a cost metric in the DTW algorithm, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
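The cost-blending idea reads naturally as a dynamic program. Below is a minimal DTW sketch under stated assumptions: `recog_score`, `levenshtein`, and the `top_hypothesis` attribute are hypothetical stand-ins for the recognizer outputs, and the blending weight `alpha` is illustrative rather than the paper's setting:

```python
import numpy as np

def align(word_images, transcript, recog_score, levenshtein, alpha=0.5):
    """DTW over image tokens and transcript words. The local cost blends a
    recognizer dissimilarity with an edit distance between the recognizer's
    top hypothesis and the transcript word."""
    n, m = len(word_images), len(transcript)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = (alpha * recog_score(word_images[i - 1], transcript[j - 1])
                 + (1 - alpha) * levenshtein(word_images[i - 1].top_hypothesis,
                                             transcript[j - 1]))
            D[i, j] = c + min(D[i - 1, j],      # skip an image token
                              D[i, j - 1],      # skip a transcript word
                              D[i - 1, j - 1])  # match image to word
    # A standard backtrace over D recovers the image-to-word alignment.
    return D
```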
Script-independent text line segmentation in freestyle handwritten documents.
Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan; Li, Yi
2008-08-01
Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine-printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ([1], [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.
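The density-estimation step (though not the level set evolution) is easy to illustrate. One common way to obtain such a probability map, shown here as a sketch with illustrative smoothing widths rather than the paper's estimator, is to blur the ink mask with an anisotropic Gaussian whose wider horizontal extent links characters along a line:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def text_line_probability_map(binary_ink: np.ndarray,
                              sigma_y: float = 3.0,
                              sigma_x: float = 25.0) -> np.ndarray:
    """Smooth a binary ink mask with an anisotropic Gaussian so that pixels
    inside a text line receive high values; normalising gives a rough
    per-pixel text-line probability. Sigma values are illustrative."""
    density = gaussian_filter(binary_ink.astype(float), sigma=(sigma_y, sigma_x))
    return density / density.max()   # normalise to [0, 1]
```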
Ensemble LUT classification for degraded document enhancement
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir
2008-01-01
The fast evolution of scanning and computing technologies has led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations, such as legal litigation and security investigations, scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system, we have labeled a subset of the Frieder diaries collection [1]. This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUTs) in conjunction with an approximated nearest neighbor algorithm. The resulting algorithm is highly efficient. Experimental evaluation results are provided using the Frieder diaries collection [1].
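To make the LUT idea concrete, here is a minimal sketch, assuming binary images, a 3x3 window, and majority voting across component LUTs; training of the tables from labeled degraded/clean image pairs and the approximate nearest neighbor fallback are omitted:

```python
import numpy as np

def lut_key(patch3x3: np.ndarray) -> int:
    """Encode a binary 3x3 neighbourhood as a 9-bit integer LUT index."""
    return int((patch3x3.ravel() * (1 << np.arange(9))).sum())

def enhance(image, luts):
    """Apply an ensemble of trained lookup tables: each LUT maps a local
    degraded pattern to a clean pixel value; the component votes are
    combined by majority. Unknown patterns fall back to the input pixel."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            key = lut_key(image[y - 1:y + 2, x - 1:x + 2])
            votes = [lut.get(key, image[y, x]) for lut in luts]
            out[y, x] = 1 if sum(votes) * 2 >= len(votes) else 0
    return out
```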
Font group identification using reconstructed fonts
NASA Astrophysics Data System (ADS)
Cutter, Michael P.; van Beusekom, Joost; Shafait, Faisal; Breuel, Thomas M.
2011-01-01
Ideally, digital versions of scanned documents should be represented in a format that is searchable, compressed, highly readable, and faithful to the original. These goals can theoretically be achieved through OCR and font recognition, re-typesetting the document text with original fonts. However, OCR and font recognition remain hard problems, and many historical documents use fonts that are not available in digital forms. It is desirable to be able to reconstruct fonts with vector glyphs that approximate the shapes of the letters that form a font. In this work, we address the grouping of tokens in a token-compressed document into candidate fonts. This permits us to incorporate font information into token-compressed images even when the original fonts are unknown or unavailable in digital format. This paper extends previous work in font reconstruction by proposing and evaluating an algorithm to assign a font to every character within a document. This is a necessary step to represent a scanned document image with a reconstructed font. Through our evaluation method, we have measured a 98.4% accuracy for the assignment of letters to candidate fonts in multi-font documents.
SureChEMBL: a large-scale, chemically annotated patent document database.
Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P
2016-01-04
SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Integrated system for automated financial document processing
NASA Astrophysics Data System (ADS)
Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai
1997-02-01
A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
SureChEMBL: a large-scale, chemically annotated patent document database
Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A.; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P.
2016-01-01
SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. PMID:26582922
NASA Astrophysics Data System (ADS)
Inclan, Rosa Maria
2016-04-01
Knowledge of three-dimensional soil pore architecture is important to improve our understanding of the factors that control a number of critical soil processes, as it governs biological, chemical and physical processes at various scales. Computed Tomography (CT) images provide increasingly reliable information about the geometry of pores and solids in soils at very small scales, with the benefit of being a non-invasive technique. Fractal formalism has proved a useful tool in cases where highly complex and heterogeneous media are studied. Among these quantifications are the mass dimension (Dm) and the spectral dimension (d), applied to describe the water and gas diffusion coefficients in soils (Tarquis et al., 2012). In this work, intact soil samples were collected from the first three horizons of the La Herreria soil. This station is located in the lowland mountain area of Sierra de Guadarrama (Santolaria et al., 2015) and represents a highly degraded type of site as a result of livestock keeping. 3D images of 45.1 μm resolution (256x256x256 voxels) were obtained and then binarized following the singularity-CA method (Martín-Sotoca et al., 2016). Based on these images, Dm and d were estimated. The results showed a statistical difference in porosity, Dm and d for each horizon. This fact has direct implications for the diffusion parameters of a pore network model based on both fractal dimensions. These soil parameters will constitute a basis for site characterization in further studies of soil degradation, determining the interaction between soil, plant and atmosphere with respect to human-induced activities, as well as the basis for several nitrogen and carbon cycle models. References: Martín-Sotoca, J.J., Tarquis, A.M., Saa Requejo, A., and Grau, J.B. (2016). Pore detection in Computed Tomography (CT) soil 3D images using singularity map analysis. Geophysical Research Abstracts, 18, EGU2016-829. Santolaria-Canales, E. and the GuMNet Consortium Team (2015). GuMNet - Guadarrama Monitoring Network. Installation and set up of a high altitude monitoring network, north of Madrid, Spain. Geophysical Research Abstracts, 17, EGU2015-13989-2. Tarquis, A.M., Sanchez, M.E., Antón, J.M., Jimenez, J., Saa-Requejo, A., Andina, D., and Crawford, J.W. (2012). Variation in spectral and mass dimension on three-dimensional soil image processing. Soil Science, 177(2), 88-97. Web: http://www.ucm.es/gumnet/
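For reference, the two fractal quantities named above are commonly defined as follows (textbook definitions, not reproduced from this abstract): the mass dimension from the scaling of pore mass with window radius, and the spectral dimension from the return probability of a random walk on the pore network:

```latex
M(r) \propto r^{D_m}, \qquad
D_m = \frac{\mathrm{d}\,\log M(r)}{\mathrm{d}\,\log r}, \qquad
P(t) \propto t^{-d/2},
```

where M(r) is the pore mass contained within a window of radius r (Dm is estimated as the slope over the scaling range) and P(t) is the probability that a random walker on the pore network returns to its origin after t steps.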
A new EEG measure using the 1D cluster variation method
NASA Astrophysics Data System (ADS)
Maren, Alianna J.; Szu, Harold H.
2015-05-01
A new information measure, drawing on the 1-D Cluster Variation Method (CVM), describes local pattern distributions (nearest-neighbor and next-nearest-neighbor) in a binary 1-D vector in terms of a single interaction enthalpy parameter h for the specific case where the fractions of elements in each of the two states are the same (x1 = x2 = 0.5). An example application of this method would be EEG interpretation in Brain-Computer Interfaces (BCIs), especially in the frontier of invariant biometrics based on distinctive and invariant individual responses to stimuli containing an image of a person with whom there is a strong affiliative response (e.g., a person's grandmother). This measure is obtained by mapping observed EEG configuration variables (z1, z2, z3 for next-nearest-neighbor triplets) to h using the analytic function giving h in terms of these variables at equilibrium. This mapping results in a small phase-space region of resulting h values, which characterizes local pattern distributions in the source data. The 1-D vector with equal fractions of units in each of the two states can be obtained using the method for transforming natural images into a binarized equi-probability ensemble (Saremi & Sejnowski, 2014; Stephens et al., 2013). An intrinsically 2-D data configuration can be mapped to 1-D using the Peano-Hilbert space-filling curve, which has demonstrated a 20 dB lower baseline compared with other approaches (cf. Hsu & Szu, 2014). This CVM-based method has multiple potential applications; a near-term one is optimizing classification of the EEG signals from a COTS 1-D BCI baseball hat. This could result in a convenient 3-D lab-tethered EEG, configured as a 1-D CVM equiprobable binary vector, potentially useful for smartphone wireless display. Longer-range applications include interpreting neural assembly activations via high-density implanted soft, cellular-scale electrodes.
Spherical Images for Cultural Heritage: Survey and Documentation with the Nikon KM360
NASA Astrophysics Data System (ADS)
Gottardi, C.; Guerra, F.
2018-05-01
The work presented here focuses on the analysis of the potential of spherical images acquired with specific cameras for the documentation and three-dimensional reconstruction of Cultural Heritage. Nowadays, thanks to the introduction of cameras able to generate panoramic images automatically, without requiring stitching software to join different photos together, spherical images allow the documentation of spaces in an extremely fast and efficient way. In this particular case, the Nikon Key Mission 360 spherical camera was tested on the Tolentini's cloister, which used to be part of the convent of the nearby church and is now the location of the Iuav University of Venice. The aim of the research is to test the acquisition of spherical images with the KM360 and to compare the resulting photogrammetric models with data acquired from a laser scanning survey, in order to assess the metric accuracy and the level of detail achievable with this particular camera. This work is part of a wider research project that the Photogrammetry Laboratory of the Iuav University of Venice has been working on over the last few months; the final aim of this research project will be not only the comparison between 3D models obtained from spherical images and laser scanning survey techniques, but also the examination of their reliability and accuracy with respect to previous methods of generating spherical panoramas. At the end of the research work, we would like to obtain an operational procedure for spherical cameras applied to the metric survey and documentation of Cultural Heritage.
What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.
Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W
2015-06-01
Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
49 CFR 1104.2 - Document specifications.
Code of Federal Regulations, 2014 CFR
2014-10-01
... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...
49 CFR 1104.2 - Document specifications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...
49 CFR 1104.2 - Document specifications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...
49 CFR 1104.2 - Document specifications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...
49 CFR 1104.2 - Document specifications.
Code of Federal Regulations, 2013 CFR
2013-10-01
... to facilitate automated processing in document sheet feeders, original documents of more than one... textual submissions. Use of color in filings is limited to images such as graphs, maps and photographs. To facilitate automated processing of color pages, color pages may not be inserted among pages containing text...
Handwritten mathematical symbols dataset.
Chajri, Yassine; Bouikhalene, Belaid
2016-06-01
Due to the technological advances of recent years, paper scientific documents are used less and less, and the trend in the scientific community toward digital documents has increased considerably. Among these are scientific documents and, more specifically, mathematics documents. In this context, we present our own dataset of handwritten mathematical symbols, composed of 10,379 images. This dataset gathers Arabic characters, Latin characters, Arabic numerals, Latin numerals, arithmetic operators, set symbols, comparison symbols, delimiters, etc.
A catalogue of chromospherically active binary stars (third edition)
NASA Astrophysics Data System (ADS)
Eker, Z.; Ak, N. Filiz; Bilir, S.; Doǧru, D.; Tüysüz, M.; Soydugan, E.; Bakış, H.; Uǧraş, B.; Soydugan, F.; Erdem, A.; Demircan, O.
2008-10-01
The catalogue of chromospherically active binaries (CABs) has been revised and updated. With 203 new identifications, the number of CAB stars is increased to 409. The catalogue is available in electronic format where each system has a number of lines (suborders) with a unique order number. The columns contain data of limited numbers of selected cross references, comments to explain peculiarities and the position of the binarity in case it belongs to a multiple system, classical identifications (RS Canum Venaticorum, BY Draconis), brightness and colours, photometric and spectroscopic data, a description of emission features (CaII H and K, Hα, ultraviolet, infrared), X-ray luminosity, radio flux, physical quantities and orbital information, where each basic entry is referenced so users can go to the original sources.
VizieR Online Data Catalog: Chromospherically Active Binaries. Third version (Eker+, 2008)
NASA Astrophysics Data System (ADS)
Eker, Z.; Filiz-Ak, N.; Bilir, S.; Dogru, D.; Tuysuz, M.; Soydugan, E.; Bakis, H.; Ugras, B.; Soydugan, F.; Erdem, A.; Demircan, O.
2008-06-01
The Chromospherically Active Binaries (CAB) catalogue has been revised and updated. With 203 new identifications, the number of CAB stars is increased to 409. The catalogue is available in electronic format, where each system has a varying number of lines (sub-orders) with a unique order number. The columns contain data of a limited number of selected cross references, comments to explain peculiarities and the position of the binarity in case it belongs to a multiple system, classical identifications (RS CVn, BY Dra), brightness and colours, photometric and spectroscopic data, a description of emission features (Ca II H&K, Hα, UV, IR), X-ray luminosity, radio flux, physical quantities and orbital information, where each basic entry is referenced so users can go to the original sources. (10 data files).
Essentializing the binary self: individualism and collectivism in cultural neuroscience.
Martínez Mateo, M; Cabanis, M; Stenmanns, J; Krach, S
2013-01-01
Within the emerging field of cultural neuroscience (CN) one branch of research focuses on the neural underpinnings of "individualistic/Western" vs. "collectivistic/Eastern" self-views. These studies uncritically adopt essentialist assumptions from classic cross-cultural research, mainly following the tradition of Markus and Kitayama (1991), into the domain of functional neuroimaging. In this perspective article we analyze recent publications and conference proceedings of the 18th Annual Meeting of the Organization for Human Brain Mapping (2012) and problematize the essentialist and simplistic understanding of "culture" in these studies. Further, we argue against the binary structure of the drawn "cultural" comparisons and their underlying Eurocentrism. Finally we scrutinize whether valuations within the constructed binarities bear the risk of constructing and reproducing a postcolonial, orientalist argumentation pattern.
Setti, E; Musumeci, R
2001-06-01
The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF) formats. Currently, neither browser can display radiologic images in the native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with some web browsers, even older versions. The software is free and available from the author.
Noninvasive quantitative documentation of cutaneous inflammation in vivo using spectral imaging
NASA Astrophysics Data System (ADS)
Stamatas, Georgios N.; Kollias, Nikiforos
2006-02-01
Skin inflammation is often accompanied by edema and erythema. While erythema is the result of capillary dilation and a subsequent local increase of oxygenated hemoglobin (oxy-Hb) concentration, edema is characterized by an increase in extracellular fluid in the dermis leading to local tissue swelling. Edema and erythema are typically graded visually. In this work we tested the potential of spectral imaging as a non-invasive method for quantitative documentation of both the erythema and the edema reactions. As examples of dermatological conditions that exhibit skin inflammation, we imaged patients suffering from acne, herpes zoster, and poison ivy rashes using a hyperspectral-imaging camera. Spectral images were acquired in the visible and near infrared part of the spectrum, where oxy-Hb and water demonstrate absorption bands. The values of apparent concentrations of oxy-Hb and water were calculated based on an algorithm that takes into account spectral contributions of deoxy-hemoglobin, melanin, and scattering. In each case examined, concentration maps of oxy-Hb and water can be constructed that represent quantitative visualizations of the intensity and extent of erythema and edema, respectively. In summary, we demonstrate that spectral imaging can be used in dermatology to quantitatively document parameters relating to skin inflammation. Applications may include monitoring of disease progression as well as efficacy of treatments.
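The decomposition the abstract alludes to is commonly written as a modified Beer-Lambert model; the form below is a standard one and not necessarily the authors' exact algorithm:

```latex
A(\lambda) \;=\; -\log_{10}\frac{R(\lambda)}{R_{0}(\lambda)}
\;\approx\; \sum_{i} \varepsilon_{i}(\lambda)\, c_{i}\, \ell \;+\; S(\lambda),
```

where R is the measured reflectance, R0 a reference reflectance, the index i runs over chromophores (oxy-Hb, deoxy-Hb, melanin, water) with extinction spectra ε_i and apparent concentrations c_i, ℓ is an effective path length, and S(λ) is a scattering term; solving this system in a least-squares sense at the measured wavelengths yields the apparent concentration maps per pixel.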
Clustering of Farsi sub-word images for whole-book recognition
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-01-01
Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm to measure the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of new clusters created in a page can be used as a criterion for assessing the quality of print and evaluating preprocessing phases.
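Given a pairwise distance matrix from the image-matching algorithm (optionally combined with shape-feature distances), the grouping step can be sketched with standard agglomerative clustering. Average linkage and a distance cutoff are stand-ins here for the paper's own clustering procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_subwords(dist_matrix: np.ndarray, threshold: float):
    """Group sub-word images from a precomputed symmetric distance matrix.
    Returns a cluster id per sub-word image; ideally, all occurrences of
    the same sub-word land in one cluster."""
    condensed = squareform(dist_matrix, checks=False)   # condensed form
    tree = linkage(condensed, method="average")         # agglomerative merge
    return fcluster(tree, t=threshold, criterion="distance")
```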
NASA STI Program Seminar: Electronic documents
NASA Technical Reports Server (NTRS)
1994-01-01
The theme of this NASA Scientific and Technical Information Program Seminar was electronic documents. Topics covered included Electronic Documents Management at the CASI; the Impact of Electronic Publishing on User Expectations and Searching; Image Record Management; Secondary Publisher Considerations for Electronic Journal Literature; and the Technical Manual Publishing On Demand System (TMPODS).
NASA Technical Reports Server (NTRS)
1992-01-01
This document describes the Advanced Imaging System (AIS) CCD-based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract, No. NAS5-30171, from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed, and complete schematics and source code listings are provided.
Trigram-based algorithms for OCR result correction
NASA Astrophysics Data System (ADS)
Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Faradjev, Igor; Janiszewski, Igor
2017-03-01
In this paper we consider the task of improving optical character recognition (OCR) results for document fields on low-quality and average-quality images using N-gram models. Cyrillic fields of the Russian Federation internal passport are analyzed as an example. Two approaches are presented: the first is based on the hypothesis that a symbol depends on its two adjacent symbols, and the second is based on the calculation of marginal distributions and Bayesian network computation. A comparison of the algorithms and experimental results within a real document OCR system are presented; it is shown that document field OCR accuracy can be improved by more than 6% for low-quality images.
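The first approach can be illustrated with a small Viterbi-style beam search over per-position OCR hypotheses, scored by a trigram model. Both inputs (`candidates` and `trigram_logp`) are assumptions of this sketch, not the paper's interfaces:

```python
def correct_field(candidates, trigram_logp):
    """Pick the field string maximising OCR score plus trigram language
    model score. `candidates[i]` is a list of (char, ocr_logp) hypotheses
    for position i; `trigram_logp(a, b, c)` returns log P(c | a, b), with
    '^' padding the start of the field."""
    beam = {('^', '^'): (0.0, "")}          # state: last two characters
    for hyps in candidates:
        new_beam = {}
        for (a, b), (score, text) in beam.items():
            for ch, ocr_lp in hyps:
                s = score + ocr_lp + trigram_logp(a, b, ch)
                key = (b, ch)
                if key not in new_beam or s > new_beam[key][0]:
                    new_beam[key] = (s, text + ch)   # keep best per state
        beam = new_beam
    return max(beam.values())[1] if beam else ""
```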
Whole mount nuclear fluorescent imaging: convenient documentation of embryo morphology
Sandell, Lisa L.; Kurosaka, Hiroshi; Trainor, Paul A.
2012-01-01
Here we describe a relatively inexpensive and easy method to produce high quality images that reveal fine topological details of vertebrate embryonic structures. The method relies on nuclear staining of whole mount embryos in combination with confocal microscopy or conventional widefield fluorescent microscopy. In cases where confocal microscopy is used in combination with whole mount nuclear staining, the resulting embryo images can rival the clarity and resolution of images of similar specimens produced by Scanning Electron Microscopy (SEM). The fluorescent nuclear staining may be performed with a variety of cell permeable nuclear dyes, enabling the technique to be performed with multiple standard microscope/illumination or confocal/laser systems. The method may be used to document morphology of embryos of a variety of organisms, as well as individual organs and tissues. Nuclear stain imaging imposes minimal impact on embryonic specimens, enabling imaged specimens to be utilized for additional assays. PMID:22930523
Whole mount nuclear fluorescent imaging: convenient documentation of embryo morphology.
Sandell, Lisa L; Kurosaka, Hiroshi; Trainor, Paul A
2012-11-01
Here, we describe a relatively inexpensive and easy method to produce high quality images that reveal fine topological details of vertebrate embryonic structures. The method relies on nuclear staining of whole mount embryos in combination with confocal microscopy or conventional wide field fluorescent microscopy. In cases where confocal microscopy is used in combination with whole mount nuclear staining, the resulting embryo images can rival the clarity and resolution of images produced by scanning electron microscopy (SEM). The fluorescent nuclear staining may be performed with a variety of cell permeable nuclear dyes, enabling the technique to be performed with multiple standard microscope/illumination or confocal/laser systems. The method may be used to document morphology of embryos of a variety of organisms, as well as individual organs and tissues. Nuclear stain imaging imposes minimal impact on embryonic specimens, enabling imaged specimens to be utilized for additional assays. Copyright © 2012 Wiley Periodicals, Inc.
Handwritten mathematical symbols dataset
Chajri, Yassine; Bouikhalene, Belaid
2016-01-01
Due to the technological advances of recent years, paper scientific documents are used less and less, and the trend in the scientific community toward digital documents has increased considerably. Among these are scientific documents and, more specifically, mathematics documents. In this context, we present our own dataset of handwritten mathematical symbols, composed of 10,379 images. This dataset gathers Arabic characters, Latin characters, Arabic numerals, Latin numerals, arithmetic operators, set symbols, comparison symbols, delimiters, etc. PMID:27006975
Voigt, Jens-Uwe; Pedrizzetti, Gianni; Lysyansky, Peter; Marwick, Tom H; Houle, Hélène; Baumann, Rolf; Pedri, Stefano; Ito, Yasuhiro; Abe, Yasuhiko; Metz, Stephen; Song, Joo Hyun; Hamilton, Jamie; Sengupta, Partho P; Kolias, Theodore J; d'Hooge, Jan; Aurigemma, Gerard P; Thomas, James D; Badano, Luigi Paolo
2015-02-01
Recognizing the critical need for standardization in strain imaging, in 2010, the European Association of Echocardiography (now the European Association of Cardiovascular Imaging, EACVI) and the American Society of Echocardiography (ASE) invited technical representatives from all interested vendors to participate in a concerted effort to reduce intervendor variability of strain measurement. As an initial product of the work of the EACVI/ASE/Industry initiative to standardize deformation imaging, we prepared this technical document which is intended to provide definitions, names, abbreviations, formulas, and procedures for calculation of physical quantities derived from speckle tracking echocardiography and thus create a common standard. Copyright © 2015 American Society of Echocardiography. All rights reserved.
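For orientation, the kinds of quantities such a consensus document standardises include the textbook deformation definitions below (shown as standard formulas, not quoted from the document itself), for a myocardial segment of initial length L_0 and instantaneous length L(t):

```latex
\varepsilon(t) = \frac{L(t) - L_0}{L_0}, \qquad
\varepsilon_N(t) = \ln\!\frac{L(t)}{L_0}, \qquad
\dot{\varepsilon}(t) = \frac{\mathrm{d}\varepsilon_N(t)}{\mathrm{d}t}
                     = \frac{\dot{L}(t)}{L(t)},
```

i.e., Lagrangian strain, natural strain, and strain rate, respectively.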
The Native American Experience. American Historical Images on File.
ERIC Educational Resources Information Center
Wardwell, Lelia, Ed.
This photo-documentation reference work presents more than 275 images chronicling the experiences of American Indians from their prehistoric migrations to the present. The volume includes information and images illustrating the life ways of various tribes. The images are accompanied by historical information providing cultural context. The book…
Image/text automatic indexing and retrieval system using context vector approach
NASA Astrophysics Data System (ADS)
Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick
1995-11-01
Thousands of documents and images are generated daily, both on and off line, on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique demonstrated here is based on the concept of `context vectors', which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as a query, without performing segmentation or object recognition.
Optical/digital identification/verification system based on digital watermarking technology
NASA Astrophysics Data System (ADS)
Herrigel, Alexander; Voloshynovskiy, Sviatoslav V.; Hrytskiv, Zenon D.
2000-06-01
This paper presents a new approach for the secure integrity verification of driver licenses, passports or other analogue identification documents. The system embeds (detects) the reference number of the identification document with the DCT watermark technology in (from) the owner photo of the identification document holder. During verification the reference number is extracted and compared with the reference number printed in the identification document. The approach combines optical and digital image processing techniques. The detection system must be able to scan an analogue driver license or passport, convert the image of this document into a digital representation and then apply the watermark verification algorithm to check the payload of the embedded watermark. If the payload of the watermark is identical with the printed visual reference number of the issuer, the verification was successful and the passport or driver license has not been modified. This approach constitutes a new class of application for the watermark technology, which was originally targeted for the copyright protection of digital multimedia data. The presented approach substantially increases the security of the analogue identification documents applied in many European countries.
It's Not Easy Being Green: Student Recall of Plant and Animal Images
ERIC Educational Resources Information Center
Schussler, Elisabeth E.; Olzak, Lynn A.
2008-01-01
It is well documented that people are less interested in studying plants than animals. We tested whether university students would selectively recall more animal images than plant images even when equally-nameable plant and animal images were presented for equal lengths of time. Animal and plant images were pre-tested and 14 animal-plant pairs…
Virtual environments from panoramic images
NASA Astrophysics Data System (ADS)
Chapman, David P.; Deacon, Andrew
1998-12-01
A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible, and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper demonstrates how such technologies can be customized, extended, and linked to facility management systems delivered over a corporate intranet, enabling end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) are described, as are techniques for precise 'as-built' modeling using the calibrated images from which panoramas have been derived and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering demonstrate the extent to which such solutions are scalable to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.
26 CFR 1.1471-1 - Scope of chapter 4 and definitions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... an image retrieval system (such as portable document format (.pdf) or scanned documents). (35) Entity..., custodial institution, or specified insurance company. (124) TIN. The term TIN means the tax identifying...
26 CFR 1.1471-1 - Scope of chapter 4 and definitions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... an image retrieval system (such as portable document format (.pdf) or scanned documents). (39) Entity..., custodial institution, or specified insurance company. (133) TIN. The term TIN means the tax identifying...
[Nursing in the movies: its image during the Spanish Civil War].
Siles González, J; García Hernández, E; Cibanal Juan, L; Gallardo Frías, Y; Lillo Crespo, M
1998-12-01
Cinema has played a determining role in the development of stereotypes and of a wide gamut of models related to real-life situations. The objective of this analysis is to determine the influence cinema had on the image of nurses during the Spanish Civil War (1936-1939). The initial hypotheses were: the role of Spanish nurses during the civil war was reflected by both sides in their respective movie productions; and the image of nurses shown in these films, on both sides, presents a conflicting role concept for women in society. Following strategies developed by specialists in film analysis (Bondwell 1995, Uneso 1995, Carmona 1991), a total of 453 movie productions, 360 from the republican side and 93 from the national side, were reviewed. These films were listed in the Spanish National Film Library records. After analyzing Spanish cinema production during the Spanish Civil War, data relating to 453 films were identified. The genres included documentaries, newsreels, and fiction films: 77 were produced in 1936, 235 in 1937, 102 in 1938, and 39 in 1939. A tremendous difference exists between the republican productions, 79% of the total, and the national productions. By genre, the republican side produced: in 1936, 53 documentaries, 4 newsreels, and 9 fiction films; in 1937, 186 documentaries, 5 newsreels, and 19 fiction films; in 1938, 72 documentaries, 1 newsreel, and 2 fiction films; in 1939, 2 documentaries and 2 fiction films. The national side produced: in 1936, 10 documentaries and 1 fiction film; in 1937, 22 documentaries, 2 newsreels, and 1 fiction film; in 1938, 19 documentaries and 3 newsreels; in 1939, 29 documentaries and 6 fiction films. During the Spanish Civil War, movies produced by both sides strove to reflect their ideal woman as a stereotypical ideal nurse. This ideal nurse embodied the values, ideas, aesthetics, and prejudices each side held in the war.
High Resolution Global Topography of Itokawa from Hayabusa Imaging and LIDAR Data
NASA Technical Reports Server (NTRS)
Gaskell, Robert W.; Barnouin-Jha, O. S.; Scheeres, D. J.; Mukai, T.; Hirata, N.; Abe, S.; Saito, J.; Hashimoto, T.; Ishiguro, M.; Kubota, T.
2006-01-01
This viewgraph document reviews the topography of the asteroid Itokawa. It summarizes relevant information about the asteroid and describes how a topographic image of Itokawa was derived from Hayabusa imaging and LIDAR data.
Automatic script identification from images using cluster-based templates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, J.; Kerns, L.; Kelly, P.
We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
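As a rough sketch of this cluster-based template approach (our own illustration under stated assumptions, not the authors' code), symbols already segmented and scaled to a fixed size can be clustered per script with k-means, and a new document matched to the nearest templates:

```python
# Minimal sketch: build per-script templates by clustering, then match a new
# document's symbols to each script's templates. Data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

FIXED_SIZE = 16 * 16  # each symbol scaled to 16x16 and flattened

def build_templates(symbols, n_clusters=40, min_cluster_size=5):
    """Cluster a script's training symbols and keep major cluster centroids."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(symbols)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    keep = counts >= min_cluster_size          # prune minor clusters
    return km.cluster_centers_[keep]

def identify_script(doc_symbols, templates_by_script):
    """Pick the script whose templates best match the document's symbols."""
    scores = {}
    for script, templates in templates_by_script.items():
        # distance from each symbol to its nearest template for this script
        d = np.linalg.norm(doc_symbols[:, None, :] - templates[None, :, :], axis=2)
        scores[script] = d.min(axis=1).mean()
    return min(scores, key=scores.get)

rng = np.random.default_rng(0)
templates_by_script = {
    "Roman": build_templates(rng.random((200, FIXED_SIZE))),
    "Cyrillic": build_templates(rng.random((200, FIXED_SIZE)) + 0.1),
}
print(identify_script(rng.random((50, FIXED_SIZE)), templates_by_script))
```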
Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.
Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel
2012-01-01
Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
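To make the view mechanism concrete, the following sketch shows how such a predefined query could be stored and invoked over CouchDB's HTTP/REST interface. The database name, field names, and server URL are hypothetical; the embedded JavaScript map function mirrors the kind of map-reduce view described above.

```python
# Hedged sketch of a CouchDB design document with one view, assuming a local
# CouchDB server and a database of DICOM metadata documents.
import requests

BASE = "http://localhost:5984/dicom_store"  # hypothetical database

design = {
    "views": {
        "by_modality": {
            # JavaScript map function executed server-side by CouchDB
            "map": "function(doc) { if (doc.Modality) "
                   "emit(doc.Modality, doc.SeriesInstanceUID); }"
        }
    }
}

# Create the design document holding the view (a _rev is needed to update one)
requests.put(BASE + "/_design/queries", json=design).raise_for_status()

# Ask for all magnetic resonance series; keys are JSON-encoded strings
rows = requests.get(BASE + "/_design/queries/_view/by_modality",
                    params={"key": '"MR"'}).json()["rows"]
print(len(rows), "MR series found")
```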
NASA Astrophysics Data System (ADS)
Oommen, T.; Baise, L. G.; Gens, R.; Prakash, A.; Gupta, R. P.
2009-12-01
Historically, earthquake-induced liquefaction is known to have caused extensive damage around the world. Therefore, there is a compelling need to characterize and map liquefaction after a seismic event. Currently, after an earthquake event, field-based mapping of liquefaction is sporadic and limited due to inaccessibility, the short life of the failures, difficulties in mapping large areal extents, and lack of resources. We hypothesize that as liquefaction occurs in saturated granular soils due to an increase in pore pressure, liquefaction-related terrain changes should have an associated increase in soil moisture with respect to the surrounding non-liquefied regions. The increase in soil moisture affects the thermal emittance and, hence, change detection using pre- and post-event thermal infrared (TIR) imagery is suitable for identifying areas that have undergone post-earthquake liquefaction. Though change detection using TIR images gives the first indication of areas of liquefaction, the spatial resolution of TIR images is typically coarser than the resolution of corresponding visible, near-infrared (NIR), and shortwave infrared (SWIR) images. We hypothesize that liquefaction-induced changes in the soil and associated surface effects cause textural and spectral changes in images acquired in the visible, NIR, and SWIR. Although these changes can arise from various factors, a synergistic approach taking advantage of the thermal signature variation due to changing soil moisture conditions, together with the spectral information from high-resolution visible, NIR, and SWIR bands, can help narrow down the locations of post-event liquefaction for regional documentation. In this study, we analyze the applicability of combining various spectral bands from different satellites (Landsat, Terra-MISR, IRS-1C, and IRS-1D) for documenting liquefaction failures associated with the magnitude 7.6 earthquake that occurred in Bhuj, India, in 2001. We combine the various spectral bands by neighborhood correlation image analysis using an artificial intelligence algorithm called the support vector machine to remotely identify and document liquefaction failures across a region, and assess the reliability and accuracy of the thermal remote sensing approach in documenting regional liquefaction failures. Finally, we present the applicability of the satellite data analyzed and the appropriateness of a multisensor and multispectral approach for documenting liquefaction-related failures.
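The following sketch illustrates the general idea of coupling multi-band change detection with a support vector machine. The bands, scene, and labels are synthetic placeholders rather than the study's Bhuj data; it assumes NumPy and scikit-learn.

```python
# Schematic sketch: per-pixel pre/post-event band differences as features,
# an SVM separating liquefied from non-liquefied ground.
import numpy as np
from sklearn.svm import SVC

def pixel_features(pre, post):
    """pre/post: (bands, H, W) co-registered stacks (e.g. TIR, NIR, SWIR)."""
    diff = post.astype(float) - pre.astype(float)   # per-band change signal
    return diff.reshape(diff.shape[0], -1).T        # (H*W, bands)

rng = np.random.default_rng(0)
pre = rng.normal(size=(4, 64, 64))                  # 4 hypothetical bands
post = pre + rng.normal(scale=0.1, size=pre.shape)
post[:, 20:40, 20:40] += 1.0                        # simulated liquefaction patch

labels = np.zeros((64, 64), dtype=int)
labels[20:40, 20:40] = 1                            # "field-mapped" ground truth

X = pixel_features(pre, post)
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels.ravel())
liquefaction_map = clf.predict(X).reshape(64, 64)   # regional liquefaction map
```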
Targeting youth and concerned smokers: evidence from Canadian tobacco industry documents
Pollay, R.
2000-01-01
OBJECTIVE—To provide an understanding of the targeting strategies of cigarette marketing, and the functions and importance of the advertising images chosen. METHODS—Analysis of historical corporate documents produced by affiliates of British American Tobacco (BAT) and RJ Reynolds (RJR) in Canadian litigation challenging tobacco advertising regulation, the Tobacco Products Control Act (1987): Imperial Tobacco Limitee & RJR-Macdonald Inc c. Le Procurer General du Canada. RESULTS—Careful and extensive research has been employed in all stages of the process of conceiving, developing, refining, and deploying cigarette advertising. Two segments commanding much management attention are "starters" and "concerned smokers". To recruit starters, brand images communicate independence, freedom, and (sometimes) peer acceptance. These advertising images portray smokers as attractive and autonomous, accepted and admired, athletic and at home in nature. For "lighter" brands reassuring health-concerned smokers, lest they quit, advertisements provide imagery conveying a sense of well-being, harmony with nature, and a consumer's self-image as intelligent. CONCLUSIONS—The industry's steadfast assertion that its advertising influences only brand loyalty and switching, in both intent and effect, is directly contradicted by its internal documents and proven false. So too is the justification of cigarette advertising as a medium creating better informed consumers, since visual imagery, not information, is the means of advertising influence. Keywords: advertising; brand imagery; market research; youth targeting; "concerned" smokers; corporate documents PMID:10841849
Fra Angelico's painting technique revealed by terahertz time-domain imaging (THz-TDI)
NASA Astrophysics Data System (ADS)
Koch Dandolo, Corinna Ludovica; Picollo, Marcello; Cucci, Costanza; Jepsen, Peter Uhd
2016-10-01
We have investigated with terahertz time-domain imaging (THz-TDI) the well-known Lamentation over the Dead Christ panel painting (San Marco Museum, Florence) painted by Fra Giovanni Angelico between 1436 and 1441. The investigation provided a better understanding of the construction and gilding techniques used by the eminent artist, as well as the plastering technique used during the nineteenth-century restoration intervention. The evidence obtained from THz-TDI scans was correlated with the available documentation on the preservation history of the art piece. Erosion and damage documented for the wooden support, especially along the lower margin, were confirmed in the THz-TD images.
Extraction and labeling high-resolution images from PDF documents
NASA Astrophysics Data System (ADS)
Chachra, Suchet K.; Xue, Zhiyun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-12-01
Accuracy of content-based image retrieval is affected by image resolution, among other factors. Higher resolution images enable extraction of image features that more accurately represent the image content. In order to improve the relevance of search results for our biomedical image search engine, Open-I, we have developed techniques to extract and label high-resolution versions of figures from biomedical articles supplied in the PDF format. Open-I uses the open-access subset of biomedical articles from the PubMed Central repository hosted by the National Library of Medicine. Articles are available in XML and in publisher-supplied PDF formats. As these PDF documents contain little or no metadata to identify the embedded images, the task includes labeling images according to their figure number in the article after they have been successfully extracted. For this purpose we use the labeled small-size images provided with the XML web version of the article. This paper describes the image extraction process and two alternative approaches to image labeling: one measures the similarity between two images by the projections of image intensity onto the coordinate axes, the other by the normalized cross-correlation between the intensities of the two images. Using image identification based on intensity projection, we were able to achieve a precision of 92.84% and a recall of 82.18% in labeling the extracted images.
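The two similarity measures lend themselves to a compact NumPy sketch. The functions below are our illustration of the stated definitions, assuming the two images have already been scaled to a common size:

```python
import numpy as np

def projection_similarity(a, b):
    """Correlate row and column intensity sums of two same-sized images."""
    sims = []
    for axis in (0, 1):
        sims.append(np.corrcoef(a.sum(axis=axis), b.sum(axis=axis))[0, 1])
    return float(np.mean(sims))

def normalized_cross_correlation(a, b):
    """Zero-mean, unit-variance correlation of pixel intensities."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

img = np.random.default_rng(0).random((64, 64))
noisy = img + 0.05 * np.random.default_rng(1).random((64, 64))
print(projection_similarity(img, noisy), normalized_cross_correlation(img, noisy))
```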
Long-term pavement performance ancillary information management system (AIMS) reference guide.
DOT National Transportation Integrated Search
2012-11-01
This document provides information on the Long-Term Pavement Performance (LTPP) program ancillary information. : Ancillary information includes data, images, reference materials, resource documents, and other information that : support and extend the...
What a Difference a Year Makes.
ERIC Educational Resources Information Center
Birt, Carina
1998-01-01
Addresses the growth of signatures in document management. Describes the three basic types of electronic signature technology: image signatures, digital signatures, and digitized biometric signatures. Discusses legal and regulatory acceptability and bringing signatures into document management. (AEF)
Interpretation of Radiological Images: Towards a Framework of Knowledge and Skills
ERIC Educational Resources Information Center
van der Gijp, A.; van der Schaaf, M. F.; van der Schaaf, I. C.; Huige, J. C. B. M.; Ravesloot, C. J.; van Schaik, J. P. J.; ten Cate, Th. J.
2014-01-01
The knowledge and skills that are required for radiological image interpretation are not well documented, even though medical imaging is gaining importance. This study aims to develop a comprehensive framework of knowledge and skills, required for two-dimensional and multiplanar image interpretation in radiology. A mixed-method study approach was…
The luminosities of the coldest brown dwarfs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tinney, C. G.; Faherty, Jacqueline K.; Kirkpatrick, J. Davy
2014-11-20
In recent years, brown dwarfs have been extended to a new Y-dwarf class with effective temperatures colder than 500 K and masses in the range of 5-30 Jupiter masses. They fill a crucial gap in observable atmospheric properties between the much colder gas-giant planets of our own solar system (at around 130 K) and both hotter T-type brown dwarfs and the hotter planets that can be imaged orbiting young nearby stars (both with effective temperatures in the range of 1500-1000 K). Distance measurements for these objects deliver absolute magnitudes that make critical tests of our understanding of very cool atmospheres. Here we report new distances for nine Y dwarfs and seven very late T dwarfs. These reveal that Y dwarfs do indeed represent a continuation of the T-dwarf sequence to both fainter luminosities and cooler temperatures. They also show that the coolest objects display a large range in absolute magnitude for a given photometric color. The latest atmospheric models show good agreement with the majority of these Y-dwarf absolute magnitudes. This is also the case for WISE0855-0714, the coldest and closest brown dwarf to the Sun, which shows evidence for water ice clouds. However, there are also some outstanding exceptions, which suggest either binarity or the presence of condensate clouds. The former is readily testable with current adaptive optics facilities. The latter would mean that the range of cloudiness in Y dwarfs is substantial with most hosting almost no clouds—while others have dense clouds, making them prime targets for future variability observations to study cloud dynamics.
DEBRIS DISKS OF MEMBERS OF THE BLANCO 1 OPEN CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauffer, John R.; Noriega-Crespo, Alberto; Rebull, Luisa M.
2010-08-20
We have used the Spitzer Space Telescope to obtain Multiband Imaging Photometer for Spitzer (MIPS) 24 μm photometry for 37 members of the ≈100 Myr old open cluster Blanco 1. For the brightest 25 of these stars (where we have 3σ uncertainties less than 15%), we find significant mid-IR excesses for eight stars, corresponding to a debris disk detection frequency of about 32%. The stars with excesses include two A stars, four F dwarfs, and two G dwarfs. The most significant linkage between 24 μm excess and any other stellar property for our Blanco 1 sample of stars is with binarity. Blanco 1 members that are photometric binaries show few or no detected 24 μm excesses whereas a quarter of the apparently single Blanco 1 members do have excesses. We have examined the MIPS data for two other clusters of similar age to Blanco 1: NGC 2547 and the Pleiades. The AFGK photometric binary star members of both of these clusters also show a much lower frequency of 24 μm excesses compared to stars that lie near the single-star main sequence. We provide a new determination of the relation between the V - Ks color and the Ks - [24] color for main-sequence photospheres based on Hyades members observed with MIPS. As a result of our analysis of the Hyades data, we identify three low-mass Hyades members as candidates for having debris disks near the MIPS detection limit.
NASA Astrophysics Data System (ADS)
Moutou, C.; Vigan, A.; Mesa, D.; Desidera, S.; Thébault, P.; Zurlo, A.; Salter, G.
2017-06-01
We explore the multiplicity of exoplanet host stars with high-resolution images obtained with VLT/SPHERE. Two different samples of systems were observed: one containing low-eccentricity outer planets, and the other containing high-eccentricity outer planets. We find that 10 out of 34 stars in the high-eccentricity systems are members of a binary, while the proportion is 3 out of 27 for circular systems. Eccentric-exoplanet hosts are, therefore, significantly more likely to have a stellar companion than circular-exoplanet hosts. The median magnitude contrast over the 68 data sets is 11.26 and 9.25, in H and K, respectively, at 0.30 arcsec. The derived detection limits reveal that binaries with separations of less than 50 au are rarer for exoplanet hosts than for field stars. Our results also imply that the majority of high-eccentricity planets are not embedded in multiple stellar systems (24 out of 34), since our detection limits exclude the presence of a stellar companion. We detect the low-mass stellar companions of HD 7449 and HD 211847, both members of our high-eccentricity sample. HD 7449B was already detected and our independent observation is in agreement with this earlier work. HD 211847's substellar companion, previously detected by the radial velocity method, is actually a low-mass star seen face-on. The role of stellar multiplicity in shaping planetary systems is confirmed by this work, although it does not appear as the only source of dynamical excitation. Based on observations collected with SPHERE on the Very Large Telescope (ESO, Chile).
ERIC Educational Resources Information Center
Suarez, Stephanie Cox; Daniels, Karen J.
2009-01-01
This case study uses documentation as a tool for formative assessment to interpret the learning of twin boys with significantly delayed language skills. Reggio-inspired documentation (the act of collecting, interpreting, and reflecting on traces of learning from video, images, and observation notes) focused on the unfolding of the boys' nonverbal…
Business Documents Don't Have to Be Boring
ERIC Educational Resources Information Center
Schultz, Benjamin
2006-01-01
With business documents, visuals can serve to enhance the written word in conveying the message. Images can be especially effective when used subtly, on part of the page, on successive pages to provide continuity, or even set as watermarks over the entire page. A main reason given for traditional text-only business documents is that they are…
Description and Evaluation of a Four-Channel, Coherent 100-kHz Sidescan Sonar
2004-12-01
This report documents the design and features of a new, four-channel, coherent 100-kHz sidescan sonar (DRDC Atlantic Technical Memorandum TM 2004-204, December 2004). Initial field trial results demonstrate some of the system's capabilities. The document contains color images.
Atmospheric Science Data Center
2013-04-29
... Basis Documents. Images available on this web site include the following parameters: Image Description ... DHR integrated over the Photosynthetically Active Radiation (PAR) band. For those familiar with the MISR Level 2 ...
Optical spectrum variations of IL Cep A
NASA Astrophysics Data System (ADS)
Ismailov, N. Z.; Khalilov, O. V.; Bakhaddinova, G. R.
2016-02-01
The results of many years of uniform spectroscopic observations of the Herbig Ae/Be star IL Cep A are presented. Its Hα line has either a single or a barely resolved two-component emission profile. The Hβ emission line is clearly divided into two components with a deep central absorption. Smooth variations of the observed parameters of individual spectral lines over nine years are observed. The He I λ5876 Å line has a complex absorption profile, probably with superposed emission components. The Na I D1, D2 doublet exhibits weak changes due to variations in the circumstellar envelope. The variations observed in the stellar spectrum can be explained by either binarity or variations of the magnetic field in the stellar disk. Difficulties associated with both these possibilities are discussed.
Investigating the structure preserving encryption of high efficiency video coding (HEVC)
NASA Astrophysics Data System (ADS)
Shahid, Zafar; Puech, William
2013-02-01
This paper presents a novel method for the real-time protection of the emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which is significantly different from CABAC entropy coding in H.264/AVC. In CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) codes up to a specific value for binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit-rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture, and objects.
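A minimal sketch of the length-preserving encryption step, using the PyCryptodome package; the byte-packed binstring below is a hypothetical stand-in for the concatenated encryptable bins of a real HEVC bitstream:

```python
# AES in cipher feedback (CFB) mode keeps ciphertext the same length as the
# plaintext, which is what lets the selectively encrypted bitstream keep its
# exact bit-rate. Requires PyCryptodome.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key, iv = get_random_bytes(16), get_random_bytes(16)

# hypothetical plaintext: byte-packed bins gathered from encryptable syntax
# elements (e.g., suffixes of transform-coefficient binarizations)
binstring = bytes([0b10110010, 0b01101100, 0b11110000])

enc = AES.new(key, AES.MODE_CFB, iv=iv).encrypt(binstring)
assert len(enc) == len(binstring)   # bit-rate unchanged: format compliance aid

dec = AES.new(key, AES.MODE_CFB, iv=iv).decrypt(enc)
assert dec == binstring
```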
Essentializing the binary self: individualism and collectivism in cultural neuroscience
Martínez Mateo, M.; Cabanis, M.; Stenmanns, J.; Krach, S.
2013-01-01
Within the emerging field of cultural neuroscience (CN) one branch of research focuses on the neural underpinnings of “individualistic/Western” vs. “collectivistic/Eastern” self-views. These studies uncritically adopt essentialist assumptions from classic cross-cultural research, mainly following the tradition of Markus and Kitayama (1991), into the domain of functional neuroimaging. In this perspective article we analyze recent publications and conference proceedings of the 18th Annual Meeting of the Organization for Human Brain Mapping (2012) and problematize the essentialist and simplistic understanding of “culture” in these studies. Further, we argue against the binary structure of the drawn “cultural” comparisons and their underlying Eurocentrism. Finally we scrutinize whether valuations within the constructed binarities bear the risk of constructing and reproducing a postcolonial, orientalist argumentation pattern. PMID:23801954
A Complete OCR System for Tamil Magazine Documents
NASA Astrophysics Data System (ADS)
Kokku, Aparna; Chakravarthy, Srinivasa
We present a complete optical character recognition (OCR) system for Tamil magazines/documents. All the standard elements of OCR process like de-skewing, preprocessing, segmentation, character recognition, and reconstruction are implemented. Experience with OCR problems teaches that for most subtasks of OCR, there is no single technique that gives perfect results for every type of document image. We exploit the ability of neural networks to learn from experience in solving the problems of segmentation and character recognition. Text segmentation of Tamil newsprint poses a new challenge owing to its italic-like font type; problems that arise in recognition of touching and close characters are discussed. Character recognition efficiency varied from 94 to 97% for this type of font. The grouping of blocks into logical units and the determination of reading order within each logical unit helped us in reconstructing automatically the document image in an editable format.
Machine printed text and handwriting identification in noisy document images.
Zheng, Yefeng; Li, Huiping; Doermann, David
2004-03-01
In this paper, we address the problem of the identification of text in noisy document images. We focus especially on segmenting and discriminating between handwriting and machine-printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content; and 2) the segmentation and recognition techniques required for machine-printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine-printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field (MRF) based approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.
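As a toy illustration of the classification stage (not the paper's features or data), Fisher's linear discriminant can be trained on per-region feature vectors to separate the three classes before any MRF-based contextual smoothing:

```python
# Sketch: three-class Fisher discriminant over synthetic region features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(100, 6)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 100)        # 0=printed, 1=handwriting, 2=noise

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:3]))            # region labels before MRF refinement
```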
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.
2016-10-01
The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable, and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to the documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high-resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion techniques. Building Information Modeling (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple, and cost-effective method of documenting cultural heritage sites and generating digital 3D models using novel techniques and innovative methods.
Overcoming the Polyester Image.
ERIC Educational Resources Information Center
Regan, Dorothy
1988-01-01
Urges community colleges to overcome their image problem by documenting the colleges' impact on their communities. Suggests ways to determine what data should be collected, how to collect the information, and how it can be used to empower faculty, staff, and alumni to change the institution's image. (DMM)
Satellite Imaging in the Study of Pennsylvania's Environmental Issues.
ERIC Educational Resources Information Center
Nous, Albert P.
This document focuses on using satellite images from space in the classroom. There are two types of environmental satellites routinely broadcasting: (1) Polar-Orbiting Operational Environmental Satellites (POES), and (2) Geostationary Operational Environmental Satellites (GOES). Imaging and visualization techniques provide students with a better…
IDAPS (Image Data Automated Processing System) System Description
1988-06-24
This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). The system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed
ERIC Educational Resources Information Center
Haapaniemi, Peter
1990-01-01
Describes imaging technology, which allows huge numbers of words and illustrations to be reduced to tiny fraction of space required by originals and discusses current applications. Highlights include image processing system at National Archives; use by banks for high-speed check processing; engineering document management systems (EDMS); folder…
NASA Astrophysics Data System (ADS)
Silver, K.; Silver, M.; Törmä, M.; Okkonen, J.; Okkonen, T.
2017-08-01
In 2015-2016 the Finnish-Swedish Archaeological Project in Mesopotamia (FSAPM) initiated a pilot study of an unexplored area in the Tūr Abdin region in Northern Mesopotamia (present-day Mardin Province in southeastern Turkey). FSAPM relies on satellite image data sources for prospecting, identifying, recording, and mapping largely unknown archaeological sites, as well as studying their landscapes in the region. The purpose is to record and document sites in this endangered area in order to save its cultural heritage. The sites in question consist of fortified architectural remains in an ancient border zone between the Graeco-Roman/Byzantine world and Parthia/Persia. The locations of the archaeological sites in the terrain and the visible archaeological remains, as well as their dimensions and sizes, were determined from the orthorectified satellite images, which also provided coordinates. In addition, field documentation was carried out in situ with photographs and notes. The applicability of various satellite data sources for the archaeological documentation of the project was evaluated. Satellite photographs from three 1968 CORONA missions (the declassified US government satellite photograph archives) were acquired. The satellite images also included a recent GeoEye-1 sensor image from 2010 with a resolution of 0.5 m. Its applicability for prospecting archaeological sites, studying the terrain, and producing landscape models in 3D was confirmed. The GeoEye-1 image revealed the ruins of a fortified town and a fortress for documentation and study. Landscape models for the area of these sites were constructed by fusing GeoEye-1 with EU-DEM (European Digital Elevation Model data using SRTM and ASTER GDEM data) in order to understand their locations in the terrain.
Chain of evidence generation for contrast enhancement in digital image forensics
NASA Astrophysics Data System (ADS)
Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela
2010-01-01
The quality of images obtained by digital cameras has improved greatly since the early days of digital photography. Unfortunately, it is not unusual in image forensics to find wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To bring out otherwise invisible details, a stretching of the image contrast is required. Forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step which extracts the features of the image and selects correction parameters. The parameters are then saved as JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in image forensics analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
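A hedged sketch of the two-step pipeline: analyze the histogram to pick contrast-stretch parameters, then emit a small script for later replay. The percentile rule and the emitted JavaScript skeleton are illustrative assumptions, not the authors' parameter-selection algorithm or Photoshop's exact scripting API:

```python
import numpy as np

def levels_from_histogram(gray, low_pct=1.0, high_pct=99.0):
    """Pick black/white input levels from robust histogram percentiles."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    return int(lo), int(hi)

def make_photoshop_script(lo, hi):
    # The exact ExtendScript calls for a levels adjustment should be checked
    # against Adobe's documentation; only the parameter hand-off is shown.
    return (f"// auto-generated contrast-enhancement script\n"
            f"var lo = {lo}, hi = {hi};\n"
            f"// apply a levels adjustment mapping [lo, hi] to [0, 255]\n")

gray = np.random.default_rng(0).integers(40, 180, size=(480, 640))
lo, hi = levels_from_histogram(gray)
open("enhance.jsx", "w").write(make_photoshop_script(lo, hi))
```

Saving the generated script alongside the corrected image preserves the chain of evidence, since anyone can re-run the same parameters on the original.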
Automatic extraction of numeric strings in unconstrained handwritten document images
NASA Astrophysics Data System (ADS)
Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.
2011-01-01
Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.
Image analysis library software development
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Bryant, J.
1977-01-01
The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.
Advanced Medical Technology and Network Systems Research.
1999-09-01
This report covers advanced medical technology and network systems research, including a project planning document for a virtual clinic for patients with chronic illness and work on telemedicine for hemodialysis. The imaging systems and surgical procedures effort is accomplished in part by establishing the technology requirements for image-guided therapies. Advanced technologies included in this report are impedance imaging and a palpation training system.
BMC Ecology Image Competition 2016: the winning images.
Simundza, Julia; Palmer, Matthew; Settele, Josef; Jacobus, Luke M; Hughes, David P; Mazzi, Dominique; Blanchet, Simon
2016-08-09
The 2016 BMC Ecology Image Competition marked another celebration of the astounding biodiversity, natural beauty, and biological interactions documented by talented ecologists worldwide. For our fourth annual competition, we welcomed guest judge Dr. Matthew Palmer of Columbia University, who chose the winning image from over 140 entries. In this editorial, we highlight the award winning images along with a selection of highly commended honorable mentions.
NASA Astrophysics Data System (ADS)
Rahgozar, M. Armon; Hastings, Tom; McCue, Daniel L.
1997-04-01
The Internet is rapidly changing the traditional means of creation, distribution, and retrieval of information. Today, information publishers leverage the capabilities provided by Internet technologies to rapidly communicate information to a much wider audience in unique, customized ways. As a result, the volume of published content has been increasing astronomically. This, in addition to the ease of distribution afforded by the Internet, has resulted in more and more documents being printed. This paper introduces several axes along which Internet printing may be examined and addresses some of the technological challenges that lie ahead. Some of these axes include: (1) submission--the use of Internet protocols for selecting printers and submitting documents for print; (2) administration--the management and monitoring of printing engines and other print resources via Web pages; and (3) formats--printing document formats whose spectrum now includes HTML documents with simple text, layout-enhanced documents with style sheets, documents that contain audio, graphics, and other active objects, as well as the existing desktop and PDL formats. The format axis of Internet printing becomes even more interesting when one considers that Web documents are inherently compound, and traversal into their various pieces may uncover multiple formats. The paper also examines some imaging-specific issues that are paramount to Internet printing. These include formats and structures for representing raster documents and images, compression, font rendering, and color spaces.
Minimal camera networks for 3D image based modeling of cultural heritage objects.
Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma
2014-03-25
3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.
How Older People Think about Images of Aging in Advertising and the Media.
ERIC Educational Resources Information Center
Bradley, Don E.; Longino, Charles F., Jr.
2001-01-01
A literature review documents distorted images of aging in mass media and advertising, including underrepresentation and stereotyping. Older consumers are dissatisfied with these images, and their growing purchasing power is forcing advertisers to make more effective appeals. (Contains 20 references.) (SK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chappard, Christine; Basillais, Armelle; Benhamou, Laurent
Microcomputed tomography (μCT) produces three-dimensional (3D) images of trabecular bone. We compared conventional μCT (CμCT) with a polychromatic x-ray cone beam to synchrotron radiation (SR) μCT with a monochromatic parallel beam for assessing trabecular bone microarchitecture of 14 subchondral femoral head specimens from patients with osteoarthritis (n=10) or osteoporosis (n=4). SRμCT images with a voxel size of 10.13 μm were reconstructed from 900 2D radiographic projections (angular step, 0.2°). CμCT images with a voxel size of 10.77 μm were reconstructed from 205, 413, and 825 projections obtained using angular steps of 0.9°, 0.45°, and 0.23°, respectively. A single threshold was used to binarize the images. We computed bone volume/tissue volume (BV/TV), bone surface/bone volume (BS/BV), trabecular number (Tb.N), trabecular thickness (Tb.Th and Tb.Th*), trabecular spacing (Tb.Sp), degree of anisotropy (DA), and Euler density. With the 0.9° angular step, all CμCT values were significantly different from SRμCT values. With the 0.23° and 0.45° rotation steps, BV/TV, Tb.Th, and BS/BV by CμCT differed significantly from the values by SRμCT. The error due to slice matching (visual site matching ±10 slices) was within 1% for most parameters. Compared to SRμCT, BV/TV, Tb.Sp, and Tb.Th by CμCT were underestimated, whereas Tb.N and Tb.Th* were overestimated. A Bland and Altman plot showed no bias for Tb.N or DA. Bias was -0.8±1.0%, +5.0±1.1 μm, -5.9±6.3 μm, and -5.7±29.1 μm for BV/TV, Tb.Th*, Tb.Th, and Tb.Sp, respectively, and the differences did not vary over the range of values. Although systematic differences were noted between SRμCT and CμCT values, correlations between the techniques were high and the differences would probably not change the discrimination between study groups. CμCT provides a reliable 3D assessment of human defatted bone when working at the 0.23° or 0.45° rotation step; the 0.9° rotation step may be insufficiently accurate for morphological bone analysis.
Segmentation-driven compound document coding based on H.264/AVC-INTRA.
Zaghetto, Alexandre; de Queiroz, Ricardo L
2007-07-01
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor to both text and pictures. For that, distortion is taken into account differently in text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of a segmentation-driven quantizer adaptation method applied to compress documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual loss in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
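The QP-adaptation idea reduces to a small map from a macroblock-level text mask to quantization parameters. In this sketch the offsets are hypothetical choices, not the paper's tuned values, with the result clipped to the H.264/AVC QP range:

```python
# Segmentation-driven quantizer adaptation: spend more bits (lower QP) on
# text macroblocks, fewer on pictorial ones.
import numpy as np

def qp_map(text_mask_mb, base_qp=30, text_delta=-8, picture_delta=+2):
    """text_mask_mb: (rows, cols) booleans, one entry per 16x16 macroblock."""
    qp = np.full(text_mask_mb.shape, base_qp + picture_delta, dtype=int)
    qp[text_mask_mb] = base_qp + text_delta   # sharper text edges
    return np.clip(qp, 0, 51)                 # H.264/AVC QP range

mask = np.zeros((4, 6), dtype=bool)
mask[0, :] = True                              # top row of macroblocks is text
print(qp_map(mask))
```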
2010-11-05
The Food and Drug Administration (FDA) is announcing the reclassification of the full-field digital mammography (FFDM) system from class III (premarket approval) to class II (special controls). The device type is intended to produce planar digital x-ray images of the entire breast; this generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component parts, and accessories. The special control that will apply to the device is the guidance document entitled "Class II Special Controls Guidance Document: Full-Field Digital Mammography System." FDA is reclassifying the device into class II (special controls) because general controls along with special controls will provide a reasonable assurance of safety and effectiveness of the device. Elsewhere in this issue of the Federal Register, FDA is announcing the availability of the guidance document that will serve as the special control for this device.
Zhu, Wensheng; Yuan, Ying; Zhang, Jingwen; Zhou, Fan; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-02-01
The aim of this paper is to systematically evaluate a biased sampling issue associated with genome-wide association analysis (GWAS) of imaging phenotypes for most imaging genetic studies, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Specifically, the original sampling scheme of these imaging genetic studies is primarily the retrospective case-control design, whereas most existing statistical analyses of these studies ignore such sampling scheme by directly correlating imaging phenotypes (called the secondary traits) with genotype. Although it has been well documented in genetic epidemiology that ignoring the case-control sampling scheme can produce highly biased estimates, and subsequently lead to misleading results and suspicious associations, such findings are not well documented in imaging genetics. We use extensive simulations and a large-scale imaging genetic data analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data to evaluate the effects of the case-control sampling scheme on GWAS results based on some standard statistical methods, such as linear regression methods, while comparing it with several advanced statistical methods that appropriately adjust for the case-control sampling scheme. Copyright © 2016 Elsevier Inc. All rights reserved.
Document image improvement for OCR as a classification problem
NASA Astrophysics Data System (ADS)
Summers, Kristen M.
2003-01-01
In support of the goal of automatically selecting methods for enhancing an image to improve the accuracy of OCR on that image, we cast the decision of whether to apply each of a set of methods as a supervised classification problem for machine learning. We characterize each image by a combination of two sets of measures: one set intended to reflect the degree of particular types of noise present in documents in a single font of Roman or similar script, and a more general set based on connected-component statistics. We consider several potential methods of image improvement, each of which constitutes its own two-class classification problem, according to whether transforming the image with that method improves the accuracy of OCR. In our experiments, the results varied across the different image transformation methods, but the system made the correct choice in 77% of the cases in which the decision affected the OCR score (in the range [0,1]) by at least .01, and it made the correct choice 64% of the time overall.
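A sketch of this framing with a stand-in classifier and synthetic features; the paper does not prescribe the learner used here, and the 0.01 improvement cutoff for labels mirrors the threshold quoted above:

```python
# One binary classifier per candidate transform: predict from document-level
# features whether applying the transform will raise OCR accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))        # per-image feature vectors (synthetic)
# label: 1 if OCR score improved by >= 0.01 after the transform, else 0
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```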
MINER - A Mobile Imager of Neutrons for Emergency Responders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, John E. M.; Brennan, James S.; Gerling, Mark D
2014-10-01
We have developed a mobile fast neutron imaging platform to enhance the capabilities of emergency responders in the localization and characterization of special nuclear material. This mobile imager of neutrons for emergency responders (MINER) is based on the Neutron Scatter Camera, a large segmented imaging system that was optimized for large-area search applications. Due to the reduced size and power requirements of a man-portable system, MINER has been engineered to fit a much smaller form factor, and to be operated from either a battery or AC power. We chose a design that enabled omnidirectional (4π) imaging, with only a ~twofold decrease in sensitivity compared to the much larger neutron scatter cameras. The system was designed to optimize its performance for neutron imaging and spectroscopy, but it does also function as a Compton camera for gamma imaging. This document outlines the project activities, broadly characterized as system development, laboratory measurements, and deployments, and presents sample results in these areas. Additional information can be found in the documents that reside in WebPMIS.
Mapping DICOM to OpenDocument format
NASA Astrophysics Data System (ADS)
Yu, Cong; Yao, Zhihong
2009-02-01
In order to enhance the readability, extensibility, and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748)[1] and a multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), which is also based on XML. As a result, the new format separates content (including text and images) from display style. Meanwhile, since OpenDocument files are ZIP-compressed archives, the new kind of DICOM file benefits from ZIP's lossless compression, reducing file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
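A minimal sketch of the packaging idea, under assumed element names: DICOM text elements go to content.xml, display properties to styles.xml, and pixel data (converted beforehand) to a Pictures/ entry, all inside one ZIP archive as ODF requires. This illustrates the container layout only, not the authors' actual mapping.

    import zipfile

    content_xml = """<?xml version="1.0" encoding="UTF-8"?>
    <office:document-content
        xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0">
      <office:body><office:text>
        <!-- e.g. tag (0010,0010) PatientName mapped to a text paragraph -->
      </office:text></office:body>
    </office:document-content>"""

    with zipfile.ZipFile("dicom_mapped.odt", "w", zipfile.ZIP_DEFLATED) as z:
        # ODF requires the mimetype entry to come first, stored uncompressed.
        z.writestr("mimetype", "application/vnd.oasis.opendocument.text",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("content.xml", content_xml)        # content, separated from style
        z.writestr("styles.xml", "<!-- display style only (stub) -->")
        z.writestr("Pictures/frame0.png", b"")        # placeholder for converted pixel data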
Multimedia platform for authoring and presentation of clinical rounds in cardiology
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Lapstra, Lorelle
2003-05-01
We developed a multimedia presentation platform that allows data to be retrieved from digital and analog modalities and a script of a clinical presentation to be prepared in an XML format. The system was designed for multi-disciplinary cardiac conferences involving different cardiology specialists as well as cardiovascular surgeons. A typical presentation requires preparing summary reports of the data obtained from the different investigations and imaging techniques. An XML-based scripting methodology was developed for the preparation of clinical presentations. The image display program uses the generated script to present the different images sequentially with pre-determined display settings. The ability to prepare and present clinical conferences electronically is more efficient and less time-consuming than conventional settings using analog and digital documents, films, and videotapes. The script of a given presentation can further be saved as part of the patient record for subsequent review of the documents and images that supported a given medical or therapeutic decision. This also constitutes a perfect documentation method for the surgeons and physicians responsible for therapeutic procedures decided upon during the clinical conference, allowing them to review the relevant data that supported a given therapeutic decision.
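The abstract does not publish the schema, so the following sketch only suggests what such an XML presentation script might look like; every element and attribute name here is invented for illustration.

    import xml.etree.ElementTree as ET

    # Build a two-image presentation step; names like "step" and "layout" are assumptions.
    root = ET.Element("presentation", patient="ANON-0042", conference="cardiac-rounds")
    step = ET.SubElement(root, "step", order="1", layout="side-by-side")
    ET.SubElement(step, "image", src="echo/loop01.avi", display="cine")
    ET.SubElement(step, "image", src="angio/series3/frame12.dcm", display="cardiac-window")
    ET.SubElement(step, "caption").text = "Pre-operative echocardiography vs. angiography"
    ET.ElementTree(root).write("rounds_script.xml", encoding="utf-8", xml_declaration=True)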
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques, such as Otsu's method, and local gray-level thresholding techniques, such as edge-based segmentation or adaptive thresholding, are effective at extracting character objects from simple or slowly varying backgrounds. However, they prove insufficient when the background contains sharply varying contours or fonts of different sizes. A stroke model is proposed that depicts the local features of character objects as double edges within a predefined width. This model enables thin connected components to be detected selectively, while relatively large background structures that appear complex are ignored. Moreover, since the stroke-width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from varied backgrounds. Using a measure of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method processes documents with different backgrounds by applying the appropriate method to each. Experiments on extracting handwriting from check images, as well as machine-printed characters from scene images, demonstrate the effectiveness of the proposed model.
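The double-edge idea can be approximated morphologically: a dark stroke no wider than w is bounded by two opposite edges, so a gray-level closing with a window slightly wider than w erases it, and subtracting the original isolates it. The sketch below uses this black top-hat as a stand-in; the paper's actual edge-pairing test is more specific.

    import numpy as np
    from scipy import ndimage

    def stroke_response(gray, width):
        # Closing with a window wider than the stroke removes thin dark strokes;
        # subtracting the input leaves a strong response exactly on those strokes.
        size = width + 2
        closed = ndimage.grey_closing(gray, size=(size, size))
        return closed - gray

    gray = np.full((40, 40), 200, dtype=np.int32)     # light background
    gray[10:30, 19:22] = 40                           # one 3-pixel-wide vertical stroke
    mask = stroke_response(gray, width=3) > 80        # large wherever a thin stroke sits
    print(mask[20, 19:22], mask[0, 0])                # [ True  True  True ] False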
Bolliger, Stephan A; Thali, Michael J; Ross, Steffen; Buck, Ursula; Naether, Silvio; Vock, Peter
2008-02-01
The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques in forensic medicine and pathology in order to augment current examination techniques or even offer alternative methods. The project rests on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualize the interior of the body. Three-dimensional surface scanning has delivered remarkable results in the 3D documentation of patterned injuries, objects of forensic interest, and whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI is, in addition, well suited to the examination of surviving victims of assault, especially choking, and helps visualize internal injuries not seen on external examination of the victim. Apart from providing the accuracy and three-dimensionality that conventional documentation lacks, these techniques allow the corpse and the crime scene to be re-examined even decades later, after burial of the corpse and release of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.
All That Remains of Exploded Star
2011-10-24
Infrared images from NASA's Spitzer Space Telescope and Wide-field Infrared Survey Explorer (WISE) are combined in this image of RCW 86, the dusty remains of the oldest documented example of an exploding star, or supernova.
Voigt, Jens-Uwe; Pedrizzetti, Gianni; Lysyansky, Peter; Marwick, Tom H; Houle, Helen; Baumann, Rolf; Pedri, Stefano; Ito, Yasuhiro; Abe, Yasuhiko; Metz, Stephen; Song, Joo Hyun; Hamilton, Jamie; Sengupta, Partho P; Kolias, Theodore J; d'Hooge, Jan; Aurigemma, Gerard P; Thomas, James D; Badano, Luigi Paolo
2015-01-01
Recognizing the critical need for standardization in strain imaging, in 2010 the European Association of Echocardiography (now the European Association of Cardiovascular Imaging, EACVI) and the American Society of Echocardiography (ASE) invited technical representatives from all interested vendors to participate in a concerted effort to reduce intervendor variability of strain measurement. As an initial product of the work of the EACVI/ASE/Industry initiative to standardize deformation imaging, we prepared this technical document, which is intended to provide definitions, names, abbreviations, formulas, and procedures for the calculation of physical quantities derived from speckle-tracking echocardiography, and thus to create a common standard. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
OCAMS: The OSIRIS-REx Camera Suite
NASA Astrophysics Data System (ADS)
Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.
2018-02-01
The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record the asteroid's spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS comprises three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolution, revealing surface texture and morphology. Although the imaging requirements divide naturally among the three cameras, the cameras retain a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt, and Jupiter-Trojan asteroids.
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2014 CFR
2014-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2012 CFR
2012-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... must be submitted electronically in machine-readable format. PDF images created by scanning a paper document may not be submitted, except in cases in which a word-processing version of the document is not...
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2013 CFR
2013-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
47 CFR 0.409 - Commission policy on private printing of FCC forms.
Code of Federal Regulations, 2011 CFR
2011-10-01
... in quality to the original document, without change to the page size, image size, configuration of... document.” (4) Do not add to the form any other symbol, word or phrase that might be construed as...
Compton Dry-Cask Imaging System
None
2017-12-09
The Compton Dry-Cask Imaging Scanner verifies and documents the presence of spent nuclear fuel rods in dry-cask storage and determines their isotopic composition without moving or opening the cask. For more information about this project, visit http://www.inl.gov/rd100/2011/compton-dry-cask-imaging-system/
Interactive Digital Image Manipulation System (IDIMS)
NASA Technical Reports Server (NTRS)
Fleming, M. D.
1981-01-01
The implementation of the Interactive Digital Image Manipulation System (IDIMS) is described. The system runs on an HP-3000 Series 3 minicomputer and provides a complete image geoprocessing capability for raster-formatted data in a self-contained package. It is easily installed, documentation is provided, and vendor support is available.
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2011 CFR
2011-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2012 CFR
2012-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2013 CFR
2013-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2010 CFR
2010-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
17 CFR 232.304 - Graphic, image, audio and video material.
Code of Federal Regulations, 2014 CFR
2014-04-01
... video material. 232.304 Section 232.304 Commodity and Securities Exchanges SECURITIES AND EXCHANGE... Submissions § 232.304 Graphic, image, audio and video material. (a) If a filer includes graphic, image, audio or video material in a document delivered to investors and others that is not reproduced in an...
A method for automatically abstracting visual documents
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.
1994-01-01
Visual documents--motion sequences on film, videotape, and digital recordings--constitute a major source of information for the Space Agency, as well as for other government and private-sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training systems. The algorithm reduces 51 minutes of video sequences to 134 frames, a reduction of information in the range of 700:1.
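The abstract does not state the selection criterion, so the sketch below stands in with a common greedy scheme: keep a frame whenever it differs enough from the last kept frame. The threshold and the difference measure are assumptions, not the article's method.

    import numpy as np

    def key_frames(frames, threshold=0.25):
        # Keep frame i when its mean absolute difference from the last kept
        # frame exceeds the threshold (pixel values normalized to [0, 1]).
        kept = [0]
        for i in range(1, len(frames)):
            diff = np.abs(frames[i].astype(float) - frames[kept[-1]].astype(float)).mean()
            if diff / 255.0 > threshold:
                kept.append(i)
        return kept

    # Three constant "shots" stand in for a video; one key frame per shot survives.
    video = np.concatenate([np.full((100, 48, 64), v, dtype=np.uint8) for v in (0, 128, 255)])
    print(key_frames(video))                      # [0, 100, 200]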
2014-01-01
Introduction: Fixed orthodontic appliances, despite years of research and development, still raise controversy because of their potentially destructive influence on enamel. It is therefore necessary to quantitatively assess the condition, and in particular the thickness, of tooth enamel in order to select the appropriate orthodontic bonding and debonding methodology, and to assess the quality of the enamel after treatment and clean-up so as to choose the most advantageous course of treatment. One suitable assessment method is optical coherence tomography, with which the measurement of enamel thickness and the 3D reconstruction of image sequences can be performed fully automatically. Material and method: OCT images of 180 teeth were acquired with a Topcon 3D OCT-2000 camera. The images were obtained in vitro by performing 7 sequential stages of treatment on all the teeth: before any interference with the enamel, polishing with orthodontic paste, etching, application of a bonding system, orthodontic bracket bonding, orthodontic bracket removal, and cleaning off adhesive residue. A dedicated method for the analysis and processing of the images, involving median filtering, mathematical morphology, binarization, polynomial approximation, and the active contour method, is proposed. Results: The method measures tooth enamel thickness automatically in 5 seconds on a Core i5 CPU M460 @ 2.5 GHz with 4 GB RAM. For one patient, the proposed analysis confirms an enamel thickness loss of 80 μm (from 730 ± 165 μm to 650 ± 129 μm) after polishing with paste, a loss of 435 μm (from 730 ± 165 μm to 295 ± 55 μm) after etching, and growth of a layer 265 μm thick (from 295 ± 55 μm after etching to 560 ± 98 μm), which is the adhesive system. After removal of an orthodontic bracket, the adhesive residue was 105 μm thick, and after cleaning it off, the enamel thickness was 605 μm. The enamel thickness before and after the whole treatment decreased by about 125 μm. Conclusions: This paper presents an automatic quantitative method for the assessment of tooth enamel thickness. The method has proven to be an effective diagnostic tool that allows evaluation of the surface and cross-section of tooth enamel after orthodontic treatment with fixed thin-arched braces and supports proper selection of the methodology and course of treatment. PMID:24755213
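A compressed sketch of the measurement idea follows: median filtering, binarization, a morphological clean-up, and a per-column thickness count. The polynomial approximation and active-contour steps of the described pipeline are omitted, and the pixel calibration px_um is an assumed value, so this is an illustration rather than the authors' implementation.

    import numpy as np
    from scipy import ndimage

    def enamel_thickness_um(bscan, px_um=5.0):
        smooth = ndimage.median_filter(bscan, size=3)             # suppress OCT speckle
        binary = smooth > smooth.mean()                           # crude global threshold
        binary = ndimage.binary_closing(binary, np.ones((3, 3)))  # close small gaps
        return binary.sum(axis=0).mean() * px_um                  # mean layer height per A-scan

    # Synthetic B-scan: a bright 20-pixel band plays the enamel layer.
    rng = np.random.default_rng(0)
    scan = np.zeros((100, 200)); scan[20:40, :] = 1.0
    scan += rng.normal(0.0, 0.05, scan.shape)
    print(round(enamel_thickness_um(scan), 1), "um")              # ~100.0 at 5 um/pixel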