Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, discrete wavelet transform (DWT) and filtering techniques were analyzed individually, and their performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to ultrasound images and comparatively analyzed against the original images without the algorithm. Applying DWT and filtering techniques alone caused information loss and residual noise, and did not yield the greatest noise reduction. Conversely, an image fusion method applying SRAD-original conditions, i.e., fusing the output of speckle-reducing anisotropic diffusion (SRAD) with the original image, preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance on the ultrasound images. The best denoising technique identified in this study was confirmed to have high potential for clinical application.
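As a concrete illustration of the fusion idea, the following minimal Python sketch fuses a despeckled image with the original via a DWT coefficient rule. It assumes the pywt and scipy libraries; a median filter stands in for SRAD, and the fusion rule (average the approximation bands, keep the larger-magnitude detail coefficients) is a common generic choice rather than the authors' exact rule.

import numpy as np
import pywt
from scipy.ndimage import median_filter

def fuse_with_original(noisy, wavelet="db4", level=2):
    # stand-in for SRAD: any despeckling filter can be fused with the original
    despeckled = median_filter(noisy, size=3)
    ca = pywt.wavedec2(noisy, wavelet, level=level)       # original branch
    cb = pywt.wavedec2(despeckled, wavelet, level=level)  # filtered branch
    fused = [(ca[0] + cb[0]) / 2.0]                       # average approximations
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)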
Information retrieval based on single-pixel optical imaging with quick-response code
NASA Astrophysics Data System (ADS)
Xiao, Yin; Chen, Wen
2018-04-01
Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code, which is then treated as the input image in the input plane of a ghost imaging setup. After the measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct an image (in QR-code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast as a post-processing step. Taking advantage of the high error-correction capability of the QR code, the original information can then be recovered with high quality. Compared to the previous method, ours obtains a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size, the more measurements are needed; for our method, images of different sizes can be converted into QR codes of the same small size by a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
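The low-quality initial guess mentioned above comes from the standard ensemble correlation of ghost imaging. A minimal numpy sketch, with illustrative variable names not taken from the paper:

import numpy as np

def gi_reconstruct(patterns, bucket):
    # patterns: (M, H, W) illumination patterns; bucket: (M,) single-pixel values
    patterns = np.asarray(patterns, dtype=float)
    bucket = np.asarray(bucket, dtype=float)
    db = bucket - bucket.mean()
    # <(B - <B>) I(x, y)> = <B I> - <B><I>, the traditional correlation estimate
    return np.tensordot(db, patterns, axes=(0, 0)) / len(bucket)

# usage sketch, for a binary QR-code object qr of shape (64, 64):
# rng = np.random.default_rng(0)
# patterns = rng.random((5000, 64, 64))
# bucket = (patterns * qr).sum(axis=(1, 2))
# guess = gi_reconstruct(patterns, bucket)  # low-quality QR estimate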
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one such search engine: it performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains some scale variation relative to the template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images: a few artificially scaled input images are sent to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over 1~1.2. The original scale of the input image can thus be measured by locating the largest correlation value when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8: scale factors beyond 1.2 are measured by first rescaling the input image by factors of 1/2, 1/3, and 1/4, correlating the rescaled input with the template images, and estimating the new corresponding scale factor inside 0.8~1.2.
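A rough Python transcription of the time-sequential scaled method, under stated assumptions (scipy.ndimage.zoom for rescaling, a 0.05 search step, and a crude crop-based normalized correlation in place of the optical correlator):

import numpy as np
from scipy.ndimage import zoom

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    return (a[:h, :w] * b[:h, :w]).mean()

def estimate_scale(image, template, pre_factors=(1.0, 1/2, 1/3, 1/4)):
    best_corr, best_scale = -np.inf, None
    for pre in pre_factors:                      # bring large scales into 0.8~1.2
        coarse = zoom(image, pre)
        for m in np.arange(0.8, 1.2001, 0.05):   # candidate scale of the coarse image
            c = ncc(zoom(coarse, 1.0 / m), template)
            if c > best_corr:
                best_corr, best_scale = c, m / pre  # total scale of the original input
    return best_scale

With the pre-factors 1, 1/2, 1/3, and 1/4, the search extends toward the stated upper limit of 4.8.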
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method, Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than several other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via concatenation of the processed subhistograms. Lastly, a normalization method is deployed on the intensity levels, and the processed image is integrated with the input image. 100 benchmark images from a public image database, the CVG-UGR-Database, are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
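A hedged sketch of the segmentation-plus-equalization core, assuming the four segments are delimited at m - s, m, and m + s (one plausible reading of "based on the mean and variance"); the final blend weight with the input is illustrative, not the paper's value:

import numpy as np

def mvsihe_core(gray, bins=64):
    g = gray.astype(float)
    m, s = g.mean(), g.std()
    edges = [g.min(), max(g.min(), m - s), m, min(g.max(), m + s), g.max() + 1e-6]
    out = np.zeros_like(g)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (g >= lo) & (g < hi)
        if not mask.any():
            continue
        vals = g[mask]
        hist, bin_edges = np.histogram(vals, bins=bins)
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]
        # equalize each segment within its own intensity range [lo, hi)
        out[mask] = lo + np.interp(vals, bin_edges[1:], cdf) * (hi - lo)
    return 0.7 * out + 0.3 * g   # integration with the input image (weight assumed)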
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
Sharpening of Hierarchical Visual Feature Representations of Blurred Images.
Abdelhack, Mohamed; Kamitani, Yukiyasu
2018-01-01
The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
NASA Astrophysics Data System (ADS)
Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.
2017-06-01
Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Both the radiometric and spatial resolution of Landsat sensor images are well suited to cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input into the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying only the one or two original Landsat images of each combination, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input of STARFM did not significantly improve the STARFM predictions compared to using only one, and predictions using Landsat images between July and August as input were the most accurate. Including all STARFM-predicted images together with the Landsat images significantly increased the average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only the STARFM-predicted images for key dates decreased the average classification error by 2 percentage points (from 21% to 19%). In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points, from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from the full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which offers an alternative method of feature selection for future research.
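The scenario comparison reduces to training the same classifier on different per-pixel feature stacks. An illustrative scikit-learn sketch, where the arrays and names are placeholders rather than the study's data:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def scenario_error(features, labels):
    # features: (n_pixels, n_bands) stacked image bands; labels: crop types
    Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.5,
                                          random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
    return 1.0 - clf.score(Xte, yte)   # classification error

# err1 = scenario_error(landsat_bands, y)                                # Landsat only
# err2 = scenario_error(np.hstack([landsat_bands, starfm_all]), y)       # + all STARFM
# err3 = scenario_error(np.hstack([landsat_bands, starfm_key_dates]), y) # + key dates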
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
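The four steps translate directly into a few lines of numpy. In this sketch, the pixelwise minimum is taken as the common component, a simple choice consistent with this family of methods; the paper's exact operator may differ:

import numpy as np

def false_color_fuse(a, b):
    # a, b: registered gray-level images (e.g. visual and thermal), same shape
    common = np.minimum(a, b)                 # step 1: common component
    unique_a = a - common                     # step 2: unique components
    unique_b = b - common
    red = np.clip(a - unique_b, 0, None)      # step 3: enhance sensor-specific detail
    green = np.clip(b - unique_a, 0, None)
    blue = np.zeros_like(a)
    return np.dstack([red, green, blue])      # step 4: red/green display channels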
Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure
NASA Astrophysics Data System (ADS)
Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.
2017-05-01
Different combinations of input parameters to filament identification algorithms, such as DisPerSE and FilFinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure, and a given skeleton may not be as good a representation as another. Previously, there has been no mathematical "goodness-of-fit" measure to compare output skeletons to the input image; thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or "best," skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of "big data."
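In practice the measure can be computed with an off-the-shelf SSIM routine. A sketch assuming scikit-image, where smoothing the one-pixel-wide skeleton before comparison is our assumption rather than a prescription from the paper:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def best_skeleton(image, skeletons, smooth_sigma=2.0):
    # skeletons: list of binary, one-pixel-wide skeleton arrays
    img = image.astype(float)
    rng = img.max() - img.min()
    scores = [structural_similarity(
                  img, gaussian_filter(s.astype(float) * rng, smooth_sigma),
                  data_range=rng)
              for s in skeletons]
    return int(np.argmax(scores)), scores    # index of the optimum skeleton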
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial, and achieving near-diffraction-limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude-reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter-class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with that of the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion; sub-2% photometric precision is readily obtained when this parameter is well estimated.
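A toy Fourier-domain version of the pipeline conveys the sensitivity test: blur a reference image with a simulated atmospheric transfer function, deconvolve with a (possibly mismatched) model speckle transfer function, and measure the residual intensity error. Gaussian transfer functions stand in for the real ones here:

import numpy as np

def gaussian_otf(shape, width):
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * (np.pi * width) ** 2 * (fx ** 2 + fy ** 2))

def photometric_error(truth, atm_width=6.0, model_width=6.0, eps=1e-3):
    observed = np.fft.ifft2(np.fft.fft2(truth)
                            * gaussian_otf(truth.shape, atm_width)).real
    stf = gaussian_otf(truth.shape, model_width)   # model speckle transfer function
    recon = np.fft.ifft2(np.fft.fft2(observed) * stf / (stf ** 2 + eps)).real
    return np.sqrt(np.mean(((recon - truth) / truth.mean()) ** 2))

# model_width != atm_width mimics an error in the AO-derived parameter that
# characterizes the atmospheric distortion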
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2006-01-01
A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data (the largest set produced in this work), demonstrated the code's ability to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene and uses a data-mask image for selecting data and blending input scenes. Under user control, the program can operate on small parts of the output image space, with checkpoint and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units, and it can retrieve input data and store output data at locations remote from the processors on which it is executed.
Synaptic plasticity in a cerebellum-like structure depends on temporal order
NASA Astrophysics Data System (ADS)
Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty
1997-05-01
Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.
Simplifying [18F]GE-179 PET: are both arterial blood sampling and 90-min acquisitions essential?
McGinnity, Colm J; Riaño Barros, Daniela A; Trigg, William; Brooks, David J; Hinz, Rainer; Duncan, John S; Koepp, Matthias J; Hammers, Alexander
2018-06-11
The NMDA receptor radiotracer [18F]GE-179 has been used with 90-min scans and arterial plasma input functions. We explored whether (1) arterial blood sampling is avoidable and (2) shorter scans are feasible. For 20 existing [18F]GE-179 datasets, we generated (1) standardised uptake values (SUVs) over eight intervals; (2) volume of distribution (VT) images using population-based input functions (PBIFs), scaled using one parent plasma sample; and (3) VT images using three shortened datasets, using the original parent plasma input functions (ppIFs). Correlations with the original ppIF-derived 90-min VTs increased for later-interval SUVs (maximal ρ = 0.78; 80-90 min). They were strong for PBIF-derived VTs (ρ = 0.90), but the between-subject coefficient of variation increased. Correlations were very strong for the 60/70/80-min original ppIF-derived VTs (ρ = 0.97-1.00), which suffered regionally variant negative bias. Where arterial blood sampling is available, reduction of scan duration to 60 min is feasible, but with negative bias. The performance of SUVs was more consistent across participants than PBIF-derived VTs.
Using virtual data for training deep model for hand gesture recognition
NASA Astrophysics Data System (ADS)
Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.
2018-05-01
Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply trained model for hand gesture recognition from hand images. The authors trained two deep convolutional neural networks: the first architecture produces the hand position as a 2D vector from an input hand image, and the second predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture, with split input, achieves an accuracy rate of 85.2%. The authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process; the interest of this method of data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: the need for copious amounts of labelled data during training.
Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou
2017-07-01
In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that produced a given CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. We propose two approaches: the first aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors; the second identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer-dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of a CT image with a detection rate of at least 94% and that it achieves better performance than the sensor pattern noise (SPN) based strategy proposed for consumer camera devices.
Color preservation for tone reproduction and image enhancement
NASA Astrophysics Data System (ADS)
Hsin, Chengho; Lee, Zong Wei; Lee, Zheng Zhan; Shin, Shaw-Jyh
2014-01-01
Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an excessively colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way; linearizing the output luminance is the key idea behind this method. In addition, a lightness difference metric and a colorfulness difference metric are proposed to evaluate the performance of color preservation methods. Experiments show that the proposed method performs consistently better than existing approaches.
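A minimal sketch of ratio-preserving recoloring with a chroma control; the blend factor k is a crude stand-in for the paper's more principled chroma adjustment:

import numpy as np

def recolor(rgb_in, lum_in, lum_out, k=0.8):
    # rgb_in in [0, 1]; lum_in/lum_out are the input and processed luminances
    ratio = lum_out / np.maximum(lum_in, 1e-6)
    rgb = rgb_in * ratio[..., None]               # preserves tri-chromatic ratios
    gray = rgb.mean(axis=-1, keepdims=True)
    return np.clip(gray + k * (rgb - gray), 0, 1) # k < 1 tempers output chroma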
Hop, Skip and Jump: Animation Software.
ERIC Educational Resources Information Center
Eiser, Leslie
1986-01-01
Discusses the features of animation software packages, reviewing eight commercially available programs. Information provided for each program includes name, publisher, current computer(s) required, cost, documentation, input device, import/export capabilities, printing possibilities, what users can originate, types of image manipulation possible,…
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computational cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
Semantic Image Segmentation with Contextual Hierarchical Models.
Seyedhosseini, Mojtaba; Tasdizen, Tolga
2016-05-01
Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on the input image patches and does not make use of any fragments or shape examples; hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as "smooth" and "non-smooth", describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to recover more image details, the new method uses an iterative refinement of the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
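The residual feedback loop is easy to state in code. A sketch assuming an exact integer zoom factor and scipy-based resampling; the AHF reconstruction that produces the initial high-resolution estimate is not reproduced:

import numpy as np
from scipy.ndimage import zoom

def refine(hr, lr, factor, iters=10, step=1.0):
    for _ in range(iters):
        residual = lr - zoom(hr, 1.0 / factor)   # mismatch at the LR scale
        hr = hr + step * zoom(residual, factor)  # back-project to the HR grid
    return hr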
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice, and image. This study proposes a novel image-encoded forecasting method in which input and output are binary digital two-dimensional (2D) images transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling, and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Owing to computing capabilities originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load forecasting dataset from the Global Energy Forecasting Competition 2012. PMID:27281032
Unsupervised texture image segmentation by improved neural network ART2
NASA Technical Reports Server (NTRS)
Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco
1994-01-01
We propose a segmentation algorithm for texture images for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The cluster number in the output layer can be increased by an auto-growing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.
Retinal Origin of Direction Selectivity in the Superior Colliculus
Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua
2017-01-01
Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394
Self-amplified optical pattern recognition system
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1994-01-01
A self-amplifying optical pattern recognizer includes a geometric system configuration similar to that of a Vander Lugt holographic matched-filter configuration, with a photorefractive crystal specifically oriented with respect to the input beams. An extraordinarily polarized, spherically converging object-image beam is formed by laser illumination of an input object image and applied through a photorefractive crystal, such as a barium titanate (BaTiO3) crystal. A volume or thin-film dif…
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images
NASA Astrophysics Data System (ADS)
Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.
2016-03-01
Due to the limited field of view of digital X-ray imaging systems, several individual sector images are required to capture the posture of an individual in the standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins by pre-processing the input images: removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity-based registration algorithm, and the identified registration transformations are used to map the original sector images into the panorama image. Our method relies primarily on the anatomical content of the images to generate the panoramas, as opposed to external markers employed to aid the alignment process. Current results show robust edge detection prior to registration, and we have tested our approach on 26 patient datasets by comparing the automatically stitched panoramas to manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically and manually stitched images.
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to human visual perception; to this end, the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm; an integral image is used to compute the average intensity around every pixel at a certain scale, which serves as the local intensity component of the image, and the detail intensity component is computed from it. The third problem is to adjust the overall image intensity; an S-type curve is derived from the original image information, and the local intensity component is adjusted according to this curve. The fourth problem is to enhance detail information; the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local intensity component and the adjusted detail intensity component is the final intensity. Converting the synthetic intensity together with the other two dimensions to the output color space yields the final processed image.
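The integral-image step that makes the local averaging fast is sketched below; the box sum costs four lookups per pixel regardless of window size:

import numpy as np

def local_mean(intensity, radius):
    pad = np.pad(intensity.astype(float), ((1, 0), (1, 0)))
    ii = pad.cumsum(0).cumsum(1)          # integral image
    h, w = intensity.shape
    y0 = np.clip(np.arange(h) - radius, 0, h)
    y1 = np.clip(np.arange(h) + radius + 1, 0, h)
    x0 = np.clip(np.arange(w) - radius, 0, w)
    x1 = np.clip(np.arange(w) + radius + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    box = ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]
    return box / ((Y1 - Y0) * (X1 - X0))

# detail component: I - local_mean(I, radius); both components are then
# remapped by S-type curves and recombined, per the text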
Real-Time Fourier Transformed Holographic Associative Memory With Photorefractive Material
NASA Astrophysics Data System (ADS)
Changsuk, Oh; Hankyu, Park
1989-02-01
We describe a volume holographic associative memory using a photorefractive material and a conventional planar mirror. Multiple holograms are generated with two angularly multiplexed writing beams and a Fourier-transformed object beam in a BaTiO3 crystal at 0.6328 μm. A complete image can be recalled successfully from partial input of an originally stored image. The system is shown to be useful for optical implementation of real-time associative memory and location-addressable memory.
Fusion of infrared and visible images based on saliency scale-space in frequency domain
NASA Astrophysics Data System (ADS)
Chen, Yanfei; Sang, Nong; Dan, Zhiping
2015-12-01
A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. The focus of human attention is directed towards the salient targets, which carry the most important information in the image. For given registered infrared and visible images, firstly, visual features are extracted to obtain the input hypercomplex matrix. Secondly, the hypercomplex Fourier transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing the saliency map entropy. Thirdly, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures thermal target information at different scales of the infrared image.
Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains
NASA Astrophysics Data System (ADS)
Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao
2017-11-01
A new gyrator transform (GT) encryption scheme is proposed in this paper: a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in GT domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are thus integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The method of applying QR codes and fingerprints in GT domains possesses much potential for information security.
Guided filtering for solar image/video processing
NASA Astrophysics Data System (ADS)
Xu, Long; Yan, Yihua; Cheng, Jun
2017-06-01
A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily discern important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it further highlights fibrous structures on/beyond the solar disk. These fibrous structures clearly trace the progress of solar flares, prominence and coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm significantly enhances the visual quality of solar images beyond the original input and several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities from recorded images/movies.
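For reference, the guided filter itself (He et al.) fits a local linear model of the output on the guide image; a self-contained numpy version is below. The paper's full pipeline (noise removal plus fibrous-structure enhancement) is built around a filter of this kind:

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)            # local linear model: p ~ a * I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# one plausible enhancement use: base = guided_filter(img, img);
# enhanced = base + gain * (img - base)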
Bas-relief generation using adaptive histogram equalization.
Sun, Xianfang; Rosin, Paul L; Martin, Ralph R; Langbein, Frank C
2009-01-01
An algorithm is presented to automatically generate bas-reliefs based on adaptive histogram equalization (AHE), starting from an input height field. A mesh model may alternatively be provided, in which case a height field is first created via orthogonal or perspective projection. The height field is regularly gridded and treated as an image, enabling a modified AHE method to be used to generate a bas-relief with a user-chosen height range. We modify the original image-contrast-enhancement AHE method to also use gradient weights, which enhance the shape features of the bas-relief. To effectively compress the height field, we limit the height-dependent scaling factors used to compute relative height variations in the output from height variations in the input; this prevents any height difference from having too great an effect. Results of AHE over different neighborhood sizes are averaged to preserve information at different scales in the resulting bas-relief. Compared to previous approaches, the proposed algorithm is simple and yet largely preserves the original shape features. Experiments show that our results are, in general, comparable to, and in some cases better than, the best previously published methods.
Cooper, Emily A.; Norcia, Anthony M.
2015-01-01
The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced: a polyphase down-sampled version of the input image in which the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme also has the advantages of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
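The encoder's measurement step is essentially two lines. A sketch with a single global random binary kernel as an assumption (the paper's kernels are local):

import numpy as np
from scipy.signal import convolve2d

def random_measurements(image, kernel_size=4, stride=2, seed=0):
    rng = np.random.default_rng(seed)
    kernel = rng.integers(0, 2, (kernel_size, kernel_size)).astype(float)
    kernel /= max(kernel.sum(), 1.0)              # keep intensities in range
    filtered = convolve2d(image, kernel, mode="same", boundary="symm")
    return filtered[::stride, ::stride]           # polyphase down-sampling

# different seeds give different kernels, i.e. multiple descriptions of the
# same input image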
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, simulation results show that the proposed methods can simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than the state-of-the-art methods in low-bitrate transmission.
Color transfer method preserving perceived lightness
NASA Astrophysics Data System (ADS)
Ueda, Chiaki; Azetsu, Tadahiro; Suetake, Noriaki; Uchino, Eiji
2016-06-01
Color transfer, originally proposed by Reinhard et al., is a method to change the color appearance of an input image by using the color information of a reference image. The purpose of this study is to modify color transfer so that it works well even when the scenes of the input and reference images are not similar. Concretely, a color transfer method with lightness correction and color gamut adjustment is proposed. The lightness correction is applied to preserve the perceived lightness, which is explained by the Helmholtz-Kohlrausch (H-K) effect: vivid colors are perceived as brighter than dull colors of the same lightness. Hence, when the chroma is changed by image processing, the perceived lightness also changes even if the physical lightness is preserved. In the proposed method, by considering the H-K effect, color transfer that preserves the perceived lightness after processing is realized. Furthermore, color gamut adjustment is introduced to address the color gamut problem caused by color space conversion. The effectiveness of the proposed method is verified through experiments.
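For orientation, the Reinhard-style baseline that the method builds on matches per-channel statistics of the input to those of the reference; a sketch in the Lab space (Reinhard et al. originally used the lαβ space). The proposed lightness correction for the H-K effect and the gamut adjustment are not attempted here:

import numpy as np
from skimage import color

def reinhard_transfer(input_rgb, reference_rgb):
    # both images as float RGB in [0, 1]
    src = color.rgb2lab(input_rgb)
    ref = color.rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
    return np.clip(color.lab2rgb(out), 0, 1)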
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images representing different sensor modalities or different frequencies and produces a fused false-color image with higher information content than each of the original images, in which objects are easy to recognize. The algorithm has three steps: first, obtaining the fused gray-level image of the two original images; second, computing generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generating the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than the two original images while reducing noise. However, the fused gray-level image cannot contain all the detail information of the two source images, and details in a gray-level image cannot be discerned as easily as in a color image, so a color fused image is necessary. To create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images, which are displayed through the red, green, and blue channels, respectively, to produce the final fused color image. The method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The results show that the fused false-color image enhances the visibility of certain details, and its resolution is the same as that of the input images.
2004-11-01
…affords exciting opportunities in target detection. The input signal may be a sum of sine waves, an auditory signal, or possibly a visual… rendering of a scene. Since image processing is an area in which the original data are stationary in some sense (auditory signals suffer from… The surviving contents list identifies two worked examples of stochastic resonance (SR), including the identification of a subliminal signal below a threshold.
Material appearance acquisition from a single image
NASA Astrophysics Data System (ADS)
Zhang, Xu; Cui, Shulin; Cui, Hanwen; Yang, Lin; Wu, Tao
2017-01-01
The scope of this paper is to present a method of material appearance acquisition (MAA) from a single image. Here, material appearance is represented by a spatially varying bidirectional reflectance distribution function (SVBRDF). MAA can therefore be reduced to the problem of recovering each pixel's BRDF parameters from an original input image; based on the Blinn-Phong model, these include the diffuse coefficient, specular coefficient, normal, and glossiness. In our method, the MAA workflow includes five main phases: highlight removal, estimation of intrinsic images, shape from shading (SFS), initialization of glossiness, and refinement of the SVBRDF parameters based on IPOPT. The results indicate that the proposed technique can effectively extract the material appearance from a single image.
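The per-pixel reflectance model being inverted is the standard Blinn-Phong equation; a single-light evaluation sketch, with a directional light assumed for illustration:

import numpy as np

def blinn_phong(kd, ks, normal, glossiness, light_dir, view_dir, light=1.0):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = l + v
    h = h / np.linalg.norm(h)                     # half vector
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(n, h), 0.0) ** glossiness
    return light * (diffuse + specular)   # per-pixel unknowns: kd, ks, n, glossiness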
Fast Fourier transform-based Retinex and alpha-rooting color image enhancement
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.; Gonzales, Analysa M.
2015-05-01
Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing the fast Fourier transform (FFT) within the illumination estimation step of the algorithm, improving the speed at which Gaussian blurring filters are applied to the original input image. In addition, alpha-rooting can be used as a separate technique to achieve a sharper image, and its results can be fused with those of the retinex algorithm to achieve the best image possible, as measured by the considered color image enhancement measure (EMEC).
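Alpha-rooting itself is a one-transform operation: keep the Fourier phase and raise the magnitude to a power alpha < 1, which sharpens edges. A minimal sketch; the fusion with the MSR output is not shown:

import numpy as np

def alpha_rooting(image, alpha=0.9):
    F = np.fft.fft2(image)
    F2 = (np.abs(F) ** alpha) * np.exp(1j * np.angle(F))  # |F|^alpha, same phase
    out = np.fft.ifft2(F2).real
    out -= out.min()
    return out / max(out.max(), 1e-12)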
Self-Organizing-Map Program for Analyzing Multivariate Data
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.
2005-01-01
SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data, typified by the data acquired by the Multi-angle Imaging SpectroRadiometer (MISR, a spaceborne instrument). In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image, and the Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all data dimensions. SOM_VIS provides a control panel for selecting a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters for SOM training. SOM_VIS also includes components for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing the 36-dimensional original input vector of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
Dictionary learning based noisy image super-resolution via distance penalty weight model
Han, Yulan; Zhao, Yongping; Wang, Qisong
2017-01-01
In this study, we address the problem of noisy image super-resolution, presenting an algorithm that achieves image super-resolution and denoising simultaneously. A noisy low-resolution (LR) image is what is usually obtained in applications, while most existing algorithms assume that the LR image is noise-free. In the training stage of our method, the LR example images are noise-free, and the dictionary pair does not need to be retrained for different input LR images, even if the noise variance varies. For each input LR image patch, the corresponding high-resolution (HR) image patch is reconstructed as a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which simultaneously performs a second selection on similar atoms. Moreover, LR example patches with the mean pixel value removed are also used to learn the dictionary, rather than just their gradient features. On this basis, we reconstruct an initial estimated HR image and a denoised LR image; combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method exhibits better noise robustness. PMID:28759633
PySE: Python Source Extractor for radio astronomical images
NASA Astrophysics Data System (ADS)
Spreeuw, Hanno; Swinbank, John; Molenaar, Gijs; Staley, Tim; Rol, Evert; Sanders, John; Scheers, Bart; Kuiack, Mark
2018-05-01
PySE finds and measures sources in radio telescope images. It is run with several options, such as the detection threshold (a multiple of the local noise), grid size, and the forced clean beam fit, followed by a list of input image files in standard FITS or CASA format. From these, PySE provides a list of found sources; information such as the calculated background image, source lists in different formats (e.g., text, region files importable in DS9), and other data may be saved. PySE can be integrated into a pipeline; it was originally written as part of the LOFAR Transient Detection Pipeline (TraP, ascl:1412.011).
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into blocks of the original size using the inverse FTR. Further, these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is both visually and quantitatively compared with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Description of textures by a structural analysis.
Tomita, F; Shirai, Y; Tsuji, S
1982-02-01
A structural analysis system for describing natural textures is introduced. The analyzer automatically extracts the texture elements in an input image, measures their properties, classifies them into some distinctive classes (one "ground" class and some "figure" classes), and computes the distributions of the gray level, the shape, and the placement of the texture elements in each class. These descriptions are used for classification of texture images. An analysis-by-synthesis method for evaluating texture analyzers is also presented. We propose a synthesizer which generates a texture image based on the descriptions. By comparing the reconstructed image with the original one, we can see what information is preserved and what is lost in the descriptions.
Reduction of metal artifacts in x-ray CT images using a convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Yanbo; Chu, Ying; Yu, Hengyong
2017-09-01
Patients often carry various metallic implants (e.g., dental fillings, prostheses), causing severe artifacts in x-ray CT images. Although a large number of metal artifact reduction (MAR) methods have been proposed in the past four decades, MAR is still one of the major problems in clinical x-ray CT. In this work, we develop a convolutional neural network (CNN) based MAR framework, which combines information from the original and corrected images to suppress artifacts. Before the MAR, we generate a group of data and train a CNN. First, we numerically simulate various metal-artifact cases and build a dataset, which includes metal-free images (used as references), metal-inserted images, and images corrected by various MAR methods. Then, ten thousand patches are extracted from the database to train the metal artifact reduction CNN. In the MAR stage, the original image and two corrected images are stacked as a three-channel input image for the CNN, and a CNN image with fewer artifacts is generated. The water-equivalent regions in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections are used to replace the metal-affected projections, followed by FBP reconstruction. Experimental results demonstrate the superior metal artifact reduction capability of the proposed method over its competitors.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
Attention model of binocular rivalry
Rankin, James; Rinzel, John; Carrasco, Marisa; Heeger, David J.
2017-01-01
When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt’s propositions. With a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. PMID:28696323
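The model equations are not reproduced in this summary; the toy simulation below illustrates the general ingredients it describes (mutual inhibition, slow adaptation, and a faster attention gain that favors the dominant image). All parameters and the specific dynamics are illustrative, not the authors':

    import numpy as np

    def simulate_rivalry(T=20.0, dt=1e-3, I=(1.0, 1.0),
                         beta=3.0, g=0.5, tau=0.01, tau_a=1.0, tau_att=0.1):
        n = int(T / dt)
        r = np.zeros((n, 2))   # firing rates of the two eye/image populations
        a = np.zeros(2)        # slow adaptation variables
        att = np.ones(2)       # attention gain (feature-selective, faster)
        relu = lambda x: np.maximum(x, 0.0)
        for t in range(1, n):
            inp = np.asarray(I) * att - beta * r[t-1][::-1] - g * a
            r[t] = r[t-1] + dt / tau * (-r[t-1] + relu(inp))
            a += dt / tau_a * (-a + r[t])                          # slow
            att += dt / tau_att * (-att + 1.0 + r[t] - r[t][::-1]) # faster
        return r  # alternating dominance emerges for matched inputs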
Motion video compression system with neural network having winner-take-all function
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)
1997-01-01
A motion video data system includes a compression system, including an image compressor; an image decompressor correlative to the image compressor having an input connected to an output of the image compressor; a feedback summing node having one input connected to an output of the image decompressor; a picture memory having an input connected to an output of the feedback summing node; apparatus for comparing an image stored in the picture memory with a received input image, deducing therefrom the pixels having differences between the stored image and the received image, and retrieving from the picture memory a partial image including only those pixels and applying the partial image to another input of the feedback summing node, whereby an updated decompressed image is produced at the output of the feedback summing node; and a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image, whereby a compressed difference image is produced at the output of the image compressor.
Thalamic input to auditory cortex is locally heterogeneous but globally tonotopic
Vasquez-Lopez, Sebastian A; Weissenberger, Yves; Lohse, Michael; Keating, Peter; King, Andrew J
2017-01-01
Topographic representation of the receptor surface is a fundamental feature of sensory cortical organization. This is imparted by the thalamus, which relays information from the periphery to the cortex. To better understand the rules governing thalamocortical connectivity and the origin of cortical maps, we used in vivo two-photon calcium imaging to characterize the properties of thalamic axons innervating different layers of mouse auditory cortex. Although tonotopically organized at a global level, we found that the frequency selectivity of individual thalamocortical axons is surprisingly heterogeneous, even in layers 3b/4 of the primary cortical areas, where the thalamic input is dominated by the lemniscal projection. We also show that thalamocortical input to layer 1 includes collaterals from axons innervating layers 3b/4 and is largely in register with the main input targeting those layers. Such locally varied thalamocortical projections may be useful in enabling rapid contextual modulation of cortical frequency representations. PMID:28891466
Binary logic based purely on Fresnel diffraction
NASA Astrophysics Data System (ADS)
Hamam, H.; de Bougrenet de La Tocnaye, J. L.
1995-09-01
Binary logic operations on two-dimensional data arrays are achieved by use of the self-imaging properties of Fresnel diffraction. The fields diffracted by periodic objects can be considered as the superimposition of weighted and shifted replicas of original objects. We show that a particular spatial organization of the input data can result in logical operations being performed on these data in the considered diffraction planes. Among various advantages, this approach is shown to allow the implementation of dual-track, nondissipative logical operators. Image algebra is presented as an experimental illustration of this principle.
Support Routines for In Situ Image Processing
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean
2013-01-01
This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. The suite consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations; (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground; (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs, which is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera; (4) marscoordtrans: translates mosaic coordinates from one form into another; (5) marsdispcompare: checks a Left-to-Right stereo disparity image against a Right-to-Left disparity image to ensure they are consistent with each other; (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image (for example, a right-eye image can be transformed to look as if it were taken from the left eye); (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface, and this helps verify, or improve, the pointing of in situ cameras; (8) marsinvrange: the inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original; (9) marsproj: projects an XYZ coordinate through the camera model and reports the line/sample coordinates of the point in the image; (10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image; (11) marsrad: radiometrically corrects an image; (12) marsrelabel: updates coordinate system or camera model labels in an image; (13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations; (14) marsunmosaic: extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic, which is useful for creating simulated input frames using different camera models than the original mosaic used; and (15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; it can be used in other missions despite the name.
Authenticity preservation with histogram-based reversible data hiding and quadtree concepts.
Huang, Hsiang-Cheh; Fang, Wai-Chi
2011-01-01
With the widespread use of identification systems, establishing authenticity with sensors has become an important research issue. Among the schemes for making authenticity verification based on information security possible, reversible data hiding has attracted much attention during the past few years. Given its characteristic of reversibility, the scheme is required to fulfill goals from two aspects. On the one hand, at the encoder, the secret information needs to be embedded into the original image by some algorithm, such that the output image resembles the input one as much as possible. On the other hand, at the decoder, both the secret information and the original image must be correctly extracted and recovered, and they should be identical to their embedding counterparts. Under the requirement of reversibility, for evaluating the performance of a data hiding algorithm, the output image quality, termed imperceptibility, and the number of bits for embedding, called capacity, are the two key factors used to assess the effectiveness of the algorithm. Besides, the size of the side information needed to make decoding possible should also be evaluated. Here we consider using the characteristics of the original images to develop our method with better performance. In this paper, we propose an algorithm that provides more capacity than conventional algorithms, with similar output image quality after embedding and comparable side information produced. Simulation results demonstrate the applicability and better performance of our algorithm.
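For context, the classic histogram-shifting embedding on which histogram-based reversible schemes build can be sketched as follows; this is a generic textbook version, not the authors' improved algorithm, and it assumes the peak bin is not at the top of the intensity range:

    import numpy as np

    def embed_histogram_shift(img, bits):
        hist = np.bincount(img.ravel(), minlength=256)
        peak = int(np.argmax(hist))                        # most frequent value
        zero = int(np.argmin(hist[peak + 1:]) + peak + 1)  # empty (or rarest) bin
        out = img.astype(np.int32)
        out[(out > peak) & (out < zero)] += 1              # shift to make room
        flat = out.ravel()
        bit_iter = iter(bits)
        for i in np.flatnonzero(flat == peak):             # peak pixels carry bits
            try:
                flat[i] += next(bit_iter)                  # 0 -> peak, 1 -> peak+1
            except StopIteration:
                break
        # (peak, zero) is the side information the decoder needs for reversal.
        return flat.reshape(img.shape).astype(np.uint8), peak, zero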
The role of figure-ground segregation in change blindness.
Landman, Rogier; Spekreijse, Henk; Lamme, Victor A F
2004-04-01
Partial report methods have shown that a large-capacity representation exists for a few hundred milliseconds after a picture has disappeared. However, change blindness studies indicate that very limited information remains available when a changed version of the image is presented subsequently. What happens to the large-capacity representation? New input after the first image may interfere, but this is likely to depend on the characteristics of the new input. In our first experiment, we show that a display containing homogeneous image elements between changing images does not render the large-capacity representation unavailable. Interference occurs when these new elements define objects. On that basis we introduce a new method to produce change blindness: The second experiment shows that change blindness can be induced by redefining figure and background, without an interval between the displays. The local features (line segments) that defined figures and background were swapped, while the contours of the figures remained where they were. Normally, changes are easily detected when there is no interval. However, our paradigm results in massive change blindness. We propose that in a change blindness experiment, there is a large-capacity representation of the original image when it is followed by a homogeneous interval display, but that change blindness occurs whenever the changed image forces resegregation of figures from the background.
Maximising information recovery from rank-order codes
NASA Astrophysics Data System (ADS)
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
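A minimal sketch of the encoding stage described above (a DoG filter bank followed by rank ordering) may help; sign handling via separate on/off channels is omitted, and the look-up table is a simplified decaying model rather than the one used in the paper:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_coefficients(img, sigmas=(1, 2, 4, 8)):
        # Difference-of-Gaussian bank: one coefficient map per scale.
        return np.stack([gaussian_filter(img, s) - gaussian_filter(img, 1.6 * s)
                         for s in sigmas])

    def rank_order_encode(coeffs):
        # Keep only the order in which coefficient magnitudes "fire".
        return np.argsort(-np.abs(coeffs).ravel())

    def rank_order_decode(order, shape, lut):
        # Infer magnitudes from ranks alone via a look-up table,
        # e.g. lut = c_max * 0.99 ** np.arange(order.size).
        rec = np.zeros(np.prod(shape))
        rec[order] = lut[:order.size]
        return rec.reshape(shape)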
Yao, Jincao; Yu, Huimin; Hu, Roland
2017-01-01
This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.
Color pictorial serpentine halftone for secure embedded data
NASA Astrophysics Data System (ADS)
Curry, Douglas N.
1998-04-01
This paper introduces a new rotatable glyph shape for trusted printing applications that has excellent image rendering, data storage, and counterfeit deterrence properties. Referred to as a serpentine because it tiles into a meandering line screen, it can produce high-quality images independent of its ability to embed data. The halftone cell is constructed with hyperbolic curves to enhance its dynamic range, and generates low distortion because of rotational tone invariance with its neighbors. An extension to the process allows the data to be formatted into human-readable text patterns, viewable with a magnifying glass and therefore not requiring input scanning. The resultant embedded halftone patterns can be recognized as simple numbers (0-9) or alphanumerics (a-z). The pattern intensity can be offset from the surrounding image field intensity, producing a watermarking effect. We have been able to embed words such as 'original' or license numbers into the background halftone pattern of images, which can be readily observed in the original image and which conveniently disappear upon copying. We have also embedded machine-readable data blocks with self-clocking codes and error correction data. Finally, we have successfully printed full color images with both the embedded data and text, simulating a trusted printing application.
Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Sigal, Ian A.; Kagemann, Larry; Schuman, Joel S.
2015-01-01
Purpose. We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method. Methods. We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1–10) at the same visit. For each eye, the histogram of an image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping the input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as original and HM measurements), and compared to the device output (device measurements). Nonlinear mixed effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SSs, which gave the RNFL thickness within the variability margin of manufacturer recommended SS range (6–10), were determined for device, original, and HM measurements. Results. The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing. Conclusions. The HM method successfully extended the acceptable SS range on OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the OCT application to a wider range of subjects. PMID:26066749
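The histogram matching step itself is a standard operation; a minimal CDF-based sketch follows (the helper name is hypothetical, and this is the generic technique rather than the authors' exact processing chain):

    import numpy as np

    def match_histogram(source, reference):
        # Map source intensities so their CDF matches the reference CDF.
        s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                         return_inverse=True, return_counts=True)
        r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_cnt) / source.size
        r_cdf = np.cumsum(r_cnt) / reference.size
        # For each source quantile, take the reference value at that quantile.
        matched = np.interp(s_cdf, r_cdf, r_vals)
        return matched[s_idx].reshape(source.shape)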
Image enhancement by non-linear extrapolation in frequency space
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)
1998-01-01
An input image is enhanced to include spatial frequency components having frequencies higher than those in the input image. To this end, an edge map is generated from the input image using a high-band-pass filtering technique. An enhancing map is subsequently generated from the edge map, with this map having spatial frequencies exceeding the initial maximum spatial frequency of the input image. The enhancing map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhancing map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computation and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
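A minimal sketch of this idea, assuming a clipping nonlinearity as the phase-preserving operator (the patent's exact operator and parameters are not given here):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def extrapolate_high_frequencies(img, alpha=2.0, clip_frac=0.05):
        edges = img - gaussian_filter(img, 2.0)   # high-pass edge map
        limit = clip_frac * np.ptp(img)
        # Clipping is a zero-memory nonlinearity: it generates harmonics
        # above the input band while keeping the zero crossings (edge phase).
        return img + alpha * np.clip(edges, -limit, limit)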
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, the original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
NASA Astrophysics Data System (ADS)
Mabu, Shingo; Kido, Shoji; Hashimoto, Noriaki; Hirano, Yasushi; Kuremoto, Takashi
2018-02-01
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in computed tomography (CT) images. Because CT images are gray scale, a DCNN usually uses one channel for inputting image data. In contrast, this research uses a multi-channel DCNN where each channel corresponds to the original raw image or to an image transformed by some preprocessing technique. The information obtained from raw images alone is limited, and previous research has suggested that preprocessing of images contributes to improving classification accuracy; the combination of the original and preprocessed images is therefore expected to yield higher accuracy. The proposed method realizes region-of-interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacities (classes). The DCNN structure is based on the VGG network, which secured the first and second places in ImageNet ILSVRC-2014. The experimental results show that the classification accuracy of the proposed method was better than that of the conventional single-channel method, with a significant difference between them.
Prospects for Classifying Complex Imagery Using a Self-Organizing Neural Network
1989-01-11
The Neocognitron of Fukushima is examined for its prospects in classifying complex imagery. In his original report, Fukushima demonstrated that this system could discriminate between simple alphabetical characters... on a VAX-8600 minicomputer. Wire-frame models of three different vehicles were used to test the properties which Fukushima had demonstrated.
Lymphoma diagnosis in histopathology using a multi-stage visual learning approach
NASA Astrophysics Data System (ADS)
Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.
2016-03-01
This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H&E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features is extracted from 3 different spatial configurations. The visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained on ImageNet data. In total, over 200 visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of the over 200 descriptors; these are then input to a forward stepwise ensemble selection that optimizes a late-fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki
2016-01-01
The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in many state-of-the-art studies. However, the pixel values of all the images were simply digitized as relative density values by a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display-system input value of digital imaging and communications in medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format and then converted the pixel values to P-values using a program we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of images is maintained across displays of different luminance.
Programmable remapper for image processing
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)
1991-01-01
A video-rate coordinate remapper includes a memory for storing a plurality of transformations in look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for people with certain visual impairments. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data.
Gu, Ke; Tao, Dacheng; Qiao, Jun-Fei; Lin, Weisi
2018-04-01
In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. This paper presents two main contributions. The first is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness, and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples much larger than the relevant image data sets. Experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference, and NR IQA methods. The second contribution is a robust image enhancement framework established on the basis of quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images, and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Traffic analysis and control using image processing
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.
2017-11-01
This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. The concept compares captured images of the street with reference images in order to determine the traffic level percentage and to set the traffic signal timing accordingly, reducing stoppage at traffic lights. It proposes to address real-life scenarios in the streets by enriching traffic lights with image receivers such as HD cameras and image processors. The input is imported into MATLAB and used as the basis for calculating the traffic on the roads. The results are then computed in order to adjust the traffic-light timings on a particular street; compared with other similar proposals, this work offers the added value of addressing a real, large-scale instance.
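The paper works in MATLAB; as a language-neutral illustration of the core comparison step, the following sketch estimates a traffic level by differencing against an empty-road reference (the threshold and timing constants are hypothetical):

    import numpy as np

    def traffic_level(empty_road, current, thresh=30):
        # Percentage of pixels that differ noticeably from the empty-road reference.
        diff = np.abs(current.astype(np.int16) - empty_road.astype(np.int16))
        return 100.0 * (diff > thresh).mean()

    def green_time(level_percent, t_min=10.0, t_max=60.0):
        # Scale the green-light duration linearly with the estimated traffic level.
        return t_min + (t_max - t_min) * level_percent / 100.0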
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Zheng, Bin; Huang, Xia; Qian, Wei
2017-03-01
Deep learning is a promising trend in the medical image analysis area, but how to efficiently prepare the input images for deep learning algorithms remains a challenge. In this paper, we introduce a novel artificial multichannel region-of-interest (ROI) generation procedure for convolutional neural networks (CNN). From the LIDC database, we collected 54880 benign nodule samples and 59848 malignant nodule samples based on the radiologists' annotations. The proposed CNN consists of three pairs of convolutional layers and two fully connected layers. For each original ROI, two new ROIs were generated: one contains the segmented nodule, which highlights the nodule shape, and the other contains the gradient of the original ROI, which highlights the textures. By combining the three channel images into a pseudo-color ROI, the CNN was trained and tested on the new multichannel ROIs (multichannel ROI II). For comparison, we generated another type of multichannel image by replacing the gradient-image channel with an ROI containing a whitened background region (multichannel ROI I). With 5-fold cross-validation, the CNN using multichannel ROI II achieved an ROI-based area under the curve (AUC) of 0.8823+/-0.0177, compared to an AUC of 0.8484+/-0.0204 for the original ROI. By averaging the ROI scores from one nodule, the lesion-based AUC using the multichannel ROI was 0.8793+/-0.0210. Comparing the convolved feature maps from the CNN using different types of ROIs shows that multichannel ROI II contains more accurate nodule shapes and surrounding textures.
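A minimal sketch of the described three-channel ROI composition follows; the gradient operator and the normalization are assumptions, and the segmentation mask is taken as given:

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def multichannel_roi(roi, nodule_mask, sigma=1.0):
        seg = roi * nodule_mask                          # shape channel
        grad = gaussian_gradient_magnitude(roi, sigma)   # texture channel
        chans = [roi, seg, grad]
        chans = [(c - c.min()) / (np.ptp(c) + 1e-9) for c in chans]
        return np.stack(chans, axis=-1)                  # H x W x 3 pseudo-color ROI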
PIV-DCNN: cascaded deep convolutional neural networks for particle image velocimetry
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, Zhouping
2017-12-01
Velocity estimation (extracting the displacement vector information) from particle image pairs is of critical importance for particle image velocimetry. This problem is mostly transformed into finding the sub-pixel peak in a correlation map. To address the original displacement extraction problem, we propose a different evaluation scheme (PIV-DCNN) with four-level regression deep convolutional neural networks. At each level, the networks are trained to predict a vector from two input image patches. The low-level network is skilled at large displacement estimation, and the high-level networks are devoted to improving the accuracy. Outlier replacement and a symmetric window offset operation glue the well-functioning networks in a cascaded manner. Through comparison with standard PIV methods (a one-pass cross-correlation method and three-pass window deformation), the practicability of the proposed PIV-DCNN is verified by application to a diversity of synthetic and experimental PIV images.
Imaging light responses of foveal ganglion cells in the living macaque eye.
Yin, Lu; Masella, Benjamin; Dalkara, Deniz; Zhang, Jie; Flannery, John G; Schaffer, David V; Williams, David R; Merigan, William H
2014-05-07
The fovea dominates primate vision, and its anatomy and perceptual abilities are well studied, but its physiology has been little explored because of limitations of current physiological methods. In this study, we adapted a novel in vivo imaging method, originally developed in mouse retina, to explore foveal physiology in the macaque, which permits the repeated imaging of the functional response of many retinal ganglion cells (RGCs) simultaneously. A genetically encoded calcium indicator, G-CaMP5, was inserted into foveal RGCs, followed by calcium imaging of the displacement of foveal RGCs from their receptive fields, and their intensity-response functions. The spatial offset of foveal RGCs from their cone inputs makes this method especially appropriate for fovea by permitting imaging of RGC responses without excessive light adaptation of cones. This new method will permit the tracking of visual development, progression of retinal disease, or therapeutic interventions, such as insertion of visual prostheses.
An important step forward in continuous spectroscopic imaging of ionising radiations using ASICs
NASA Astrophysics Data System (ADS)
Fessler, P.; Coffin, J.; Eberlé, H.; de Raad Iseli, C.; Hilt, B.; Huss, D.; Krummenacher, F.; Lutz, J. R.; Prévot, G.; Renouprez, A.; Sigward, M. H.; Schwaller, B.; Voltolini, C.
1999-01-01
Characterization results are given for an original ASIC allowing continuous acquisition of ionising radiation images in spectroscopic mode. Ionising radiation imaging in general and spectroscopic imaging in particular must primarily be guided by the attempt to decrease statistical noise, which requires detection systems designed to allow very high counting rates. Any source of dead time must therefore be avoided. Thus, the use of on-line corrections of the inevitable dispersion of characteristics between the large number of electronic channels of the detection system, shall be precluded. Without claiming to achieve ultimate noise levels, the work described is focused on how to prevent good individual acquisition channel noise performance from being totally destroyed by the dispersion between channels without introducing dead times. With this goal, we developed an automatic charge amplifier output voltage offset compensation system which operates regardless of the cause of the offset (detector or electronic). The main performances of the system are the following: the input equivalent noise charge is 190 e rms (input non connected, peaking time 500 ns), the highest gain is 255 mV/fC, the peaking time is adjustable between 200 ns and 2 μs and the power consumption is 10 mW per channel. The agreement between experimental data and theoretical simulation results is excellent.
Supervised interpretation of echocardiograms with a psychological model of expert supervision
NASA Astrophysics Data System (ADS)
Revankar, Shriram V.; Sher, David B.; Shalin, Valerie L.; Ramamurthy, Maya
1993-07-01
We have developed a collaborative scheme that facilitates active human supervision of the binary segmentation of an echocardiogram. The scheme complements the reliability of a human expert with the precision of segmentation algorithms. In the developed system, an expert user compares the computer-generated segmentation with the original image in a user-friendly graphics environment and interactively indicates the incorrectly classified regions, either by pointing or by circling. The precise boundaries of the indicated regions are computed by studying the original image properties in those regions and a human visual-attention distribution map obtained from the published psychological and psychophysical research. We use the developed system to extract contours of heart chambers from a sequence of two-dimensional echocardiograms. We are currently extending this method to incorporate a richer set of inputs from the human supervisor, to facilitate multi-classification of image regions depending on their functionality. We are integrating into our system the knowledge-related constraints that cardiologists use, to improve the capabilities of our existing system. This extension involves developing a psychological model of expert reasoning, functional and relational models of typical views in echocardiograms, and corresponding interface modifications to map the suggested actions to image processing algorithms.
Lightness modification of color image for protanopia and deuteranopia
NASA Astrophysics Data System (ADS)
Tanaka, Go; Suetake, Noriaki; Uchino, Eiji
2010-01-01
In multimedia content, colors play important roles in conveying visual information. However, color information cannot always be perceived uniformly by all people. People with a color vision deficiency, such as dichromacy, cannot recognize and distinguish certain color combinations. In this paper, an effective lightness modification method, which enables barrier-free color vision for people with dichromacy, especially protanopia or deuteranopia, while preserving the color information in the original image for people with standard color vision, is proposed. In the proposed method, an optimization problem concerning lightness components is first defined by considering color differences in an input image. Then a perceptible and comprehensible color image for both protanopes and viewers with no color vision deficiency or both deuteranopes and viewers with no color vision deficiency is obtained by solving the optimization problem. Through experiments, the effectiveness of the proposed method is illustrated.
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
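A first-order version of the frame-construction step can be sketched as follows; the paper also uses higher interpolation orders and interpolates the tracked frame poses, both omitted here:

    import numpy as np

    def interpolate_frames(frame_a, frame_b, n_new=3):
        # Create n_new intermediate B-mode frames between two adjacent
        # original frames by linear (first-order) interpolation.
        ts = np.linspace(0.0, 1.0, n_new + 2)[1:-1]
        return [(1.0 - t) * frame_a + t * frame_b for t in ts]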
Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.
2016-03-01
A requirement for reconstructing photoacoustic (PA) images is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of signal processing steps: beamforming, followed by envelope detection, ending with log compression. It will, however, be defocused when PA signals are the input, owing to an incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recover the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier-frequency information. The US post-beamformed RF data are then treated as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focus depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and demonstrated experimentally using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
Global image registration using a symmetric block-matching approach
Modat, Marc; Cash, David M.; Daga, Pankaj; Winston, Gavin P.; Duncan, John S.; Ourselin, Sébastien
2014-01-01
Most medical image registration algorithms suffer from a directionality bias that has been shown to largely impact subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of nonlinear registration, but little work has been done for global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed square regression. The proposed method is suitable for multimodal registration and is robust to outliers in the input images. The symmetric framework is compared with the original asymmetric block-matching technique and is shown to outperform it in terms of accuracy and robustness. The methodology presented in this article has been made available to the community as part of the NiftyReg open-source package. PMID:26158035
Color image fusion for concealed weapon detection
NASA Astrophysics Data System (ADS)
Toet, Alexander
2003-09-01
Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context, due to the non-literal nature of these images. Especially for dynamic crowd surveillance purposes it may be impossible to rapidly assess with certainty which individual in the crowd is the one carrying the observed weapon. Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion, the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image. The result is a natural-looking color image that fluently combines all details from both input sources. When an observer who performs a dynamic crowd surveillance task detects a weapon in the scene, he will also be able to quickly determine which person in the crowd is actually carrying the observed weapon (e.g., "the man with the red T-shirt and blue jeans"). The method is illustrated by the fusion of thermal 8-12 μm imagery with standard RGB color images.
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins conventional chroma subsampling and distortion-minimization-based luma modification together, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, one 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding 2×2 original RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time, such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that, in high efficiency video coding, the proposed hybrid method substantially improves the quality of the reconstructed RGB images, in terms of CPSNR, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, when compared with existing chroma subsampling schemes.
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g., two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g., Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
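The heart of stage (1) is a plain EM update for a Gaussian mixture; a minimal 1-D sketch follows (bias-field correction and the MRF stage are omitted, and the initialization is illustrative):

    import numpy as np

    def em_gmm(x, k=3, n_iter=50, seed=0):
        # E-step: per-voxel class posteriors; M-step: re-estimate
        # means, variances, and mixture weights.
        rng = np.random.default_rng(seed)
        mu = rng.choice(x, k, replace=False)
        var = np.full(k, x.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E-step: r[n, j] proportional to pi_j * N(x_n | mu_j, var_j)
            r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
            r /= r.sum(axis=1, keepdims=True) + 1e-12
            # M-step: responsibility-weighted parameter updates
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
            pi = nk / len(x)
        return mu, var, pi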
Dixit, Sudeepa; Fox, Mark; Pal, Anupam
2014-01-01
Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze a MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to variation in volume measurements seen between three human observers. In conclusion, the image processing platform presented processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
Fractal-Based Image Analysis In Radiological Applications
NASA Astrophysics Data System (ADS)
Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.
1987-10-01
We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
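The paper estimates fractal dimension with the Pentland and "blanket" methods; as a simpler stand-in, a box-counting estimate on a binary mask illustrates the idea (it assumes a non-empty mask):

    import numpy as np

    def box_counting_dimension(mask):
        # Crop to the largest power-of-two square, count occupied boxes at
        # dyadic scales; the slope of log N(s) versus log(1/s) estimates
        # the fractal dimension.
        n = 2 ** int(np.log2(min(mask.shape)))
        mask = mask[:n, :n].astype(bool)
        sizes = 2 ** np.arange(int(np.log2(n)), 0, -1)
        counts = [mask.reshape(n // s, s, n // s, s).any(axis=(1, 3)).sum()
                  for s in sizes]
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
        return slope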
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into four blocks for compression and encryption; the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
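The key-size reduction rests on generating the whole measurement matrix from a few scalars. A minimal sketch of that idea, with the logistic-map parameters standing in for the key (the exact construction in the paper may differ):

    # Circulant measurement matrix whose first row is driven by a logistic map.
    import numpy as np

    def logistic_sequence(x0, mu, n):
        x = np.empty(n)
        for i in range(n):
            x0 = mu * x0 * (1.0 - x0)   # logistic map iteration
            x[i] = x0
        return x

    def circulant_measurement_matrix(key, m, n):
        x0, mu = key                                     # the compact key
        row = 2.0 * logistic_sequence(x0, mu, n) - 1.0   # map to [-1, 1]
        Phi = np.stack([np.roll(row, i) for i in range(m)])
        return Phi / np.sqrt(m)

    y = circulant_measurement_matrix((0.37, 3.99), 64, 128) @ np.random.rand(128)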
Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.
Schroder, Kai; Zinke, Arno; Klein, Reinhard
2015-02-01
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation, and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
Wu, Miao; Yan, Chuanbo; Liu, Huiqiang; Liu, Qian
2018-06-29
Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists to determine the diagnosis correctly. In our study, we employed a Deep Convolutional Neural Network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model separately on two groups of input data: the original images, and augmented images produced by image enhancement and image rotation. The testing results, obtained by 10-fold cross-validation, show that the classification accuracy improved from 72.76 to 78.20% when augmented images were used as training data. The developed scheme is useful for classifying ovarian cancers from cytological images. © 2018 The Author(s).
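A scaled AlexNet-style layout matching the layer counts given above (five convolutional, three max-pooling, two fully connected); the channel widths, input resolution and PyTorch framework are our assumptions, not details from the paper.

    import torch.nn as nn

    class OvarianCNN(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(),
                nn.MaxPool2d(3, stride=2),
                nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(),
                nn.MaxPool2d(3, stride=2),
                nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
                nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
                nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(3, stride=2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
                nn.Linear(4096, n_classes),   # 4 ovarian cancer types
            )

        def forward(self, x):   # x: (batch, 3, 227, 227)
            return self.classifier(self.features(x))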
Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula
2012-03-24
Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of the blood data from baseline scans was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated, using, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived POP-IF3M, the only input function not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over a scan duration of 60 min estimate net PET-FLT retention with reasonable accuracy.
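For readers unfamiliar with the modified Patlak approach, the sketch below shows the standard graphical estimate of Ki as the late-phase slope of the tissue-to-plasma ratio against "stretched time"; the variable names and equilibration time are illustrative assumptions.

    # Graphical Patlak estimate of the net retention parameter Ki.
    import numpy as np

    def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
        # cumulative trapezoidal integral of the plasma input function
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
        x = integral / c_plasma          # Patlak abscissa ("stretched time")
        y = c_tissue / c_plasma          # Patlak ordinate
        late = t >= t_star               # fit only the linear late phase
        ki, intercept = np.polyfit(x[late], y[late], 1)
        return ki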
Fast template matching with polynomials.
Omachi, Shinichiro; Omachi, Masako
2007-08-01
Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image itself. The proposed algorithm is especially effective when the width and height of the template image differ from those of the partial image to be matched. An algorithm using Legendre polynomials is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
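The core idea is that a least-squares Legendre fit stands in for the template. A one-dimensional sketch with NumPy (the paper works with two-dimensional templates and a more elaborate matching scheme):

    # Approximate an image profile by a Legendre expansion and reconstruct it.
    import numpy as np
    from numpy.polynomial import legendre as L

    profile = np.cos(np.linspace(0, 3, 64)) * 100 + 120   # stand-in image row
    x = np.linspace(-1, 1, profile.size)                  # Legendre domain
    coeffs = L.legfit(x, profile, deg=8)                  # least-squares fit
    approx = L.legval(x, coeffs)                          # evaluate expansion
    print(np.max(np.abs(approx - profile)))               # approximation error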
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Gao, Bing
2017-11-01
The integration of an edge-preserving filtering technique into the classification of a hyperspectral image (HSI) has been proven effective in enhancing classification performance. This paper proposes an ensemble strategy for HSI classification using an edge-preserving filter along with a deep learning model and edge detection. First, an adaptive guided filter is applied to the original HSI to reduce the noise in degraded images and to extract powerful spectral-spatial features. Second, the extracted features are fed as input to a stacked sparse autoencoder to adaptively exploit more invariant and deep feature representations; a random forest classifier is then applied to fine-tune the entire pretrained network and determine the classification output. Third, a Prewitt compass operator is performed on the HSI to extract the edges of the first principal component after dimension reduction. Moreover, a region-growing rule is applied to the resulting edge logical image to determine the local region for each unlabeled pixel. Finally, the categories of the corresponding neighborhood samples are determined in the original classification map, and a majority voting mechanism is implemented to generate the final output. Extensive experiments show that the proposed method achieves competitive performance compared with several traditional approaches.
SU-E-T-04: 3D Dose Based Patient Compensator QA Procedure for Proton Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, W; Reyhan, M; Zhang, M
2015-06-15
Purpose: In proton double-scattering radiotherapy, compensators are essential patient-specific devices that conform the distal dose distribution to the tumor target. Traditional compensator QA is limited to checking the drilled surface profiles against the plan. In our work, a compensator QA process was established that assesses the entire compensator, including its internal structure, for patient 3D dose verification. Methods: The fabricated patient compensators were CT scanned. Through mathematical image processing and geometric transformations, the CT images of the proton compensator were combined with the patient simulation CT images into a new series of CT images, in which the imaged compensator is placed at the planned location along the corresponding beam line. The new CT images were input into the Eclipse treatment planning system. The original plan was calculated on the combined CT image series without the plan compensator. The newly computed patient 3D dose from the combined patient-compensator images was verified against the original plan dose. Test plans included compensators with defects intentionally created inside them. Results: The calculated 3D dose from the combined compensator and patient CT images reflects the impact of the fabricated compensator on the patient. For the test cases in which no defects were created, the dose distributions were in agreement between our method and the corresponding original plans. For the compensators with defects, the purposely changed material and a purposely created internal defect were successfully detected, which is not possible with traditional compensator profile checks alone. Conclusion: We present here a 3D dose verification process to qualify the fabricated proton double-scattering compensator. This process assesses the 3D dosimetric impact on the patient of the fabricated compensator's surface profile as well as changes in its internal material and structure. This research receives funding support from CURA Medical Technologies.
NASA Astrophysics Data System (ADS)
Watanabe, Ryusuke; Muramatsu, Chisako; Ishida, Kyoko; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi
2017-03-01
Early detection of glaucoma is important to slow down progression of the disease and to prevent total vision loss. We have been studying an automated scheme for detection of a retinal nerve fiber layer defect (NFLD), one of the earliest signs of glaucoma on retinal fundus images. In our previous study, we proposed a multi-step detection scheme consisting of Gabor filtering, clustering, and adaptive thresholding. The problems of the previous method were that the number of false positives (FPs) was still large and that the method included too many rules. In an attempt to solve these problems, we investigated an end-to-end learning system without pre-specified features. A deep convolutional neural network (DCNN) with deconvolutional layers was trained to detect NFLD regions. In this preliminary investigation, we studied effective ways of preparing the input images and compared the detection results. The optimal result was then compared with that obtained by the previous method. DCNN training was carried out using original images of abnormal cases, original images of both normal and abnormal cases, ellipse-based polar transformed images, and transformed half images. The results showed that use of both normal and abnormal cases increased the sensitivity as well as the number of FPs. Although NFLDs are visualized with the highest contrast in the green plane, the use of color images provided higher sensitivity than the use of the green plane only. The free-response receiver operating characteristic curve using the transformed color images, which was the best among the seven different sets studied, was comparable to that of the previous method. Use of a DCNN has the potential to improve the generalizability of automated NFLD detection and may be useful in assisting glaucoma diagnosis on retinal fundus images.
A general prediction model for the detection of ADHD and Autism using structural and functional MRI.
Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G
2018-01-01
This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatially non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features is input to a linear SVM classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when only using imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving its speed is proposed in this paper. First, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Second, the LiveWire shortest path is calculated with a direction-guided search over the control-point set, utilizing the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n^2) to O(n). Finally, an iterative region backward-projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform and of the optimal path search based on the direction-guided control-point search: the former decomposes and reconstructs the image quickly and is consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. The algorithm therefore speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All the methods mentioned above contribute substantially to the execution efficiency and robustness of the algorithm.
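The first step above shrinks the search space by a factor of four per level. A sketch of that step with PyWavelets (the path search itself and the backward projection are not shown):

    # One-level 2-D Haar decomposition; trace the boundary on `low`, then invert.
    import numpy as np
    import pywt

    image = np.random.rand(512, 512)              # stand-in input image
    low, (lh, hl, hh) = pywt.dwt2(image, 'haar')  # low: ~256x256 approximation
    # ... run the LiveWire shortest-path search on `low` ...
    restored = pywt.idwt2((low, (lh, hl, hh)), 'haar')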
A summary of image segmentation techniques
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher-level vision tasks. There is no unified theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized into a number of different groups, including local vs. global, parallel vs. sequential, contextual vs. noncontextual, and interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions; a minimal sketch of this approach follows. Because a number of survey papers are available, we do not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview, focusing only on the more common approaches in order to give the reader a flavor for the variety of techniques available, yet presenting enough detail to facilitate implementation and experimentation.
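A minimal region-based segmenter of the kind just described, growing a seed while neighbors stay within a gray-level tolerance of the running region mean (the tolerance and 4-connectivity are illustrative choices):

    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=10.0):
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        total, count = float(img[seed]), 1        # running region statistics
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                        and abs(img[nr, nc] - total / count) <= tol):
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
        return mask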
Experimental Optoelectronic Associative Memory
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin
1992-01-01
An optoelectronic associative memory responds to an input image by displaying one of M remembered images. Which image to display is determined by optoelectronic analog computation of the resemblance between the input image and each remembered image. The device does not rely on precomputation and storage of an outer-product synapse matrix, and the size of the memory needed to store and process the images is reduced.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast of an input image. The algorithm uses a Gaussian mixture model of the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image correspond to homogeneous segments (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than those with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
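A sketch of the partitioning step only: fit a mixture to the gray levels and take the points where the dominant component changes as interval boundaries (the variance weighting and interval mapping described above are omitted):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gray_level_intervals(img, n_components=3):
        g = img.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_components).fit(g)
        levels = np.arange(256).reshape(-1, 1)      # 8-bit dynamic range
        dominant = gmm.predict(levels)              # most probable component
        return np.flatnonzero(np.diff(dominant))    # interval boundaries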
Image based SAR product simulation for analysis
NASA Technical Reports Server (NTRS)
Domik, G.; Leberl, F.
1987-01-01
SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image; this can be denoted "image-based simulation". Different methods of performing this SAR prediction are presented, and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.
Ultrasound breast imaging using frequency domain reverse time migration
NASA Astrophysics Data System (ADS)
Roy, O.; Zuberi, M. A. H.; Pratt, R. G.; Duric, N.
2016-04-01
Conventional ultrasonography reconstruction techniques, such as B-mode, are based on a simple wave propagation model derived from a high frequency approximation. Therefore, to minimize model mismatch, the central frequency of the input pulse is typically chosen between 3 and 15 megahertz. Despite the increase in theoretical resolution, operating at higher frequencies comes at the cost of lower signal-to-noise ratio. This ultimately degrades the image contrast and overall quality at higher imaging depths. To address this issue, we investigate a reflection imaging technique, known as reverse time migration, which uses a more accurate propagation model for reconstruction. We present preliminary simulation results as well as physical phantom image reconstructions obtained using data acquired with a breast imaging ultrasound tomography prototype. The original reconstructions are filtered to remove low-wavenumber artifacts that arise due to the inclusion of the direct arrivals. We demonstrate the advantage of using an accurate sound speed model in the reverse time migration process. We also explain how the increase in computational complexity can be mitigated using a frequency domain approach and a parallel computing platform.
Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is for frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method that works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pairs. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the need for large temporary memory is decreased. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom, Iran, demonstrate that the epipolar images are generated with sub-pixel accuracy.
Film grain synthesis and its application to re-graining
NASA Astrophysics Data System (ADS)
Schallauer, Peter; Mörzinger, Roland
2006-01-01
Digital film restoration and special effects compositing require more and more automatic procedures for movie re-graining. Missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain which is perceptually similar to a given grain template, which has high spatial and temporal variation, and which can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesises artificial grain based on an input grain template and composites it with the original image content. Due to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented, one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.
Optical to optical interface device
NASA Technical Reports Server (NTRS)
Oliver, D. S.; Vohl, P.; Nisenson, P.
1972-01-01
The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as input a scene illuminated by a noncoherent radiation source and providing as output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently read out optically by utilizing the electrostatic storage pattern to control an electro-optic light-modulating property of the PROM. The readout process is parallel, as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.
NASA Astrophysics Data System (ADS)
Kuwatani, T.; Toriumi, M.
2009-12-01
Recent advances in geophysical observation methodologies, such as seismic tomography, the seismic reflection method and the geomagnetic method, provide a large amount and a wide variety of data on the physical properties of the crust and upper mantle (e.g. Matsubara et al. (2008)). However, it has remained difficult to specify a rock type and its physical conditions, mainly because (1) available data usually carry considerable error and uncertainty, and (2) the physical properties of rocks are greatly affected by fluids and microstructures. Objective interpretation and quantitative evaluation of lithology and fluid-related structure require statistical analyses of integrated geophysical and geological data. Self-Organizing Maps (SOMs) are unsupervised artificial neural networks that map the input space into clusters in a topological form whose organization is related to trends in the input data (Kohonen 2001). SOMs are powerful neural network techniques for classifying and interpreting multiattribute data sets. Results of SOM classifications can be represented as 2D images, called feature maps, which illustrate the complexity and interrelationships among input data sets. Recently, some works have used SOMs to interpret multidimensional, non-linear, and highly noised geophysical data for purposes of geological prediction (e.g. Klose 2006; Tselentis et al. 2007; Bauer et al. 2008). This paper describes the application of SOM to the 3D velocity structure beneath the Japanese islands (e.g. Matsubara et al. 2008). From the obtained feature maps, we can specify the lithology and qualitatively evaluate the effect of fluid-related structures. Moreover, re-projection of the feature maps onto the 3D velocity structure resulted in detailed images of the structures within the plates: the Pacific plate and the Philippine Sea plate subducting beneath the Eurasian plate can be imaged more clearly than in the original P- and S-wave velocity structures. To achieve more precise prediction of lithology and structure, we will use additional input data sets, such as tomographic images of random velocity fluctuation (Takahashi et al. 2009) and b-value mapping data. Additionally, different kinds of data sets, including experimental and petrological results (e.g. Christensen 1991; Hacker et al. 2003), can be incorporated into our analyses.
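As a concrete illustration of the SOM step, the sketch below maps multi-attribute velocity data onto a 2-D feature map with the third-party minisom package; the input attributes are invented placeholders, not the study's data.

    import numpy as np
    from minisom import MiniSom

    data = np.random.rand(1000, 3)   # e.g. Vp, Vs, Vp/Vs per grid node
    som = MiniSom(10, 10, input_len=3, sigma=1.5, learning_rate=0.5)
    som.random_weights_init(data)
    som.train_random(data, num_iteration=5000)
    # best-matching unit on the 2-D feature map for every grid node
    nodes = np.array([som.winner(v) for v in data])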
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some way better than each of the input ones. In the case of multifocus fusion, the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, which one is best remains a common question. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating fusion results.
Face recognition: a convolutional neural-network approach.
Lawrence, S; Giles, C L; Tsoi, A C; Back, A D
1997-01-01
We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
A Baseline-Free Defect Imaging Technique in Plates Using Time Reversal of Lamb Waves
NASA Astrophysics Data System (ADS)
Hyunjo, Jeong; Sungjong, Cho; Wei, Wei
2011-06-01
We present an analytical investigation of baseline-free imaging of a defect in plate-like structures using the time reversal of Lamb waves. We first consider flexural wave (A0 mode) propagation in a plate containing a defect, and the reception and time-reversal processing of the output signal at the receiver. The received output signal is composed of two parts: a directly propagated wave and a wave scattered from the defect. The time reversal of these waves recovers the original input signal and produces two additional sidebands that contain time-of-flight information on the defect location. One of the sideband signals is then extracted as a pure defect signal. A defect localization image is then constructed with a beamforming technique based on the time-frequency analysis of the sideband signal for each transducer pair in a network of sensors. The simulation results show that the proposed scheme enables accurate, baseline-free imaging of a defect.
Optoelectronic associative memory
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor)
1993-01-01
An associative optical memory including an input spatial light modulator (SLM) in the form of an edge enhanced liquid crystal light valve (LCLV) and a pair of memory SLM's in the form of liquid crystal televisions (LCTV's) forms a matrix array of an input image which is cross correlated with a matrix array of stored images. The correlation product is detected and nonlinearly amplified to illuminate a replica of the stored image array to select the stored image correlating with the input image. The LCLV is edge enhanced by reducing the bias frequency and voltage and rotating its orientation. The edge enhancement and nonlinearity of the photodetection improves the orthogonality of the stored image. The illumination of the replicate stored image provides a clean stored image, uncontaminated by the image comparison process.
Detecting Waste Tire Sites Using Satellite Imagery
NASA Astrophysics Data System (ADS)
Quinlan, B.; Huybrechts, C.; Schmidt, C.; Skiles, J. W.
2005-12-01
Waste tire piles pose environmental threats in the form of toxic fires and potential insect habitat. Previous techniques used to locate tire piles have included California Highway Patrol aerial surveillance and location tips from stakeholders. The TIRe (Tire Identification from Reflectance) model was developed as part of a pilot project funded by the California Integrated Waste Management Board (CIWMB), a division of the California Environmental Protection Agency, and executed at NASA Ames Research Center's DEVELOP Program during the summer of 2005. The goal of the pilot project was to determine whether high-resolution satellite imagery could be used to locate waste tire disposal sites. The TIRe model, built in Leica Geosystems' ERDAS Imagine Model Builder, was created to automate the process of isolating tires in satellite imagery in two land cover types found in California. The sole geospatial data input to the TIRe model was Space Imaging IKONOS imagery. Once the imagery was processed through the TIRe model, less than 1% of the original image remained, consisting only of dark pixels containing tires or spectrally similar features. The output, a binary image, was overlaid on the original image for visual interpretation. The TIRe model successfully identified waste tire piles as small as 400 tires and should prove a valuable tool for the detection, monitoring, and remediation of waste tire sites.
[Parietal Cortices and Body Information].
Naito, Eiichi; Amemiya, Kaoru; Morita, Tomoyo
2016-11-01
Proprioceptive signals originating from skeletal muscles and joints contribute to the formation of both the human body schema and the body image. In this chapter, we introduce various types of bodily illusions that are elicited by proprioceptive inputs, and we discuss distinct functions implemented by different parietal cortices. First, we illustrate the primary importance of the motor network in the processing of proprioceptive (kinesthetic) signals originating from muscle spindles. Next, we argue that the right inferior parietal cortex, in concert with the inferior frontal cortex (both regions connected by the inferior branch of the superior longitudinal fasciculus-SLF III), may be involved in the conscious experience of body image. Further, we hypothesize other functions of distinct parietal regions: the association between internal hand motor representation with external object representation in the left inferior parietal cortex, visuo-kinesthetic processing in the bilateral posterior parietal cortices, and the integration of somatic signals from different body parts in the higher-order somatosensory parietal cortices. Our results indicate that a distinct parietal region, in concert with its anatomically and functionally connected frontal regions, probably plays specialized roles in the processing of body-related information.
Overview of the land analysis system (LAS)
Quirk, Bruce K.; Olseson, Lyndon R.
1987-01-01
The Land Analysis System (LAS) is a fully integrated digital analysis system designed to support remote sensing, image processing, and geographic information systems research. LAS is being developed through a cooperative effort between the National Aeronautics and Space Administration Goddard Space Flight Center and the U.S. Geological Survey Earth Resources Observation Systems (EROS) Data Center. LAS has over 275 analysis modules capable of performing input and output, radiometric correction, geometric registration, signal processing, logical operations, data transformation, classification, spatial analysis, nominal filtering, conversion between raster and vector data types, and display manipulation of image and ancillary data. LAS is currently implemented using the Transportable Applications Executive (TAE). While TAE was designed primarily to be transportable, it still provides the necessary components for a standard user interface, terminal handling, input and output services, display management, and intersystem communications. With TAE, the analyst uses the same interface to the processing modules regardless of the host computer or operating system. LAS was originally implemented at EROS on a Digital Equipment Corporation computer system under the Virtual Memory System (VMS) operating system with DeAnza displays and is presently being converted to run on a Gould PowerNode and Sun workstations under the Berkeley Software Distribution UNIX operating system.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper breaks the constraints of the original framework, which requires large training samples of identical size. The input images are shifted and cropped to generate sub-images of the same size, and dropout is then applied to the generated sub-images, increasing sample diversity and preventing overfitting. Proper subsets of the sub-image set are randomly selected such that each subset contains the same number of elements but no two subsets are identical. These proper subsets are used as input layers for the convolutional neural network. Through the convolutional layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test set and training set are obtained. In an experiment classifying red blood cells, white blood cells and calcium oxalate crystals in urine sediment, a classification accuracy of 97% or more was achieved.
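A sketch of the shift-and-crop augmentation described above, producing fixed-size sub-images from inputs of arbitrary size (the crop size, count, and rotation step are illustrative assumptions):

    import numpy as np

    def random_subimages(img, size=96, n=8, rng=None):
        rng = rng or np.random.default_rng()
        h, w = img.shape[:2]
        subs = []
        for _ in range(n):
            r = int(rng.integers(0, h - size + 1))   # random shift
            c = int(rng.integers(0, w - size + 1))
            sub = img[r:r + size, c:c + size]        # fixed-size crop
            if rng.random() < 0.5:
                sub = np.rot90(sub)                  # rotation augmentation
            subs.append(sub)
        return np.stack(subs)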
Proton probing of a relativistic laser interaction with near-critical plasma
NASA Astrophysics Data System (ADS)
Willingale, Louise; Zulick, C.; Thomas, A. G. R.; Maksimchuk, A.; Krushelnick, K.; Nilson, P. M.; Stoeckl, C.; Sangster, T. C.; Nazarov, W.
2014-10-01
The Omega EP laser (1000 J in 10 ps pulses) was used to investigate a relativistic-intensity laser interaction with near-critical density plasma, using a transverse proton beam to diagnose the large electromagnetic fields generated. A very low density foam target mounted in a washer provided the near-critical density conditions. The fields from a scaled, two-dimensional particle-in-cell simulation were input into a particle-tracking code to create simulated proton probe images. This allows us to understand the origins of the complex features in the experimental images, including a rapidly expanding sheath field, evidence for ponderomotive channeling, and fields at the foam-washer interface. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0002028.
Personal identification based on blood vessels of retinal fundus images
NASA Astrophysics Data System (ADS)
Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi
2008-03-01
Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in a database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. These results indicate that the proposed method has higher performance than other biometrics except DNA. For practical public use, a device that can capture retinal fundus images easily is needed. The proposed method is applicable not only to PI but also to a system that warns about misfiling of fundus images in medical facilities.
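The similarity measure is the standard normalized cross-correlation coefficient; a minimal sketch of the comparison step (the registration step is omitted, and the 0.6 threshold is an assumption):

    import numpy as np

    def similarity(vessels_a, vessels_b):
        a = vessels_a - vessels_a.mean()
        b = vessels_b - vessels_b.mean()
        return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

    def identify(input_img, references, threshold=0.6):
        scores = [similarity(input_img, ref) for ref in references]
        best = int(np.argmax(scores))
        return best if scores[best] >= threshold else None   # None: no match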
Du, Yuncheng; Budman, Hector M; Duever, Thomas A
2016-06-01
Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level-set-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier trained to distinguish between normal and apoptotic viable states of CHO cell images. Using morphological features obtained from the stochastic level set segmentation in combination with the trained SVM classifier yields higher differentiation accuracy than the original deterministic level set method.
Single-Frame Terrain Mapping Software for Robotic Vehicles
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.
2011-01-01
This software is a component in an unmanned ground vehicle (UGV) perception system that builds compact, single-frame terrain maps for distribution to other systems, such as a world model or an operator control unit, over a local area network (LAN). Each cell in the map encodes an elevation value, terrain classification, object classification, terrain traversability, terrain roughness, and a confidence value into four bytes of memory. The input to this software component is a range image (from a lidar or stereo vision system), and optionally a terrain classification image and an object classification image, both registered to the range image. The single-frame terrain map generates estimates of the support surface elevation, ground cover elevation, and minimum canopy elevation; generates terrain traversability cost; detects low overhangs and high-density obstacles; and can perform geometry-based terrain classification (ground, ground cover, unknown). A new origin is automatically selected for each single-frame terrain map in global coordinates such that it coincides with the corner of a world map cell. That way, single-frame terrain maps correctly line up with the world map, facilitating the merging of map data into the world map. Instead of using 32 bits to store the floating-point elevation for a map cell, the map origin elevation is set to the vehicle elevation, and each cell reports the change in elevation (from the origin elevation) as a number of discrete steps. The single-frame terrain map elevation resolution is 2 cm. At that resolution, terrain elevation from −20.5 to 20.5 m (with respect to the vehicle's elevation) is encoded in 11 bits. For each four-byte map cell, bits are assigned to encode elevation, terrain roughness, terrain classification, object classification, terrain traversability cost, and a confidence value. The vehicle's current position and orientation, the map origin, and the map cell resolution are all included in a header for each map. The map is compressed into a vector prior to delivery to another system.
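A sketch of packing one cell into four bytes: the 11-bit, 2-cm elevation field follows the text, while the widths chosen for the remaining fields are our assumptions, since only their presence is specified.

    def pack_cell(delta_elev_m, terrain_cls, object_cls, cost, rough, conf):
        steps = int(round(delta_elev_m / 0.02)) + 1024  # 11-bit offset binary
        assert 0 <= steps < 2048
        word = steps                          # bits 0-10: elevation steps
        word |= (terrain_cls & 0x7) << 11     # 3 bits: terrain class
        word |= (object_cls & 0x7) << 14      # 3 bits: object class
        word |= (cost & 0xFF) << 17           # 8 bits: traversability cost
        word |= (rough & 0x3F) << 25          # 6 bits: roughness
        word |= (conf & 0x1) << 31            # 1 bit: confidence flag
        return word.to_bytes(4, 'little')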
Phototaxis and the origin of visual eyes
Randel, Nadine
2016-01-01
Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725
Content-based retrieval of historical Ottoman documents stored as textual images.
Saykol, Ediz; Sinop, Ali Kemal; Güdükbay, Ugur; Ulusoy, Ozgür; Cetin, A Enis
2004-03-01
There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. The availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. In this paper, a framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document; the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. Features in the wavelet and spatial domains, based on the angular and distance span of shapes, are used to extract the symbols. To perform content-based retrieval in historical archives, a query is specified as a rectangular region in an input image, and the same symbol-extraction process is applied to the query region. The queries are processed on the codebook of documents, and the query images are identified in the resulting documents using the pointers in the textual images. The querying process does not require decompression of images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.
Image processing tool for automatic feature recognition and quantification
Chen, Xing; Stoddard, Ryan J.
2017-05-02
A system for defining structures within an image is described. The system includes reading of an input file, preprocessing the input file while preserving metadata such as scale information and then detecting features of the input file. In one version the detection first uses an edge detector followed by identification of features using a Hough transform. The output of the process is identified elements within the image.
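A hedged sketch of the detection sequence named above, with OpenCV's Canny detector and probabilistic Hough transform standing in for the patented pipeline (the file name and thresholds are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)        # binary edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=30, maxLineGap=5)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            print('feature segment:', (x1, y1), (x2, y2))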
Origins Space Telescope: Telescope Design and Instrument Specifications
NASA Astrophysics Data System (ADS)
Meixner, Margaret; Carter, Ruth; Leisawitz, David; Dipirro, Mike; Flores, Anel; Staguhn, Johannes; Kellog, James; Roellig, Thomas L.; Melnick, Gary J.; Bradford, Charles; Wright, Edward L.; Zmuidzinas, Jonas; Origins Space Telescope Study Team
2017-01-01
The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, one of the four science and technology definition studies commissioned by NASA Headquarters for the 2020 Astronomy and Astrophysics Decadal Survey. The renaming of the mission reflects Origins science goals that will discover and characterize the most distant galaxies, nearby galaxies and the Milky Way, exoplanets, and the outer reaches of our Solar System. This poster shows the preliminary telescope design: a large-aperture (>8 m in diameter), cryogenically cooled telescope. We also present the specifications for the spectrographs and imagers over a potential wavelength range of ~10 microns to 1 millimeter. We look forward to community input into this mission definition over the coming year as we work on the concept design for the mission. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. We welcome you to contact the Science and Technology Definition Team (STDT) with your science needs and ideas by emailing us at firsurveyor_info@lists.ipac.caltech.edu.
The origins of thalamic inputs to grasp zones in frontal cortex of macaque monkeys.
Gharbawie, Omar A; Stepniewska, Iwona; Kaas, Jon H
2016-07-01
The hand representation in primary motor cortex (M1) is instrumental to manual dexterity in primates. In Old World monkeys, rostral and caudal aspects of the hand representation are located in the precentral gyrus and the anterior bank of the central sulcus, respectively. We previously reported the organization of the cortico-cortical connections of the grasp zone in rostral M1. Here we describe the organization of thalamocortical connections that were labeled from the same tracer injections. Thalamocortical connections of a grasp zone in ventral premotor cortex (PMv) and the M1 orofacial representation are included for direct comparison. The M1 grasp zone was primarily connected with ventral lateral divisions of motor thalamus. The largest proportion of inputs originated in the posterior division (VLp), followed by the medial and the anterior divisions. Thalamic inputs to the M1 grasp zone originated in more lateral aspects of VLp as compared to the origins of thalamic inputs to the M1 orofacial representation. Inputs to M1 from thalamic divisions connected with the cerebellum constituted three times the density of inputs from divisions connected with the basal ganglia, whereas the ratio of inputs was more balanced for the grasp zone in PMv. Privileged access of the cerebellothalamic pathway to the grasp zone in rostral M1 is consistent with the connection patterns previously reported for the precentral gyrus. Thus, cerebellar nuclei are likely more involved than basal ganglia nuclei in the contributions of rostral M1 to manual dexterity.
NASA Astrophysics Data System (ADS)
Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi
2004-06-01
We have proposed an all-optical authentic memory based on a two-wave encryption method. In the recording process, the image data are encrypted to white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Our memory therefore has the merit that unauthorized access can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays an important role in transforming the input image into white noise and in preventing decryption of that white noise back to the input image by blind deconvolution. Without this mask, if unauthorized users observed the output beam with a CCD during readout with a plane wave, they would obtain exactly the intensity distribution of the Fourier transform of the input image, and the encrypted image could then easily be decrypted by blind deconvolution. With the mask, even if unauthorized users observe the output beam in the same way, the encrypted image cannot be decrypted, because the observed intensity distribution is dispersed at random by the mask. The mask thus increases the robustness of the scheme. In this report, we compare the correlation coefficients between the output image and the input image, which represent the degree to which the output is white noise, with and without the mask. We show that the robustness of the encryption method is increased, as the correlation coefficient improves from 0.3 to 0.1 when the mask is used.
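A numerical analogue of the role the input-plane mask plays is double random phase encoding; this sketch is not the authors' holographic setup, but it shows how the input mask turns the image into white noise that only the conjugate masks can undo:

    import numpy as np

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0        # stand-in image

    mask_in = np.exp(2j * np.pi * rng.random(img.shape))     # input-plane mask
    mask_f = np.exp(2j * np.pi * rng.random(img.shape))      # Fourier-plane mask

    encrypted = np.fft.ifft2(np.fft.fft2(img * mask_in) * mask_f)
    # decryption with the conjugate masks recovers the original image
    decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * mask_f.conj())
    recovered = np.abs(decrypted / mask_in)
    print(np.allclose(recovered, img, atol=1e-10))           # True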
Lateral Spread of Orientation Selectivity in V1 is Controlled by Intracortical Cooperativity
Chavane, Frédéric; Sharon, Dahlia; Jancke, Dirk; Marre, Olivier; Frégnac, Yves; Grinvald, Amiram
2011-01-01
Neurons in the primary visual cortex receive subliminal information originating from the periphery of their receptive fields (RF) through a variety of cortical connections. In the cat primary visual cortex, long-range horizontal axons have been reported to preferentially bind to distant columns of similar orientation preferences, whereas feedback connections from higher visual areas provide a more diverse functional input. To understand the role of these lateral interactions, it is crucial to characterize their effective functional connectivity and tuning properties. However, the overall functional impact of cortical lateral connections, whatever their anatomical origin, is unknown since it has never been directly characterized. Using direct measurements of postsynaptic integration in cat areas 17 and 18, we performed multi-scale assessments of the functional impact of visually driven lateral networks. Voltage-sensitive dye imaging showed that local oriented stimuli evoke an orientation-selective activity that remains confined to the cortical feedforward imprint of the stimulus. Beyond a distance of one hypercolumn, the lateral spread of cortical activity gradually lost its orientation preference approximated as an exponential with a space constant of about 1 mm. Intracellular recordings showed that this loss of orientation selectivity arises from the diversity of converging synaptic input patterns originating from outside the classical RF. In contrast, when the stimulus size was increased, we observed orientation-selective spread of activation beyond the feedforward imprint. We conclude that stimulus-induced cooperativity enhances the long-range orientation-selective spread. PMID:21629708
Estimating evaporation with thermal UAV data and two-source energy balance models
NASA Astrophysics Data System (ADS)
Hoffmann, H.; Nieto, H.; Jensen, R.; Guzinski, R.; Zarco-Tejada, P.; Friborg, T.
2016-02-01
Estimating evaporation is important when managing water resources and cultivating crops. Evaporation can be estimated using land surface heat flux models and remotely sensed land surface temperatures (LST), which have recently become obtainable at very high resolution using lightweight thermal cameras and Unmanned Aerial Vehicles (UAVs). In this study a thermal camera mounted on a UAV was applied to the estimation of surface heat fluxes by concatenating thermal images into mosaics of LST and using these as input for the two-source energy balance (TSEB) modelling scheme. Thermal images were obtained with a fixed-wing UAV overflying a barley field in western Denmark during the growing season of 2014, and a spatial resolution of 0.20 m was obtained in the final LST mosaics. Two models are used: the original TSEB model (TSEB-PT) and a dual-temperature-difference (DTD) model. In contrast to the TSEB-PT model, the DTD model accounts for the bias that is likely present in remotely sensed LST. TSEB-PT and DTD have already been well tested, however only during sunny weather conditions and with satellite images serving as thermal input. The aim of this study is to assess whether a lightweight thermal camera mounted on a UAV can provide data of sufficient quality to serve as model input and thus attain accurate surface energy heat fluxes at high spatial and temporal resolution, with special focus on the latent heat flux (evaporation). Furthermore, this study evaluates the performance of the TSEB scheme during cloudy and overcast weather conditions, which is feasible because of the low flying altitude of the UAV, whereas satellite thermal data are available only under clear-sky conditions. TSEB-PT and DTD fluxes are compared and validated against eddy covariance measurements, and the comparison shows that both TSEB-PT and DTD simulations are in good agreement with the eddy covariance measurements, with DTD obtaining the best results. The DTD model provides results comparable to studies estimating evaporation with similar experimental setups, but with LST retrieved from satellites instead of a UAV. Further, systematic irrigation patterns on the barley field provide confidence in the veracity of the spatially distributed evaporation revealed by the model output maps. Lastly, this study outlines and discusses the thermal UAV image processing that results in mosaics suited for model input. This study shows that the UAV platform and the lightweight thermal camera provide high spatial and temporal resolution data valid for model input and for other potential applications requiring high-resolution and consistent LST.
Aryanto, Kadek Y E; Broekema, André; Oudkerk, Matthijs; van Ooijen, Peter M A
2012-01-01
To present an adapted Clinical Trial Processor (CTP) test set-up for receiving, anonymising and saving Digital Imaging and Communications in Medicine (DICOM) data using external input from the original database of an existing clinical study information system to guide the anonymisation process. Two methods are presented for an adapted CTP test set-up. In the first method, images are pushed from the Picture Archiving and Communication System (PACS) using the DICOM protocol through a local network. In the second method, images are transferred through the internet using the HTTPS protocol. In total 25,000 images from 50 patients were moved from the PACS, anonymised and stored within roughly 2 h using the first method. In the second method, an average of 10 images per minute were transferred and processed over a residential connection. In both methods, no duplicated images were stored when previous images were retransferred. The anonymised images are stored in appropriate directories. The CTP can transfer and process DICOM images correctly in a very easy set-up providing a fast, secure and stable environment. The adapted CTP allows easy integration into an environment in which patient data are already included in an existing information system.
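The core idea, replacing identifying DICOM tags with a pseudonym looked up from an external study database, can be sketched compactly; the snippet below is a Python/pydicom stand-in (CTP itself is a Java application), and the mapping dict and file names are hypothetical placeholders.

```python
# Minimal stand-in for database-guided DICOM anonymisation (Python/pydicom
# illustration, not the Java-based CTP itself): identifying tags are replaced
# by a pseudonym looked up from an external table standing in for the
# clinical study database.
import pydicom

pseudonyms = {"Doe^John": "SUBJ-0001"}                    # external DB stand-in

def anonymise(path_in, path_out):
    ds = pydicom.dcmread(path_in)
    ds.PatientName = pseudonyms.get(str(ds.PatientName), "SUBJ-UNKNOWN")
    ds.PatientID = str(ds.PatientName)                    # reuse the pseudonym
    for keyword in ("PatientBirthDate", "PatientAddress", "OtherPatientIDs"):
        if keyword in ds:                                 # drop identifying tags
            delattr(ds, keyword)
    ds.remove_private_tags()                              # drop vendor tags
    ds.save_as(path_out)

anonymise("input.dcm", "anon.dcm")                        # placeholder paths
```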
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
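Numerically, what the 4f geometry computes is the convolution theorem: multiplying the two images' spectra in the common Fourier plane and transforming back yields their convolution. The sketch below verifies this with FFTs; in the authors' scheme the product in the Fourier plane is performed optically by the EIT medium.

```python
# Convolution-theorem sketch of the 4f computation: the product of the two
# spectra, transformed back, equals the (circular) convolution of the images.
import numpy as np

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
conv = np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Spot check against a direct circular convolution at one output pixel.
i, j = 5, 7
direct = sum(a[m, n] * b[(i - m) % 64, (j - n) % 64]
             for m in range(64) for n in range(64))
assert np.isclose(conv[i, j], direct)
```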
Online image classification under monotonic decision boundary constraint
NASA Astrophysics Data System (ADS)
Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong
2015-01-01
Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device that comprises a printer and scanner and can be used to scan, copy, and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to its corresponding processing pipeline. Online SVM training can improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before making a final decision. These two constraints, online SVM and quick decision, raise questions regarding: 1) what features are suitable for classification; and 2) how we should control the decision boundary in online SVM training. This paper discusses the compatibility of online SVM and quick-decision capability.
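The online-SVM idea can be sketched with scikit-learn's SGDClassifier, whose hinge loss gives a linear SVM trained incrementally; the 16-dimensional feature vector per scanned strip below is a hypothetical placeholder for the paper's features.

```python
# Hedged sketch of online SVM training with incremental updates
# (SGDClassifier with hinge loss = linear SVM); features are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1, 2])            # e.g. text / photo / mixed pipelines
clf = SGDClassifier(loss="hinge")

rng = np.random.default_rng(1)
for batch in range(20):                  # images arriving over time
    X = rng.random((8, 16))              # features from partially scanned strips
    y = rng.integers(0, 3, size=8)
    clf.partial_fit(X, y, classes=classes)   # online update, no full retrain

# "Quick decision": classify from the first scanned strips only.
early_features = rng.random((1, 16))
print(clf.predict(early_features))
```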
The effect of input data transformations on object-based image analysis
LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.
2011-01-01
The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Pluim, Josien P. W.
2017-02-01
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
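A compact PyTorch sketch can make the patch-to-error-norm mapping concrete; the architecture details below (filter counts, 33-pixel patches) are assumptions for illustration, since they are not fixed in this abstract.

```python
# Minimal sketch (assumed architecture details): a small CNN regresses the
# registration-error norm from a 2-channel patch, one channel per registered
# image, around each pixel.
import torch
import torch.nn as nn

class ErrorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)       # scalar error norm, in pixels

    def forward(self, patches):            # patches: (N, 2, 33, 33)
        return self.head(self.features(patches).flatten(1))

net = ErrorNet()
# Training pairs come from random synthetic deformations, so the residual
# deformation norm is known without manual labels.
patches = torch.randn(4, 2, 33, 33)
target = torch.rand(4, 1) * 8              # errors up to eight pixels
loss = nn.functional.mse_loss(net(patches), target)
loss.backward()
```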
Automatic archaeological feature extraction from satellite VHR images
NASA Astrophysics Data System (ADS)
Jahjah, Munzer; Ulivieri, Carlo
2010-05-01
Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) requirements. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in the visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI feature-extraction module, were used in order to compare the results. These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and in Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the second Gulf War. The results differed, since the operative scale of the sensed data determines the final result of the processing and the quality of the output information, and each technique was sensitive to specific shapes in the input image. We mapped linear and nonlinear objects, updated the archaeological cartography, and performed automatic change detection analysis for the Babylon site. The discussion of these techniques has the objective of providing the archaeological team with new instruments for the orientation and planning of a remote sensing application.
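The structuring-element probing described above translates directly into code; the sketch below uses scikit-image to show the two tests (inclusion for erosion, intersection for dilation) and how an opening removes structures smaller than the element.

```python
# Structuring-element probing with scikit-image: erosion keeps positions
# where the element fits inside the set, dilation keeps positions where it
# intersects the set, and an opening (erosion then dilation) removes
# structures smaller than the element.
import numpy as np
from skimage.morphology import disk, erosion, dilation, opening

img = np.zeros((100, 100), dtype=np.uint8)
img[40:60, 10:90] = 1                     # a wall-like linear structure
img[20, 20] = 1                           # a single-pixel artefact

selem = disk(3)                           # size chosen to match target shapes
print(erosion(img, selem).sum())          # element fits: only interior survives
print(dilation(img, selem).sum())         # element intersects: the set grows
print(opening(img, selem)[20, 20])        # 0: artefact smaller than the element
```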
Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro
2018-05-01
CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires an arterial input function, which involves an invasive procedure. The aim of the present study was to develop a non-invasive approach using image-derived input functions (IDIFs) obtained from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula that expresses the input in terms of a tissue curve and rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences between the inputs expressed from the individual tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences in the calculated CBF, OEF, and CMRO2 values against the invasive method were small (<10%), and the values showed tight correlations (r = 0.97). Simulation showed that errors associated with the assumed parameters were less than ~10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of a non-invasive technique to assess CBF, OEF, and CMRO2.
Optimal Filter Estimation for Lucas-Kanade Optical Flow
Sharmin, Nusrat; Brad, Remus
2012-01-01
Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation, and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied at the initial level to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different filtering methods applied to the iteratively refined Lucas-Kanade algorithm, we identified the best filtering practice. Since the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Testing on the Middlebury image sequences established a correlation between the image intensity value and the standard deviation of the Gaussian function. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
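The pre-filtering practice can be sketched with OpenCV: Gaussian-smooth both frames, then run pyramidal Lucas-Kanade. The intensity-dependent sigma below is a hypothetical stand-in for the paper's empirical relation, and the frame file names are placeholders.

```python
# Hedged sketch: Gaussian pre-filtering before pyramidal Lucas-Kanade.
# sigma = f(mean intensity) is a placeholder for the paper's empirical rule.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

sigma = 0.5 + prev.mean() / 255.0          # hypothetical intensity-based sigma
prev_s = cv2.GaussianBlur(prev, (5, 5), sigma)
curr_s = cv2.GaussianBlur(curr, (5, 5), sigma)

p0 = cv2.goodFeaturesToTrack(prev_s, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_s, curr_s, p0, None,
                                           winSize=(21, 21), maxLevel=3)
flow = (p1 - p0)[status.ravel() == 1]      # per-feature motion vectors
print(np.median(np.linalg.norm(flow, axis=-1)))
```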
Quantitative assessment of multiple sclerosis lesion load using CAD and expert input
NASA Astrophysics Data System (ADS)
Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.
2008-03-01
Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL), performed on magnetic resonance (MR) images, is clinically useful and provides information about development and change reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present it seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL quantification strategies. We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to the original images according to the current DICOM standard. The system can also display and track changes and compare a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from LL quantities in the collected exams show good correlation between CAD-derived results and those incorporated in the expert's reading. Combining the CAD approach with expert interaction may impact the diagnostic work-up of MS patients through improved reproducibility of LL assessment and reduced reading time for single MR or comparative exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference for MS progression tracking.
Vector generator scan converter
Moore, James M.; Leighton, James F.
1990-01-01
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter, which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image, and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirtyfold.
Using false colors to protect visual privacy of sensitive content
NASA Astrophysics Data System (ADS)
Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj
2015-03-01
Many privacy protection tools have been proposed, but the tools available today for protecting visual privacy lack some or all of the important properties expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image onto a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the FERET public dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
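The pipeline (point transform, palette look-up, optional blending with the original) is a few lines of OpenCV; the gamma-style compressor and JET palette below are illustrative choices, not the paper's DEF/RBS scales, and the file names are placeholders.

```python
# Sketch of false-colour privacy protection: a compressive point transform
# of the grey levels, a colour look-up table, then optional blending with
# the original to tune the privacy/intelligibility trade-off.
import cv2
import numpy as np

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)          # placeholder file

compressed = (255 * (face / 255.0) ** 0.4).astype(np.uint8)  # point transform
false_col = cv2.applyColorMap(compressed, cv2.COLORMAP_JET)  # palette LUT

alpha = 0.3                                                  # 0 = full protection
orig_bgr = cv2.cvtColor(face, cv2.COLOR_GRAY2BGR)
adjusted = cv2.addWeighted(orig_bgr, alpha, false_col, 1 - alpha, 0)
cv2.imwrite("protected.png", adjusted)
```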
NASA Astrophysics Data System (ADS)
Munshi, Soumika; Datta, A. K.
2003-03-01
A technique for optically detecting the edge and skeleton of an image by defining shift operations for morphological transformation is described. A (2 × 2) source array, which acts as the structuring element of the morphological operations, casts four angularly shifted optical projections of the input image. The resulting dilated image, when superimposed with the complementary input image, produces the edge image. For skeletonization, the source array casts four partially overlapped output images of the inverted input image, and the resultant image is recorded by a CCD camera. This overlapped eroded image is again eroded and then dilated, producing an opened image. The difference between the eroded and opened images is then computed, resulting in a thinner image. This procedure of obtaining a thinned image is iterated, maintaining the connectivity conditions, until the difference image becomes zero. The technique has been optically implemented using a single spatial light modulator and has the advantage of single-instruction parallel processing of the image. The technique has been tested for both binary and grey-scale images.
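The iterative erode/open/difference loop has a well-known digital counterpart (a Lantuéjoul-style morphological skeleton); the sketch below is that digital analogue, not the optical shift-based implementation.

```python
# Digital analogue of the iterative skeletonisation loop: repeatedly erode,
# keep at each scale the difference between the eroded image and its
# opening, and stop when erosion empties the image.
import numpy as np
from skimage.morphology import square, erosion, dilation

img = np.zeros((64, 64), dtype=bool)
img[20:44, 20:44] = True                    # toy binary input

selem = square(3)
skeleton = np.zeros_like(img)
eroded = img.copy()
while eroded.any():
    opened = dilation(erosion(eroded, selem), selem)   # opening
    skeleton |= eroded & ~opened            # residue kept at this scale
    eroded = erosion(eroded, selem)         # next iteration

print(skeleton.sum())
```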
Halliwell, Emily R; Jones, Linor L; Fraser, Matthew; Lockley, Morag; Hill-Feltham, Penelope; McKay, Colette M
2015-06-01
A study was conducted to determine whether modifications to input compression and input frequency response characteristics can improve music-listening satisfaction in cochlear implant users. Experiment 1 compared three pre-processed versions of music and speech stimuli in a laboratory setting: original, compressed, and flattened frequency response. Music excerpts comprised three music genres (classical, country, and jazz), and a running speech excerpt was compared. Experiment 2 implemented a flattened input frequency response in the speech processor program. In a take-home trial, participants compared unaltered and flattened frequency responses. Ten and twelve adult Nucleus Freedom cochlear implant users participated in Experiments 1 and 2, respectively. Experiment 1 revealed a significant preference for music stimuli with a flattened frequency response compared to both original and compressed stimuli, whereas there was a significant preference for the original (rising) frequency response for speech stimuli. Experiment 2 revealed no significant mean preference for the flattened frequency response, with 9 of 11 subjects preferring the rising frequency response. Input compression did not alter music enjoyment. Comparison of the two experiments indicated that individual frequency response preferences may depend on the genre or familiarity, and particularly whether the music contained lyrics.
Zhu, Ying
2016-01-01
Individual neurons in several sensory systems receive synaptic inputs organized according to subcellular topographic maps, yet the fine structure of this topographic organization and its relation to dendritic morphology have not been studied in detail. Subcellular topography is expected to play a role in dendritic integration, particularly when dendrites are extended and active. The lobula giant movement detector (LGMD) neuron in the locust visual system is known to receive topographic excitatory inputs on part of its dendritic tree. The LGMD responds preferentially to objects approaching on a collision course and is thought to implement several interesting dendritic computations. To study the fine retinotopic mapping of visual inputs onto the excitatory dendrites of the LGMD, we designed a custom microscope allowing visual stimulation at the native sampling resolution of the locust compound eye while simultaneously performing two-photon calcium imaging on excitatory dendrites. We show that the LGMD receives a distributed, fine retinotopic projection from the eye facets and that adjacent facets activate overlapping portions of the same dendritic branches. We also demonstrate that adjacent retinal inputs most likely make independent synapses on the excitatory dendrites of the LGMD. Finally, we show that the fine topographic mapping can be studied using dynamic visual stimuli. Our results reveal the detailed structure of the dendritic input originating from individual facets on the eye and their relation to that of adjacent facets. The mapping of visual space onto the LGMD's dendrites is expected to have implications for dendritic computation. PMID:27009157
Wang, Quanxin; Burkhalter, Andreas
2013-01-23
Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.
Selection of embryogenic sugarcane callus by image analysis.
Honda, H; Ito, T; Yamada, J; Hanai, T; Matsuoka, M; Kobayashi, T
1999-01-01
In the cultivation of plant calli on solid media, two kinds of calli are often obtained: compact calli, which form bright yellow clumps, and friable calli, which form whitish clumps. Distinguishing these calli is of great importance in the regeneration step. An image analysis system with a Charge Coupled Device (CCD) camera and microscopy was used to distinguish sugarcane calli. The original images of compact and friable calli were input to a computer via an image analysis board. First, the brightnesses of the trichromatic colors red (R), green (G), and blue (B) of each pixel were extracted and the average brightness value for each color was calculated. From these trichromatic values alone, compact and friable calli could not be clearly distinguished. Next, the brightness of yellow, Br(Y), and of white, Br(W), were defined using Br(R), Br(G), and Br(B), and the difference between them, Br(Y-W), which expresses the yellowish grade, was calculated. When Br(Y-W) was determined for all pixels of the original images of both callus types, compact calli could be clearly distinguished from friable calli by the frequency distributions of Br(Y-W). The average brightness center value, Av(C(Y-W)), was calculated from the frequency distributions. Calli with less than 10 units of Av(C(Y-W)) were never regenerated, and a proportional relationship between Av(C(Y-W)) and the regeneration frequency of the callus line was obtained.
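The per-pixel yellowness measure can be sketched numerically. Note that the paper defines Br(Y) and Br(W) from the RGB brightnesses without the formulas being reproduced here, so the definitions below (yellow as the red/green average, white as the overall mean) are hypothetical stand-ins used only to illustrate the Br(Y-W) histogram idea.

```python
# Heavily hedged sketch of the yellowish-grade computation; Br(Y) and Br(W)
# below are hypothetical definitions, not the paper's.
import numpy as np

rgb = np.random.rand(200, 200, 3) * 255           # stand-in callus image
br_r, br_g, br_b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

br_y = (br_r + br_g) / 2.0                         # hypothetical Br(Y)
br_w = (br_r + br_g + br_b) / 3.0                  # hypothetical Br(W)
br_yw = br_y - br_w                                # yellowish grade per pixel

hist, edges = np.histogram(br_yw, bins=64)         # frequency distribution
av_c = br_yw.mean()                                # analogue of Av(C(Y-W))
print(av_c > 10)                                   # regeneration criterion
```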
Team Electronic Gameplay Combining Different Means of Control
NASA Technical Reports Server (NTRS)
Palsson, Olafur S. (Inventor); Pope, Alan T. (Inventor)
2014-01-01
Disclosed are methods and apparatuses for modifying the effect of an operator-controlled input device on an interactive device to encourage the self-regulation of at least one physiological activity by a person other than the operator. The interactive device comprises a display area that depicts images and apparatus for receiving at least one input from the operator-controlled input device, thus permitting the operator to control and interact with at least some of the depicted images. One effect modification comprises measuring the physiological activity of a person other than the operator while modifying the ability of the operator to control and interact with at least some of the depicted images, by modifying the input from the operator-controlled input device in response to changes in the measured physiological signal.
Methods in quantitative image analysis.
Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M
1996-05-01
The main steps of image analysis are image capturing, image storage (compression), correction of imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image, and image measurements. Digitisation is performed by a camera; the most modern types include a frame-grabber converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, called a pixel. The information is stored in bits, with eight bits summarised in one byte, so a grey value can take one of 256 (2^8) levels, from 0 to 255. The human eye seems to be quite content with a display of about 64 different grey values (6 bits).
In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For optimal discrimination between different objects or features in an image, uniform illumination of the whole image is required. These defects can be minimised by shading correction: subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image.
The brightness of an image, represented by its grey values, can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value, and entropy. The distribution of the grey values within an image is one of its most important characteristics; however, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by the gain, offset, and gamma of the camera: gain defines the voltage range, offset defines the reference voltage, and gamma the slope of the regression line between the light intensity and the voltage of the camera.
A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (the original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix.
The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening of texture elements, and improvement of contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs, and histogram equalisation. The neighbourhood-based operations work with so-called filters: organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image, and can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important commonly applied filters are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top-hat filter, and the range operator (all ranking filters). Segmentation of objects is traditionally based on threshold grey values.
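The LUT mechanism described above is worth seeing concretely: a 256-entry table maps every input grey value to an output grey value in a single pass, here implementing a linear stretch of the occupied histogram range to the full 0-255 scale.

```python
# LUT-based point operation: one table look-up per pixel implements a
# linear contrast stretch of the occupied grey-value range.
import numpy as np

img = np.random.randint(60, 180, size=(128, 128), dtype=np.uint8)

lo, hi = int(img.min()), int(img.max())
lut = np.clip((np.arange(256) - lo) * 255.0 / max(hi - lo, 1), 0, 255)
lut = lut.astype(np.uint8)

stretched = lut[img]                       # one table look-up per pixel
print(stretched.min(), stretched.max())    # 0 255
```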
Polarization recovery through scattering media.
de Aguiar, Hilton B; Gigan, Sylvain; Brasselet, Sophie
2017-09-01
The control and use of light polarization in optical sciences and engineering are widespread. Despite remarkable developments in polarization-resolved imaging for life sciences, their transposition to strongly scattering media is currently not possible, because of the inherent depolarization effects arising from multiple scattering. We show an unprecedented phenomenon that opens new possibilities for polarization-resolved microscopy in strongly scattering media: polarization recovery via broadband wavefront shaping. We demonstrate focusing and recovery of the original injected polarization state without using any polarizing optics at the detection. To enable molecular-level structural imaging, an arbitrary rotation of the input polarization does not degrade the quality of the focus. We further exploit the robustness of polarization recovery for structural imaging of biological tissues through scattering media. We retrieve molecular-level organization information of collagen fibers by polarization-resolved second harmonic generation, a topic of wide interest for diagnosis in biomedical optics. Ultimately, the observation of this new phenomenon paves the way for extending current polarization-based methods to strongly scattering environments.
Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach
NASA Astrophysics Data System (ADS)
Taufik, Afirah; Sakinah Syed Ahmad, Sharifah
2016-06-01
The aim of this paper is to propose a method to classify the land cover of a satellite image based on a fuzzy rule-based system approach. The study uses bands of Landsat 8 and other indices, such as the Normalized Difference Water Index (NDWI), the Normalized Difference Built-up Index (NDBI), and the Normalized Difference Vegetation Index (NDVI), as input for the fuzzy inference system. The three selected indices represent our three main classes: water, built-up land, and vegetation. The combination of the original multispectral bands and the selected indices provides more information about the image. The fuzzy membership parameters are selected using a supervised method known as ANFIS (Adaptive Neuro-Fuzzy Inference System) training. The fuzzy system is tested on the classification of a land cover image covering the Klang Valley area. The results show that the fuzzy system approach is effective and can be explored and implemented for other areas of Landsat data.
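The index inputs have standard closed forms; the sketch below computes them from Landsat 8 reflectance bands (green = B3, red = B4, NIR = B5, SWIR1 = B6) and applies a toy fuzzy rule base. The membership shapes and thresholds are illustrative assumptions, since the paper tunes its memberships with ANFIS.

```python
# Hedged sketch: spectral indices as fuzzy-system inputs with toy linear
# membership functions (not the ANFIS-trained memberships of the paper).
import numpy as np

green, red, nir, swir1 = (np.random.rand(100, 100) for _ in range(4))

ndvi = (nir - red) / (nir + red + 1e-9)
ndwi = (green - nir) / (green + nir + 1e-9)
ndbi = (swir1 - nir) / (swir1 + nir + 1e-9)

def ramp(x, a, b):                 # simple linear membership over [a, b]
    return np.clip((x - a) / (b - a), 0.0, 1.0)

mu_water = ramp(ndwi, 0.0, 0.3)    # illustrative thresholds
mu_built = ramp(ndbi, 0.0, 0.2)
mu_veg = ramp(ndvi, 0.2, 0.6)

labels = np.argmax(np.stack([mu_water, mu_built, mu_veg]), axis=0)
print(np.bincount(labels.ravel()))  # pixel counts per class
```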
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée; McKay, Erin
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores user's imaging requirements and generates automatically command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.
Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel
2015-12-01
The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit gate offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on gate to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores user's imaging requirements and generates automatically command files used as input for gate. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant gate input files are generated for the virtual patient model and associated pharmacokinetics. Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
A tool for NDVI time series extraction from wide-swath remotely sensed images
NASA Astrophysics Data System (ADS)
Li, Zhishan; Shi, Runhe; Zhou, Cong
2015-09-01
Normalized Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring vegetation coverage on the land surface. The time series features of NDVI are capable of reflecting dynamic changes in various ecosystems. Calculating NDVI from Moderate Resolution Imaging Spectrometer (MODIS) and other wide-swath remotely sensed images provides an important way to monitor the spatial and temporal characteristics of large-scale NDVI. However, difficulties still exist for ecologists in extracting such information correctly and efficiently, because of the problems in several professional processing steps on the original remote sensing images, including radiometric calibration, geometric correction, multiple data composition, and curve smoothing. In this study, we developed an efficient and convenient online toolbox with a friendly graphical user interface for non-remote-sensing professionals who want to extract NDVI time series. Technically, it is based on Java Web and Web GIS, and the Struts, Spring, and Hibernate frameworks (SSH) are integrated in the system for easy maintenance and expansion. Latitude, longitude, and time period are the key inputs that users need to provide, and the NDVI time series are calculated automatically.
System alignment using the Talbot effect
NASA Astrophysics Data System (ADS)
Chevallier, Raymond; Le Falher, Eric; Heggarty, Kevin
1990-08-01
The Talbot effect is utilized to correct an alignment problem in a neural network used for image recognition, which requires the alignment of a spatial light modulator (SLM) with the input module. A mathematical model employing Fresnel diffraction theory is presented to describe the method. The calculation of the diffracted amplitude accounts for the wavefront sphericity and the original object transmittance function in order to quantify the lateral shift of the Talbot image. Another explanation is set forth in terms of plane-wave illumination of the neural network. Using a Fourier series and by describing planes where all the harmonics are in phase, the reconstruction of Talbot images is explained. The alignment is effective when the lenslet array is aligned on the even Talbot images of the SLM pixels and the incident wave is a plane wave. The alignment is evaluated in terms of source and periodicity errors, tilt of the incident plane waves, and finite object dimensions. The effects of these error sources are concluded to be negligible, the lenslet array is shown to be successfully aligned with the SLM, and other alignment applications are shown to be possible.
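For reference, the self-imaging planes follow a textbook relation (stated here as background, not quoted from this paper):

```latex
% Standard Talbot self-imaging condition: for a periodic object of period
% $d$ under plane-wave illumination of wavelength $\lambda$, self-images
% recur at integer multiples of the Talbot length
\[
  z_T = \frac{2 d^2}{\lambda},
\]
% so the lenslet array is positioned on the planes $z = n\, z_T$ of the SLM
% pixel grid, where the even (unshifted) Talbot images form.
```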
Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M
2004-01-01
In this paper, we present a text input system for the severely disabled using lips-image recognition based on LabVIEW. The system is divided into a software subsystem and a hardware subsystem. In the software subsystem, image processing techniques are adopted to recognize whether the mouth is open or closed, depending on the relative distance between the upper and lower lips. In the hardware subsystem, the parallel port built into the PC is used to transmit the recognized mouth status to the Morse-code text input system. Integrating the software subsystem with the hardware subsystem, we implement a text input system using lips-image recognition programmed in the LabVIEW language. We hope the system can help the severely disabled communicate with others more easily.
Moving object localization using optical flow for pedestrian detection from a moving vehicle.
Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun
2014-01-01
This paper presents a pedestrian detection method for a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flow after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into 14 × 14 pixel grid cells; each cell in the current frame is then tracked to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is performed for the corresponding cells in the consecutive images, so that conformed optical flows are extracted. Regions of moving objects are detected as transformed objects that differ from the previously registered background. A morphological process is applied to obtain candidate human regions. To recognize the object, HOG features are extracted from the candidate region and classified using a linear support vector machine (SVM): the HOG feature vectors serve as SVM input, and the classifier labels each input as pedestrian or non-pedestrian. The proposed method was tested on a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement over the original HOG on the ETHZ pedestrian dataset.
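The classification stage can be sketched with scikit-image and scikit-learn as stand-ins for the paper's implementation; the 128×64 window size and HOG parameters below are illustrative, and the random arrays stand in for real training windows.

```python
# Hedged sketch of the HOG + linear SVM classification stage.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_vec(window):                       # window: 128x64 grey-level array
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

rng = np.random.default_rng(0)
X = np.array([hog_vec(rng.random((128, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # 1 = pedestrian, 0 = background

clf = LinearSVC().fit(X, y)
candidate = rng.random((128, 64))          # region from the optical-flow stage
print(clf.predict([hog_vec(candidate)]))
```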
Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej
2011-01-01
A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders of the experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in the spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. PMID:21708116
HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION
NASA Technical Reports Server (NTRS)
Spirkovska, L.
1994-01-01
Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on only one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited by the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30 Mb). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the first, but also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in the C language and was originally developed for Sun3 and Sun4 series computers. Both graphic and command-line versions of the program are provided. The command-line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computers running VMS. The graphic version requires the SunTools windowing environment and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1 Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
Spectral interpolation - Zero fill or convolution. [image processing
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
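Zero fill itself is easy to demonstrate numerically: padding the input record before the FFT yields spectral samples at finer spacing, an interpolated spectrum with no new information added.

```python
# Zero-fill illustration: padding before the FFT refines spectral spacing.
import numpy as np

n = 128
t = np.arange(n)
x = np.cos(2 * np.pi * 0.1037 * t)           # tone between bin centres

spec = np.abs(np.fft.rfft(x))                # bin spacing 1/n cycles/sample
spec_zf = np.abs(np.fft.rfft(x, n=8 * n))    # zero fill: spacing 1/(8n)

peak = np.argmax(spec) / n
peak_zf = np.argmax(spec_zf) / (8 * n)
print(peak, peak_zf)                          # ~0.1016 vs ~0.1035
```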
Identification of serial number on bank card using recurrent neural network
NASA Astrophysics Data System (ADS)
Liu, Li; Huang, Linlin; Xue, Jian
2018-04-01
Identification of the serial number on a bank card has many applications. Due to different number printing modes, complex backgrounds, distortions in shape, etc., achieving high identification accuracy is quite challenging. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes, such that an RNN taking the direction planes as input can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit string recognition accuracy.
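A minimal PyTorch sketch can show the recognition stage: an LSTM consumes a left-to-right sequence of per-column feature vectors and emits a class per time step. The feature dimension and the per-step decoding below are illustrative assumptions; the NCGF extraction itself is not reproduced here.

```python
# Hedged sketch of the LSTM recognition stage over direction-plane features.
import torch
import torch.nn as nn

feat_dim, hidden, n_classes = 32, 64, 11   # 10 digits + blank/background
lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
head = nn.Linear(hidden, n_classes)

frames = torch.randn(1, 120, feat_dim)     # 120 column slices of a card strip
out, _ = lstm(frames)                      # (1, 120, hidden)
logits = head(out)                         # (1, 120, 11) per-step predictions
print(logits.argmax(-1)[0, :10])
```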
Illuminant-adaptive color reproduction for mobile display
NASA Astrophysics Data System (ADS)
Kim, Jong-Man; Park, Kee-Hyon; Kwon, Oh-Seol; Cho, Yang-Ho; Ha, Yeong-Ho
2006-01-01
This paper proposes an illuminant-adaptive reproduction method using light adaptation and flare conditions for mobile displays. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, since the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena reduce the color gamut of a mobile display by increasing the luminance of dark areas and de-saturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux sensor; the flare is then calculated from the reflection ratio of the display device and the ambient light intensity. The relative cone response is nonlinear in the input luminance and also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization: the image's luminance is transformed by linearizing the response to the input luminance according to the ambient light intensity. Next, the displayed image is compensated for the physically reduced chroma resulting from flare phenomena. The reduced chroma value is calculated from the flare at each intensity. The chroma compensation that maintains the original image's chroma is applied differently for each hue plane, as flare affects each hue plane differently; the enhanced chroma also respects the gamut boundary. Based on experimental observations, outdoor illuminance generally ranges from 1,000 lux to 30,000 lux. Thus, for outdoor environments, i.e. above 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and the flare condition. Consequently, the proposed algorithm improves the quality of the perceived image adaptively to the outdoor environment.
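The flare term can be sketched numerically. Treating the panel front surface as a diffuse reflector (a common simplification, not necessarily the paper's exact model), ambient illuminance E in lux adds a veiling luminance of roughly r·E/π cd/m², which lifts the black level and compresses the effective contrast; the reflectance value below is illustrative.

```python
# Hedged flare computation under a diffuse-reflection assumption.
import numpy as np

E_ambient = 20000.0            # lux, bright outdoor daylight
r = 0.04                       # assumed effective panel reflectance

flare = r * E_ambient / np.pi  # veiling luminance in cd/m^2
L_black, L_white = 0.5, 300.0  # nominal panel luminances
contrast = (L_white + flare) / (L_black + flare)
print(flare, contrast)         # ~255 cd/m^2; 600:1 contrast collapses to ~2:1
```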
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roessl, Ewald; Ziegler, Andy; Proksa, Roland
2007-03-15
In conventional dual-energy systems, two transmission measurements with distinct spectral characteristics are performed. These measurements are used to obtain the line integrals of two basis material densities. Usually, the measurement process is such that the two measured signals can be treated as independent and therefore uncorrelated. Recently, however, a readout system for x-ray detectors has been introduced for which this is no longer the case. The readout electronics is designed to obtain simultaneous measurements of the total number of photons N and the total energy E they deposit in the sensor material. Practically, this is realized by signal replication and separate counting and integrating processing units. Since the quantities N and E are (electronically) derived from one and the same physical sensor signal, they are statistically correlated. Nevertheless, the pair N and E can be used to perform a dual-energy processing following the well-known approach by Alvarez and Macovski. Formally, this means that N is to be identified with the first dual-energy measurement M1 and E with the second measurement M2. In the presence of input correlations between M1 = N and M2 = E, however, the corresponding analytic expressions for the basis image noise have to be modified. The main observation made in this paper is that for positively correlated data, as is the case for the simultaneous counting and integrating device mentioned above, the basis image noise is suppressed through the influence of the covariance between the two signals. We extend the previously published relations for the basis image noise to the case where the original measurements are not independent and illustrate the importance of the input correlations by comparing dual-energy basis image noise resulting from the device mentioned above and a device measuring the photon numbers and the deposited energies consecutively.
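The modification being described follows the shape of standard first-order error propagation; the relation below is that textbook form (not a formula quoted from the paper), with the covariance term carrying the noise suppression.

```latex
% First-order error propagation for basis line integrals $A_i(M_1, M_2)$
% derived from correlated measurements $M_1 = N$ and $M_2 = E$:
\[
  \sigma_{A_i}^2
  = \left(\frac{\partial A_i}{\partial M_1}\right)^{\!2}\sigma_{M_1}^2
  + \left(\frac{\partial A_i}{\partial M_2}\right)^{\!2}\sigma_{M_2}^2
  + 2\,\frac{\partial A_i}{\partial M_1}\,
        \frac{\partial A_i}{\partial M_2}\,
        \mathrm{cov}(M_1, M_2),
\]
% where a positive covariance suppresses the basis-image noise whenever the
% two partial derivatives have opposite signs, as is typical in material
% decomposition.
```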
Functional evaluation of telemedicine with super high definition images and B-ISDN.
Takeda, H; Matsumura, Y; Okada, T; Kuwata, S; Komori, M; Takahashi, T; Minatom, K; Hashimoto, T; Wada, M; Fujio, Y
1998-01-01
In order to determine whether super high definition (SHD) images at 2048 x 2048 pixels and 60 frames/sec were suitable for telemedicine, we established a filing system for medical images and performed two experiments on the transmission of high quality images. All images of various types produced from one case of ischemic heart disease were digitized and registered in the filing system. Images consisted of plain chest x-ray, electrocardiogram, ultrasound cardiogram, cardiac scintigram, coronary angiogram, left ventriculogram and so on. All images were animated and totaled 243. We prepared a graphical user interface (GUI) for image retrieval based on medical events and modalities. Twenty-one cardiac specialists evaluated the quality of the SHD images as somewhat poor compared to the original pictures but sufficient for making diagnoses, and effective as a tool for teaching and case study purposes. The system's capability of simultaneously displaying several animated images was deemed especially effective for grasping a comprehensive diagnosis. Efficient input methods and the capacity to file all produced images remain future issues. Using a B-ISDN network, the SHD file was prefetched to the servers at Kyoto University Hospital and the BBCC (Broadband ISDN Business chance & Culture Creation) laboratory as a telemedicine experiment. The simultaneous video conference system, the control of image retrieval and the pointing function made the teleconference successful in terms of high quality of medical images, quick response time and interactive data exchange.
The Research of Spectral Reconstruction for Large Aperture Static Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Lv, H.; Lee, Y.; Liu, R.; Fan, C.; Huang, Y.
2018-04-01
An imaging spectrometer directly or indirectly obtains the spectral information of ground surface features while acquiring the target image, which gives imaging spectroscopy a prominent advantage in the fine characterization of terrain features and makes it of great significance for geoscience and related disciplines. The interference data acquired by an interferometric imaging spectrometer are intermediate data, which must be reconstructed into high quality spectral data before they can be used. The main difficulty restricting the application of interferometric imaging spectroscopy is reconstructing the spectrum accurately. Using an original image acquired by the Large Aperture Static Imaging Spectrometer as input, this experiment selected a pixel identified as crop by manual recognition, then extracted and preprocessed its interferogram to recover the corresponding spectrum of this pixel. The result shows that the reconstructed spectrum forms a small crest near the wavelength of 0.55 μm with obvious troughs on both sides. The relative reflection intensity of the reconstructed spectrum rises abruptly at wavelengths around 0.7 μm, forming a steep slope. These characteristics are similar to the spectral reflection curve of healthy green plants. It can be concluded that the experimental result is consistent with the visual interpretation, thus validating the effectiveness of the scheme for interferometric imaging spectrum reconstruction proposed in this paper.
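The spectrum-recovery step for a single pixel reduces, in essence, to a Fourier transform of the preprocessed interferogram. A minimal sketch, with an assumed OPD sampling step and a Hanning apodization window standing in for the paper's exact preprocessing chain:

```python
import numpy as np

def reconstruct_spectrum(interferogram, opd_step_cm):
    """interferogram: intensity vs. optical path difference (1-D array)."""
    i = interferogram - interferogram.mean()             # remove the DC bias
    i = i * np.hanning(i.size)                           # apodization
    spectrum = np.abs(np.fft.rfft(i))                    # magnitude spectrum
    wavenumber = np.fft.rfftfreq(i.size, d=opd_step_cm)  # in cm^-1
    return wavenumber, spectrum
```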
Color image segmentation with support vector machines: applications to road signs detection.
Cyganek, Bogusław
2008-08-01
In this paper we propose an efficient color segmentation method based on the Support Vector Machine classifier operating in a one-class mode. The method was developed especially for a road sign recognition system, although it can be used in other applications. The main advantage of the proposed method is that the segmentation of characteristic colors is performed not in the original but in a higher dimensional feature space, where a better encapsulation of the data by a linear hypersphere can usually be achieved. Moreover, the classifier does not try to capture the whole distribution of the input data, which is often difficult to achieve. Instead, characteristic data samples, called support vectors, are selected that allow construction of the tightest hypersphere enclosing the majority of the input data. Classification of a test sample then simply consists of measuring its distance to the centre of the found hypersphere. The experimental results show high accuracy and speed of the proposed method.
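A minimal sketch of this segmentation idea using scikit-learn's one-class SVM (the ν-formulation, a close relative of the hypersphere SVDD classifier the paper describes); the RBF kernel supplies the implicit higher-dimensional feature space, and the ν and γ values are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def segment_color(image_rgb, training_pixels, nu=0.05, gamma=10.0):
    """image_rgb: (H, W, 3) floats in [0, 1];
    training_pixels: (N, 3) samples of the characteristic sign color."""
    clf = OneClassSVM(nu=nu, gamma=gamma, kernel="rbf")
    clf.fit(training_pixels)
    flat = image_rgb.reshape(-1, 3)
    mask = clf.predict(flat) == 1      # +1 = inside the learned boundary
    return mask.reshape(image_rgb.shape[:2])
```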
Quality metrics for sensor images
NASA Technical Reports Server (NTRS)
Ahumada, AL
1993-01-01
Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftone optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
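As a toy illustration of the discrimination-model interface described above (two images in, a detection probability out), the sketch below pools a pixel difference with a Minkowski norm and passes it through a Weibull psychometric function. The pooling exponent, threshold, and slope are placeholders, not the Ames model parameters.

```python
import numpy as np

def p_discriminate(A, B, beta=3.5, threshold=1.0, exponent=2.4):
    """Estimate P(observer reports A differs from B) -- toy stand-in."""
    d = np.abs(A.astype(float) - B.astype(float))
    pooled = (d**exponent).mean() ** (1.0 / exponent)   # Minkowski pooling
    return 1.0 - 0.5 ** ((pooled / threshold) ** beta)  # Weibull psychometric
```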
Comparison of acoustic fields produced by the original and upgraded HM-3 lithotripter
NASA Astrophysics Data System (ADS)
Zhou, Yufeng; Zhu, Songlin; Dreyer, Thomas; Liebler, Marko; Zhong, Pei
2003-10-01
To reduce tissue injury in shock wave lithotripsy (SWL) while maintaining satisfactory stone comminution, an original HM-3 lithotripter was upgraded with a reflector insert to suppress large intraluminal bubble expansion, which is a primary mechanism of vascular injury in SWL. The pressure waveforms produced by the original and upgraded HM-3 lithotripter were measured using a fiber optic probe hydrophone (FOPH), which was scanned both along and transverse to the lithotripter axis in 1-mm steps using a computer-controlled 3-D positioning system. At F2, the pressure waveform produced by the upgraded HM-3 lithotripter at 22 kV has a distinct dual-pulse structure, with a leading shock wave of ~45 MPa from the reflector insert and a 4-μs delayed second pulse of ~15 MPa reflected from the uncovered bottom surface of the original HM-3 reflector. The beam sizes of the original and upgraded HM-3 lithotripter are comparable in both axial and lateral directions. The pressure waveforms measured at the reflector aperture will be used as input to the KZK equation to predict the lithotripter shock wave at F2. Furthermore, bubble dynamics predicted by the Gilmore model will be compared with experimental observation by high-speed imaging. [Work supported by NIH.]
Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G
2015-11-01
Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ) as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for difference between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
Reconstruction of Human Monte Carlo Geometry from Segmented Images
NASA Astrophysics Data System (ADS)
Zhao, Kai; Cheng, Mengyun; Fan, Yanchang; Wang, Wen; Long, Pengcheng; Wu, Yican
2014-06-01
Human computational phantoms have been used extensively for scientific experimental analysis and simulation. This article presents a method for reconstructing human geometry from a series of segmented images of a Chinese visible human dataset. The phantom geometry can describe the detailed structure of an organ and can be converted into the input file of Monte Carlo codes for dose calculation. A whole-body computational phantom of a Chinese adult female, named Rad-HUMAN and containing about 28.8 billion voxels, has been established by the FDS Team. For convenient processing, different organs in the images were segmented with different RGB colors and the voxels were assigned positions in the dataset. For refinement, the positions were first sampled. Although the large numbers of voxels inside an organ are three-dimensionally adjacent, no thorough merging method existed to reduce the cell count in the description of an organ. In this study, the voxels on the organ surface were taken into consideration in the merging, which produces fewer cells for the organs. At the same time, an index-based sorting algorithm was put forward to increase the merging speed. Finally, the Rad-HUMAN phantom, which includes a total of 46 organs and tissues, was described by cuboids in the Monte Carlo geometry for the simulation. The Monte Carlo geometry was constructed directly from the segmented images and the voxels were merged exhaustively. Each organ geometry model was constructed without ambiguity or self-crossing, and its geometry information represents the accurate appearance and precise interior structure of the organs. The constructed geometry, largely retaining the original shape of the organs, can easily be written to the input files of different Monte Carlo codes such as MCNP. Its universal applicability was demonstrated and its high performance experimentally verified.
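As a toy illustration of the merging idea (reduced here to run-length merging along one axis; the paper's index-based 3-D mergence is more elaborate), the following sketch collapses runs of same-material voxels into cuboid cells of the kind an MCNP-style input file describes:

```python
import numpy as np

def merge_runs(labels):
    """labels: (X, Y, Z) integer material ids per voxel.
    Yields (material, x, y, z_start, z_end) cuboid cells."""
    X, Y, Z = labels.shape
    for x in range(X):
        for y in range(Y):
            z = 0
            while z < Z:
                m, z0 = labels[x, y, z], z
                while z < Z and labels[x, y, z] == m:
                    z += 1               # extend the run of material m
                yield (int(m), x, y, z0, z - 1)
```

Each yielded run replaces up to Z individual voxel cells with one cuboid, which is the cell-count reduction the abstract is after.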
Improved color constancy in honey bees enabled by parallel visual projections from dorsal ocelli.
Garcia, Jair E; Hung, Yu-Shan; Greentree, Andrew D; Rosa, Marcello G P; Endler, John A; Dyer, Adrian G
2017-07-18
How can a pollinator, like the honey bee, perceive the same colors on visited flowers, despite continuous and rapid changes in ambient illumination and background color? A hundred years ago, von Kries proposed an elegant solution to this problem, color constancy, which is currently incorporated in many imaging and technological applications. However, empirical evidence on how this mechanism operates in animal brains remains tenuous. Our mathematical modeling proposes that the observed spectral tuning of simple ocellar photoreceptors in the honey bee provides the necessary input for an optimal color constancy solution in most natural light environments. The model is fully supported by our detailed description of a neural pathway allowing for the integration of signals originating from the ocellar photoreceptors into the information-processing regions of the bee brain. These findings reveal a neural implementation of the classic color constancy problem that can be easily translated into artificial color imaging systems.
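For reference, the von Kries mechanism named above amounts to an independent gain change per receptor channel. A minimal sketch (applied directly to channel responses for brevity; practical pipelines usually work in a cone/LMS space reached by a 3×3 transform):

```python
import numpy as np

def von_kries(responses, illuminant_rgb, reference_rgb):
    """responses: (..., 3) channel responses under the adapting light."""
    gains = np.asarray(reference_rgb) / np.asarray(illuminant_rgb)
    return responses * gains   # independent diagonal gain per channel

# Example: a pixel seen under warm light, corrected toward neutral.
pixel = np.array([0.8, 0.5, 0.3])
print(von_kries(pixel, illuminant_rgb=[1.0, 0.9, 0.6],
                reference_rgb=[1.0, 1.0, 1.0]))
```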
Image based book cover recognition and retrieval
NASA Astrophysics Data System (ADS)
Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine
2017-11-01
In this work we develop a graphical user interface in MATLAB that lets users check information related to books in real time. A photo of the book cover is taken through the GUI; the MSER algorithm then automatically detects features in the input image, after which non-text features are filtered out based on the morphological differences between text and non-text regions. We implemented a text character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open source OCR to obtain better detection results; a post-detection algorithm and natural language processing are applied to perform word correction and suppress false detections. Finally, the detection result is linked to the internet to perform online matching. This algorithm achieves an accuracy of more than 86%.
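A condensed sketch of the detection front end, using OpenCV's MSER in place of the MATLAB toolchain; the geometric thresholds standing in for the morphological text/non-text filter are illustrative guesses:

```python
import cv2
import numpy as np

def detect_text_candidates(gray):
    """gray: 8-bit grayscale image; returns candidate text bounding boxes."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    boxes = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
        aspect, fill = w / float(h), len(pts) / float(w * h)
        # crude morphological filter to reject non-text blobs
        if 0.1 < aspect < 10 and fill > 0.2:
            boxes.append((x, y, w, h))
    return boxes
```

The surviving boxes would then go through alignment, OCR, and word correction as the abstract describes.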
Ogier, Augustin; Sdika, Michael; Foure, Alexandre; Le Troter, Arnaud; Bendahan, David
2017-07-01
Manual and automated segmentation of individual muscles in magnetic resonance images is recognized as challenging, given the high variability of shapes between muscles and subjects and the discontinuity or lack of visible boundaries between muscles. In the present study, we propose an original algorithm allowing a semi-automatic transversal propagation of manually-drawn masks. Our strategy is based on several ascending and descending non-linear registration approaches, similar to the estimation of a Lagrangian trajectory applied to the manual masks. Using several manually-segmented slices, we evaluated our algorithm on the four muscles of the quadriceps femoris group. We showed that our 3D propagated segmentation was very accurate, with an average Dice similarity coefficient higher than 0.91 for the minimal manual input of only two manually-segmented slices.
Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing
NASA Astrophysics Data System (ADS)
Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.
2009-05-01
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach to designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.
Object knowledge changes visual appearance: semantic effects on color afterimages.
Lupyan, Gary
2015-10-01
According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.
Prototype Focal-Plane-Array Optoelectronic Image Processor
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey
1995-01-01
Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.
Experimental image alignment system
NASA Technical Reports Server (NTRS)
Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.
1980-01-01
A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.
Central representation of sensory inputs from the cardio-renal system in Aplysia depilans.
Rózsa, K S; Salánki, J; Véró, M; Kovacević, N; Konjevic, D
1980-01-01
Studying the central representation of sensory inputs originating from the heart in Aplysia depilans, it was found that: 1. Neurons responding to heart stimulation can be found in the abdominal, pedal and pleural ganglia alike. 2. The representation of heart input signals was more abundant in the left hemisphere of the abdominal ganglion and in the left pedal and pleural ganglia. 3. The giant neurons of Aplysia depilans can be compared to the homologous cells of Aplysia californica. Two motoneurons (RBHE, LDHI) and one interneuron (L10) proved to be identical in the two subspecies. 4. Sensory inputs originating from the heart may modify the pattern of both heart regulatory motoneurons and interneurons. 5. Nine giant and 19 small neurons of the abdominal ganglion, 3 neurons each of the right and left pleural ganglia, and 6 neurons of the left pedal ganglion responded to heart stimulation. 6. The bursting patterns of cells R15 and L4 were modified to tonic discharge in response to heart stimulation. 7. The representation of sensory inputs originating from the heart is scattered throughout the CNS of Aplysia depilans and heart regulation is based on a feedback mechanism similar to that found in other gastropod species.
NASA Astrophysics Data System (ADS)
Singh, Hukum
2016-12-01
A cryptosystem for image encryption is considered, using double random phase encoding in the Fresnel wavelet transform (FWT) domain. Random phase masks (RPMs) and structured phase masks (SPMs) based on a devil's vortex toroidal lens (DVTL) are used in the spatial as well as the Fourier planes. The images to be encrypted are first Fresnel transformed, and then a single-level discrete wavelet transform (DWT) is applied to decompose them into LL, HL, LH and HH subbands. The resulting matrices from the DWT are multiplied by additional RPMs and the results are subjected to an inverse DWT to obtain the encrypted images. The scheme is more secure because of the many parameters used in the construction of the SPM. The original images are recovered by using the correct parameters of the FWT and SPM. The SPM based on the DVTL increases security by enlarging the key space for encryption and decryption. The proposed encryption scheme is a lens-less optical system, and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The computed mean-squared error between the retrieved and the input images shows the efficacy of the scheme. The sensitivity to encryption parameters, the entropy, and the robustness against occlusion and multiplicative Gaussian noise attacks have been analysed.
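A toy sketch of just the wavelet-domain masking stage (the Fresnel propagation and the structured DVTL masks are omitted; plain random phase masks stand in for them, so this illustrates the subband masking idea, not the full cryptosystem). It assumes PyWavelets, whose transforms accept the complex-valued coefficients produced by the masking:

```python
import numpy as np
import pywt

def encrypt_subbands(image, rng):
    """Decompose, multiply each subband by a random phase mask, invert."""
    LL, (LH, HL, HH) = pywt.dwt2(image, "haar")
    masked = [band * np.exp(2j * np.pi * rng.random(band.shape))
              for band in (LL, LH, HL, HH)]
    return pywt.idwt2((masked[0], (masked[1], masked[2], masked[3])), "haar")

rng = np.random.default_rng(seed=42)   # the seed plays the role of the key
cipher = encrypt_subbands(np.random.rand(64, 64), rng)
```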
Single-image super-resolution based on Markov random field and contourlet transform
NASA Astrophysics Data System (ADS)
Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai
2011-04-01
Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using contourlet transform and Markov random field. The proposed algorithm employs contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with the experiments on facial, vehicle plate, and real scene images. A better visual quality is achieved in terms of peak signal to noise ratio and the image structural similarity measurement.
NASA Astrophysics Data System (ADS)
Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy
2017-07-01
As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. Existence of such pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, which allow us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of training samples; both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based off knowledge of that image measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.
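For contrast with the method proposed above, the generic nearest-neighbor pre-image baseline can be stated in a few lines: approximate the pre-image of an embedded point by a distance-weighted average of its nearest training samples in the original space. The paper's algorithm improves on exactly this kind of estimate, notably for noisy inputs and points outside the convex hull of the training set:

```python
import numpy as np

def preimage_knn(y, Y_train, X_train, k=10, eps=1e-12):
    """y: embedded query point; Y_train: (N, d) embeddings of the
    training set; X_train: (N, D) original-space training vectors."""
    d = np.linalg.norm(Y_train - y, axis=1)
    idx = np.argsort(d)[:k]                  # k nearest embedded neighbors
    w = 1.0 / (d[idx] + eps)                 # inverse-distance weights
    return (w[:, None] * X_train[idx]).sum(0) / w.sum()
```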
Relationship between fatigue of generation II image intensifier and input illumination
NASA Astrophysics Data System (ADS)
Chen, Qingyou
1995-09-01
If an image intensifier suffers from fatigue, this affects the imaging properties of the night vision system. In this paper, using the principle of Joule heating, we derive a mathematical formula for the heat generated in a semiconductor photocathode. We describe the relationships among the various parameters in the formula. We also discuss the reasons for the fatigue of Generation II image intensifiers caused by high input illumination.
Improved automatic adjustment of density and contrast in FCR system using neural network
NASA Astrophysics Data System (ADS)
Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo
1994-05-01
The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data in the radiation field. The advanced image recognition methods proposed in this paper, based on neural network technology, can improve this automatic adjustment performance. There are two methods, both using a 3-layer neural network with backpropagation. In one method the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, where the position of the region of interest on the histogram changes with positioning, and the latter is effective for menus such as the pediatric chest, where the histogram shape changes with positioning. We experimentally confirm the validity of these methods with respect to automatic adjustment performance, in comparison with conventional histogram analysis methods.
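A minimal sketch of the histogram-input variant, using scikit-learn in place of the original implementation: a small backpropagation-trained network maps the normalized gray-level histogram to the two adjustment parameters. The data below are random placeholders; in practice the training pairs would be technologist-approved (histogram, adjustment) examples.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_bins = 64
X = np.random.rand(200, n_bins)   # placeholder: normalized histograms
y = np.random.rand(200, 2)        # placeholder: (density, contrast) targets

# 3-layer network (input, one hidden layer, output), trained by backprop.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
density, contrast = net.predict(np.random.rand(1, n_bins))[0]
```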
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages, processed from the initial satellite images using dedicated software, are used. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled seeds if the difference in reflectance (in the blue channel) with the overlapping ortho-images is greater than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. The shadow detection is based on the idea that a shadow pixel is darker than in the other images of the time series. The detection is composed of three steps. First, we compute a synthetic ortho-image covering the whole study area; each of its pixels takes the median value of all input reflectance ortho-images intersecting at that pixel location. Second, for each input ortho-image, a pixel is labelled shadow if the difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled clouds during the cloud detection are not used when computing the median value in the first step; the NIR channel is used for the shadow detection because it proved to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
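Condensed to its two core tests, the method might look like the following sketch (thresholds are illustrative; the region-growing refinement and the exclusion of cloud pixels from the median are omitted):

```python
import numpy as np

def cloud_seeds(blue, blue_other, t_cloud=0.15):
    """Seed pixels: much brighter (blue band) than an overlapping scene."""
    return (blue - blue_other) > t_cloud

def shadow_mask(nir_stack, index, t_shadow=0.08):
    """nir_stack: (T, H, W) co-registered NIR reflectances.
    Shadow pixels: much darker than the per-pixel median composite."""
    median = np.median(nir_stack, axis=0)   # the synthetic ortho-image
    return (median - nir_stack[index]) > t_shadow
```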
Origin of information-limiting noise correlations
Kanitscheider, Ingmar; Coen-Cagli, Ruben; Pouget, Alexandre
2015-01-01
The ability to discriminate between similar sensory stimuli relies on the amount of information encoded in sensory neuronal populations. Such information can be substantially reduced by correlated trial-to-trial variability. Noise correlations have been measured across a wide range of areas in the brain, but their origin is still far from clear. Here we show analytically and with simulations that optimal computation on inputs with limited information creates patterns of noise correlations that account for a broad range of experimental observations while at the same time causing information to saturate in large neural populations. With the example of a network of V1 neurons extracting orientation from a noisy image, we illustrate what is, to our knowledge, the first generative model of noise correlations that is consistent both with neurophysiology and with behavioral thresholds, without invoking suboptimal encoding or decoding or internal sources of variability such as stochastic network dynamics or cortical state fluctuations. We further show that when information is limited at the input, both suboptimal connectivity and internal fluctuations could similarly reduce the asymptotic information, but they have qualitatively different effects on correlations, leading to specific experimental predictions. Our study indicates that noise at the sensory periphery could have a major effect on cortical representations in widely studied discrimination tasks. It also provides an analytical framework to understand the functional relevance of different sources of experimentally measured correlations. PMID:26621747
Building accurate historic and future climate MEPDG input files for Louisiana DOTD : tech summary.
DOT National Transportation Integrated Search
2017-02-01
The new pavement design process (originally MEPDG, then DARWin-ME, and now Pavement ME Design) requires two types of inputs to influence the prediction of pavement distress for a selected set of pavement materials and structure. One input is: tra...
Veligdan, James T.
1997-01-01
An optical display includes a plurality of stacked optical waveguides having first and second opposite ends collectively defining an image input face and an image screen, respectively, with the screen being oblique to the input face. Each of the waveguides includes a transparent core bound by a cladding layer having a lower index of refraction for effecting internal reflection of image light transmitted into the input face to project an image on the screen, with each of the cladding layers including a cladding cap integrally joined thereto at the waveguide second ends. Each of the cores is beveled at the waveguide second end so that the cladding cap is viewable through the transparent core. Each of the cladding caps is black for absorbing external ambient light incident upon the screen for improving contrast of the image projected internally on the screen.
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Strack, Ruediger
1992-04-01
apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet and a magneto-optical disc, as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.
NASA Astrophysics Data System (ADS)
Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora
2017-09-01
Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts non-invasively using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises in the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at different times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images and the results of the two approaches are compared.
Input Scanners: A Growing Impact In A Diverse Marketplace
NASA Astrophysics Data System (ADS)
Marks, Kevin E.
1989-08-01
Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditionally mechanical means of information handling, computer based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered:
- Historical overview of input scanners
- New applications for scanners
- Impact of scanning technology on select markets
- Scanning systems issues
NASA Astrophysics Data System (ADS)
Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.
A scheme for optical image encryption with digital information input and a dynamic encryption key, based on two liquid crystal spatial light modulators and operating with spatially incoherent monochromatic illumination, is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20-0.27 is achieved.
Reconstruction of an input function from a dynamic PET water image using multiple tissue curves
NASA Astrophysics Data System (ADS)
Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro
2016-08-01
Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H2(15)O or C15O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H2(15)O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input using a tissue curve and a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs agreed well with the measured ones. The difference between the CBF values obtained by the two methods was small, around <8%, and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H2(15)O PET imaging. This suggests the possibility of a completely non-invasive technique to assess CBF in patho-physiological studies.
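For a one-tissue-compartment water model, the relation can be inverted per region, Ca(t) = (dCt/dt + k2·Ct)/K1, and the rate constants chosen so that the inputs reproduced from different tissue curves agree. The sketch below fixes K1 per region for identifiability, which is an illustrative simplification of the paper's estimation scheme, not its exact formulation:

```python
import numpy as np
from scipy.optimize import minimize

def input_from_tissue(t, ct, K1, k2):
    """Invert the one-tissue model: Ca = (dCt/dt + k2*Ct) / K1."""
    return (np.gradient(ct, t) + k2 * ct) / K1

def reconstruct_idif(t, tissue_curves, K1s, k2_init):
    """tissue_curves: list of regional TACs; K1s: assumed flows per region."""
    def disagreement(k2s):
        inputs = np.stack([input_from_tissue(t, c, K1, k2)
                           for c, K1, k2 in zip(tissue_curves, K1s, k2s)])
        return np.var(inputs, axis=0).sum()   # reproduced inputs should agree
    k2s = minimize(disagreement, k2_init, method="Nelder-Mead").x
    return np.mean([input_from_tissue(t, c, K1, k2)
                    for c, K1, k2 in zip(tissue_curves, K1s, k2s)], axis=0)
```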
Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej
2011-08-01
A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using second- and third-order RVM on experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. Copyright © 2011 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
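A compact sketch of the two named stages, using a second-order monomial lift (a simple instance of rational variety mapping) followed by scikit-learn's NMF for the matrix-factorization step; the component count and iteration budget are illustrative:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import NMF

def rvm_nmf_segment(pixels, n_materials=3):
    """pixels: (N, C) nonnegative multispectral intensities per pixel."""
    # Second-order monomial lift: channels, squares, and cross terms.
    lifted = PolynomialFeatures(degree=2,
                                include_bias=False).fit_transform(pixels)
    # Nonnegative factorization in the lifted space; abundances in W.
    W = NMF(n_components=n_materials, max_iter=500).fit_transform(lifted)
    return W.argmax(axis=1)        # hard material label per pixel
```

The lift keeps the data nonnegative, so the NMF step applies directly; classes that overlap in the original channel space are more easily separated after the dimensionality increase, as the abstract argues.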
Chinese character recognition based on Gabor feature extraction and CNN
NASA Astrophysics Data System (ADS)
Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan
2018-03-01
As an important application in the field of text line recognition and office automation, Chinese character recognition has become a significant subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition is difficult. To address this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters of different orientations, and feature maps for eight orientations of the Chinese character are extracted. Third, the feature maps from the Gabor filters and the original image are convolved with learned kernels, and the results of the convolution are the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by dropout. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
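The Gabor stage can be sketched with OpenCV in a few lines: convolve the normalized character image with eight orientation-tuned kernels to obtain the eight feature maps fed to the CNN. The kernel parameters below are illustrative, not the paper's:

```python
import cv2
import numpy as np

def gabor_maps(img, n_orient=8, ksize=11):
    """img: normalized grayscale character image (float32)."""
    maps = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient            # orientation of the filter
        kern = cv2.getGaborKernel((ksize, ksize), sigma=3.0, theta=theta,
                                  lambd=6.0, gamma=0.5)
        maps.append(cv2.filter2D(img, cv2.CV_32F, kern))
    return np.stack(maps, axis=0)               # (8, H, W) input to the CNN
```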
Early detection of germinated wheat grains using terahertz image and chemometrics
NASA Astrophysics Data System (ADS)
Jiang, Yuying; Ge, Hongyi; Lian, Feiyu; Zhang, Yuan; Xia, Shanhong
2016-02-01
In this paper, we propose a feasible tool that uses a terahertz (THz) imaging system for identifying wheat grains at different stages of germination. The THz spectra of the main changing components of wheat grains, maltose and starch, obtained by THz time-domain spectroscopy, were distinctly different. Used for data compression and feature extraction, principal component analysis (PCA) revealed the changes that occurred in the inner chemical structure during germination. Two thresholds, one indicating the start of the release of α-amylase and the second its steady state, were obtained from the first five score images. The first five PCs were then input to partial least-squares regression (PLSR), least-squares support vector machine (LS-SVM), and back-propagation neural network (BPNN) models, which were used to classify seven different germination times between 0 and 48 h, with prediction accuracies of 92.85%, 93.57%, and 90.71%, respectively. The experimental results indicate that the combination of THz imaging technology and chemometrics could be an effective new way to discriminate wheat grains at the early germination stage of approximately 6 h.
Image-based query-by-example for big databases of galaxy images
NASA Astrophysics Data System (ADS)
Shamir, Lior; Kuminski, Evan
2017-01-01
Very large astronomical databases containing millions or even billions of galaxy images have become increasingly important tools in astronomy research. However, in many cases their very large size makes it difficult to analyze these data manually, reinforcing the need for computer algorithms that can automate the data analysis process. An example of such a task is the identification of galaxies of a certain morphology of interest. For instance, if a rare galaxy is identified, it is reasonable to expect that more galaxies of similar morphology exist in the database, but it is virtually impossible to search these databases manually for such galaxies. Here we describe a computer vision and pattern recognition methodology that receives a galaxy image as input and automatically searches a large dataset of galaxies to return a list of galaxies that are visually similar to the query galaxy. The returned list is not necessarily complete or clean, but it provides a substantial reduction of the original database into a smaller dataset in which the frequency of objects visually similar to the query galaxy is much higher. Experimental results show that the algorithm can identify rare galaxies such as ring galaxies among datasets of 10,000 astronomical objects.
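The retrieval core reduces to nearest-neighbor ranking in a feature space; a minimal sketch with the feature extractor left abstract (the paper's own image descriptor set would slot in where the feature vectors come from):

```python
import numpy as np

def rank_by_similarity(query_features, database_features, top_k=100):
    """query_features: (D,) descriptor of the query galaxy;
    database_features: (N, D); returns indices of the top_k matches."""
    d = np.linalg.norm(database_features - query_features, axis=1)
    return np.argsort(d)[:top_k]   # closest descriptors first
```

Returning a generous top_k mirrors the abstract's point: the list need not be clean, only enriched in visually similar objects relative to the full database.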
Nielles-Vallespin, Sonia; Kellman, Peter; Hsu, Li-Yueh; Arai, Andrew E
2015-02-17
A low excitation flip angle (α < 10°) steady-state free precession (SSFP) proton-density (PD) reference scan is often used to estimate the B1-field inhomogeneity for surface coil intensity correction (SCIC) of the saturation-recovery (SR) prepared high flip angle (α = 40-50°) SSFP myocardial perfusion images. The different SSFP off-resonance response for these two flip angles might lead to suboptimal SCIC when there is a spatial variation in the background B0-field. The low flip angle SSFP-PD frames are more prone to parallel imaging banding artifacts in the presence of off-resonance. The use of FLASH-PD frames would eliminate both the banding artifacts and the uneven frequency response in the presence of off-resonance in the surface coil inhomogeneity estimate and improve homogeneity of semi-quantitative and quantitative perfusion measurements. B0-field maps, SSFP and FLASH-PD frames were acquired in 10 healthy volunteers to analyze the SSFP off-resonance response. Furthermore, perfusion scans preceded by both FLASH and SSFP-PD frames from 10 patients with no myocardial infarction were analyzed semi-quantitatively and quantitatively (rest n = 10 and stress n = 1). Intra-subject myocardial blood flow (MBF) coefficient of variation (CoV) over the whole left ventricle (LV), as well as intra-subject peak contrast (CE) and upslope (SLP) standard deviation (SD) over 6 LV sectors were investigated. In the 6 out of 10 cases where artifacts were apparent in the LV ROI of the SSFP-PD images, all three variability metrics were statistically significantly lower when using the FLASH-PD frames as input for the SCIC (CoVMBF-FLASH = 0.3 ± 0.1, CoVMBF-SSFP = 0.4 ± 0.1, p = 0.03; SDCE-FLASH = 10 ± 2, SDCE-SSFP = 32 ± 7, p = 0.01; SDSLP-FLASH = 0.02 ± 0.01, SDSLP-SSFP = 0.06 ± 0.02, p = 0.03). Example rest and stress data sets from the patient pool demonstrate that the low flip angle SSFP protocol can exhibit severe ghosting artifacts originating from off-resonance banding artifacts at the edges of the field of view that parallel imaging is not able to unfold. These artifacts lead to errors in the quantitative perfusion maps and the semi-quantitative perfusion indexes, such as false positives. It is shown that this can be avoided by using FLASH-PD frames as input for the SCIC. FLASH-PD images are recommended as input for SCIC of SSFP perfusion images instead of low flip angle SSFP-PD images.
Time reverse modeling of acoustic emissions in a reinforced concrete beam.
Kocur, Georg Karl; Saenger, Erik H; Grosse, Christian U; Vogel, Thomas
2016-02-01
The time reverse modeling (TRM) is applied for signal-based acoustic emission (AE) analysis of reinforced concrete (RC) specimens. TRM uses signals obtained from physical experiments as input. The signals are re-emitted numerically into a structure in a time-reversed manner, where the wavefronts interfere and appear as dominant concentrations of energy at the origin of the AE. The experimental and numerical results presented for selected AE signals confirm that TRM is capable of localizing AE activity in RC caused by concrete cracking. The accuracy of the TRM results is corroborated by three-dimensional crack distributions obtained from X-ray computed tomography images. Copyright © 2015 Elsevier B.V. All rights reserved.
Photographic film image enhancement
NASA Technical Reports Server (NTRS)
Horner, J. L.
1975-01-01
A series of experiments were undertaken to assess the feasibility of defogging color film by the techniques of optical spatial filtering. A coherent optical processor was built using red, blue, and green laser light input and specially designed Fourier transformation lenses. An array of spatial filters was fabricated on black and white emulsion slides using the coherent optical processor. The technique was first applied to laboratory white light fogged film, and the results were successful. However, when the same technique was applied to some original Apollo X radiation fogged color negatives, the results showed no similar restoration. Examples of each experiment are presented and possible reasons for the lack of restoration in the Apollo films are discussed.
MEDOF - MINIMUM EUCLIDEAN DISTANCE OPTIMAL FILTER
NASA Technical Reports Server (NTRS)
Barton, R. S.
1994-01-01
The Minimum Euclidean Distance Optimal Filter program, MEDOF, generates filters for use in optical correlators. The algorithm implemented in MEDOF follows theory put forth by Richard D. Juday of NASA/JSC. This program analytically optimizes filters on arbitrary spatial light modulators such as coupled, binary, full complex, and fractional 2pi phase. MEDOF optimizes these modulators on a number of metrics including: correlation peak intensity at the origin for the centered appearance of the reference image in the input plane, signal to noise ratio including the correlation detector noise as well as the colored additive input noise, peak to correlation energy defined as the fraction of the signal energy passed by the filter that shows up in the correlation spot, and the peak to total energy which is a generalization of PCE that adds the passed colored input noise to the input image's passed energy. The user of MEDOF supplies the functions that describe the following quantities: 1) the reference signal, 2) the realizable complex encodings of both the input and filter SLM, 3) the noise model, possibly colored, as it adds at the reference image and at the correlation detection plane, and 4) the metric to analyze, here taken to be one of the analytical ones like SNR (signal to noise ratio) or PCE (peak to correlation energy) rather than peak to secondary ratio. MEDOF calculates filters for arbitrary modulators and a wide range of metrics as described above. MEDOF examines the statistics of the encoded input image's noise (if SNR or PCE is selected) and the filter SLM's (Spatial Light Modulator) available values. These statistics are used as the basis of a range for searching for the magnitude and phase of k, a pragmatically based complex constant for computing the filter transmittance from the electric field. The filter is produced for the mesh points in those ranges and the value of the metric that results from these points is computed. When the search is concluded, the values of amplitude and phase for the k whose metric was largest, as well as consistency checks, are reported. A finer search can be done in the neighborhood of the optimal k if desired. The filter finally selected is written to disk in terms of drive values, not in terms of the filter's complex transmittance. Optionally, the impulse response of the filter may be created to permit users to examine the response for the features the algorithm deems important to the recognition process under the selected metric, limitations of the filter SLM, etc. MEDOF uses the filter SLM to its greatest potential, therefore filter competence is not compromised for simplicity of computation. MEDOF is written in C-language for Sun series computers running SunOS. With slight modifications, it has been implemented on DEC VAX series computers using the DEC-C v3.30 compiler, although the documentation does not currently support this platform. MEDOF can also be compiled using Borland International Inc.'s Turbo C++ v1.0, but IBM PC memory restrictions greatly reduce the maximum size of the reference images from which the filters can be calculated. MEDOF requires a two dimensional Fast Fourier Transform (2DFFT). One 2DFFT routine which has been used successfully with MEDOF is a routine found in "Numerical Recipes in C: The Art of Scientific Computing," which is available from Cambridge University Press, New Rochelle, NY 10801. The standard distribution medium for MEDOF is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format.
MEDOF was developed in 1992-1993.
Three-dimensional image display system using stereogram and holographic optical memory techniques
NASA Astrophysics Data System (ADS)
Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong
2001-09-01
In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques that can store many images and reconstruct them automatically. In this system, to store and reconstruct stereo images, the incident angle of the reference beam must be controlled in real time, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) to control the reference beam. Input images are displayed on the LCD without a polarizer/analyzer to maintain uniform beam intensity regardless of image brightness. The input images and BPHs are sequenced by application software with a fixed recording time interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transforming BPHs designed with a simulated annealing (SA) algorithm and displayed on the LCD at 0.05-s intervals by the application software for reconstructing the stereo images. In the output plane, we used an LCD shutter synchronized to a monitor that displays alternate left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO3 using holographic optical memory techniques.
Implementation of real-time digital endoscopic image processing system
NASA Astrophysics Data System (ADS)
Song, Chul Gyu; Lee, Young Mook; Lee, Sang Min; Kim, Won Ky; Lee, Jae Ho; Lee, Myoung Ho
1997-10-01
Endoscopy has become a crucial diagnostic and therapeutic procedure in clinical areas. Over the past four years, we have developed a computerized system to record and store clinical data pertaining to endoscopic surgery for laparoscopic cholecystectomy, pelviscopic endometriosis, and surgical arthroscopy. In this study, we developed a computer system composed of a frame grabber, a sound board, a VCR control board, a LAN card and EDMS. The computer system also controls peripheral instruments such as a color video printer, a video cassette recorder, and endoscopic input/output signals. The digital endoscopic data management system is based on an open architecture and a set of widely available industry standards, namely Microsoft Windows as the operating system, TCP/IP as the network protocol, and a time-sequential database that handles both images and speech. For data storage, we used MOD and CD-R. The digital endoscopic system was designed to store, recreate, change, and compress signals and medical images. Computerized endoscopy enables us to generate and manipulate the original visual document, making it accessible to a virtually unlimited number of physicians.
Imaging of a Defect in Thin Plates Using the Time Reversal of Single Mode Lamb Waves
NASA Astrophysics Data System (ADS)
Jeong, Hyunjo; Lee, Jung-Sik; Bae, Sung-Min
2011-06-01
This paper presents an analytical investigation of baseline-free imaging of a defect in plate-like structures using the time reversal of Lamb waves. We first consider flexural wave (A0 mode) propagation in a plate containing a defect, and the reception and time-reversal process of the output signal at the receiver. The received output signal is composed of two parts: a directly propagated wave and a wave scattered from the defect. The time reversal of these waves recovers the original input signal and produces two additional sidebands that contain the time-of-flight information on the defect location. One of the sideband signals is then extracted as a pure defect signal. A defect localization image is then constructed using a beamforming technique based on the time-frequency analysis of the sideband signal for each transducer pair in a network of sensors. The simulation results show that the proposed scheme enables accurate, baseline-free detection of a defect; experimental studies are needed to verify the proposed method and apply it to real structures.
What the bird’s brain tells the bird’s eye: the function of descending input to the avian retina
Wilson, Martin; Lindstrom, Sarah H.
2012-01-01
As Cajal discovered in the late 19th century, the bird retina receives a substantial input from the brain. Approximately 10,000 fibers originating in a small midbrain nucleus, the isthmo-optic nucleus, terminate in each retina. The input to the isthmo-optic nucleus is chiefly from the optic tectum which, in the bird, is the primary recipient of retinal input. These neural elements constitute a closed loop, the Centrifugal Visual System (CVS), beginning and ending in the retina, that delivers positive feedback to active ganglion cells. Several features of the system are puzzling. All fibers from the isthmo-optic nucleus terminate in the ventral retina and an unusual axon-bearing amacrine cell, the Target Cell, is the postsynaptic partner of these fibers. While the rest of the CVS is orderly and retinotopic, Target Cell axons project seemingly at random, mostly to distant parts of the retina. We review here the most significant features of the anatomy and physiology of the CVS with a view to understanding its function. We suggest that many of the facts about this system, including some that are otherwise difficult to explain, can be accommodated within the hypothesis that the images of shadows cast on the ground or on objects in the environment, initiate a rapid and parallel search of the sky for a possible aerial predator. If a predator is located, shadow and predator would be temporarily linked together and tracked by the CVS. PMID:21524338
Vector generator scan converter
Moore, J.M.; Leighton, J.F.
1988-02-05
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Xu, Minjie; Tian, Ailing
2017-04-01
A novel optical image encryption scheme is proposed based on a quick response code and a high-dimension chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the quick response code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to ciphertext with a noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, parameters such as the rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient for storage and transmission. Meanwhile, the security of the proposed scheme is greatly enhanced by the high sensitivity to the initial values of the Chen system. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU
NASA Astrophysics Data System (ADS)
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
2013-02-01
3D microscopy images contain enormous amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To avoid these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and allows the scale, translation, etc. to be set using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to provide information for biologists, and such analysis requires quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopy images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select an object to be analyzed; our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specification and configuration.
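As a concrete illustration of intensity-based automatic thresholding of the kind such a tool offers, the sketch below implements Otsu's method on a NumPy volume. Otsu's method is one representative algorithm; the abstract does not name the specific methods provided.

```python
import numpy as np

def otsu_threshold(volume, bins=256):
    """Otsu's method: pick the intensity that maximizes the between-class
    variance of the gray-level histogram."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # probability of the lower class
    mu = np.cumsum(p * centers)             # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b, nan=0.0, posinf=0.0)
    return centers[np.argmax(sigma_b)]

# mask = volume >= otsu_threshold(volume)   # binary intensity segmentation
```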
Arbitrary waveform generator to improve laser diode driver performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulkerson, Jr, Edward Steven
2015-11-03
An arbitrary waveform generator modifies the input signal to a laser diode driver circuit in order to reduce the overshoot/undershoot and provide a "flat-top" signal to the laser diode driver circuit. The input signal is modified based on the original received signal and the feedback from the laser diode by measuring the actual current flowing in the laser diode after the original signal is applied to the laser diode.
Robust Mapping of Incoherent Fiber-Optic Bundles
NASA Technical Reports Server (NTRS)
Roberts, Harry E.; Deason, Brent E.; DePlachett, Charles P.; Pilgrim, Robert A.; Sanford, Harold S.
2007-01-01
A method and apparatus for mapping between the positions of fibers at opposite ends of incoherent fiber-optic bundles have been invented to enable the use of such bundles to transmit images in visible or infrared light. The method is robust in the sense that it provides useful mapping even for a bundle that contains thousands of narrow, irregularly packed fibers, some of which may be defective. In a coherent fiber-optic bundle, the input and output ends of each fiber lie at identical positions in the input and output planes; therefore, the bundle can be used to transmit images without further modification. Unfortunately, the fabrication of coherent fiber-optic bundles is too labor-intensive and expensive for many applications. An incoherent fiber-optic bundle can be fabricated more easily and at lower cost, but it produces a scrambled image because the position of the end of each fiber in the input plane is generally different from the end of the same fiber in the output plane. However, the image transmitted by an incoherent fiber-optic bundle can be unscrambled (or, from a different perspective, decoded) by digital processing of the output image if the mapping between the input and output fiber-end positions is known. Thus, the present invention enables the use of relatively inexpensive fiber-optic bundles to transmit images.
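Once the fiber-end mapping is known, descrambling reduces to a pixel permutation. Below is a minimal sketch, assuming the calibration step yields a table of matched input/output fiber-end coordinates (a hypothetical format, not the invention's actual data structure):

```python
import numpy as np

def descramble(scrambled, in_rc, out_rc, shape):
    """Undo the fiber permutation: copy each fiber's output-end pixel back to
    its input-end position. in_rc / out_rc are (n_fibers, 2) integer arrays of
    (row, col) fiber-end coordinates from a hypothetical calibration step."""
    img = np.zeros(shape, dtype=scrambled.dtype)
    img[in_rc[:, 0], in_rc[:, 1]] = scrambled[out_rc[:, 0], out_rc[:, 1]]
    return img

# Defective fibers simply never appear in the calibration table, leaving
# holes that can be filled afterwards by interpolation over neighbors.
```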
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of pre processing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation of performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Novel view synthesis by interpolation over sparse examples
NASA Astrophysics Data System (ADS)
Liang, Bodong; Chung, Ronald C.
2006-01-01
Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism-an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome this limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
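The defining property cited here, exact agreement at all given examples with smooth behavior between them, is shared by radial-basis-function interpolation. The sketch below is a generic Gaussian-RBF interpolant used purely as a stand-in (it is not the authors' exact EBI mechanism): X holds sample viewpoints and F the corresponding vectorized images.

```python
import numpy as np

def rbf_fit(X, F, sigma=1.0):
    """Fit f(x) = sum_i w_i * phi(||x - x_i||) passing exactly through the
    examples (X[i], F[i]). Gaussian phi; sigma is an assumed width."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(d / sigma) ** 2)
    return np.linalg.solve(Phi, F)        # exact fit at the examples

def rbf_eval(x, X, W, sigma=1.0):
    """Evaluate the interpolant at a novel input x."""
    d = np.linalg.norm(X - x, axis=-1)
    return np.exp(-(d / sigma) ** 2) @ W

# X: (n, d) viewpoints, F: (n, p) vectorized images. Evaluating at a novel
# viewpoint x synthesizes an image without explicit 3-D reconstruction.
```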
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2017-09-01
A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
Automatic Boosted Flood Mapping from Satellite Data
NASA Technical Reports Server (NTRS)
Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence
2016-01-01
Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on Adaboost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The Adaboost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms which do require human input. We evaluate Adaboost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
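Because AdaBoost's default weak learner in scikit-learn is a depth-1 decision stump, i.e., a single threshold on one feature, an ensemble of boosted thresholds can be sketched directly. Data, features, and labels below are placeholders, not the paper's training set.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder training pixels: rows are per-pixel feature vectors (e.g. MODIS
# band values and band ratios); labels mark water vs. non-water pixels.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = (X[:, 0] < 0.3).astype(int)

# The default weak learner is a depth-1 stump (one threshold on one feature),
# so the trained model is a weighted vote of simple thresholds.
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
flood_mask = clf.predict(X)      # apply to the pixels of a new MODIS image
```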
NASA Astrophysics Data System (ADS)
Kage, Andreas; Canto, Marcia; Gorospe, Emmanuel; Almario, Antonio; Münzenmayer, Christian
2010-03-01
In the near future, Computer Assisted Diagnosis (CAD), which is well known in the area of mammography, might be used to support clinical experts in the diagnosis of images derived from imaging modalities such as endoscopy. In the recent past, a few initial approaches to computer-assisted endoscopy have already been presented. These systems use as input the video signal provided by the endoscope's video processor. Despite the advent of high-definition systems, most standard endoscopy systems today still provide only analog video signals. These signals consist of interlaced images that cannot be used in a CAD approach without deinterlacing. There are many deinterlacing approaches known today, but most of them are specializations of a few basic approaches. In this paper we present four basic deinterlacing approaches. We used a database of non-interlaced images that were degraded by artificial interlacing and afterwards processed by these approaches. The database contains regions of interest (ROIs) of clinical relevance for the diagnosis of abnormalities in the esophagus. We compared the classification rates on these ROIs on the original images and after deinterlacing. The results show that deinterlacing has an impact on the classification rates. The bobbing approach and the motion compensation approach achieved the best classification results in most cases.
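Of the four basic approaches, bobbing is the simplest to sketch: keep one field and re-synthesize the missing scanlines by vertical interpolation. A minimal version follows; the averaging kernel is an assumption (simple bobs may also just duplicate lines).

```python
import numpy as np

def bob_deinterlace(frame, keep="even"):
    """Minimal 'bob' deinterlacing: keep one field (even or odd scanlines)
    and fill the other field by averaging the lines above and below.
    Works on grayscale (H, W) or color (H, W, 3) frames."""
    out = frame.astype(float).copy()
    start = 1 if keep == "even" else 0     # rows to re-synthesize
    for r in range(start, frame.shape[0], 2):
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r + 1 < frame.shape[0] else out[r - 1]
        out[r] = 0.5 * (above + below)
    return out.astype(frame.dtype)
```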
Facilitating Analysis of Multiple Partial Data Streams
NASA Technical Reports Server (NTRS)
Maimone, Mark W.; Liebersbach, Robert R.
2008-01-01
Robotic Operations Automation: Mechanisms, Imaging, Navigation report Generation (ROAMING) is a set of computer programs that facilitates and accelerates both tactical and strategic analysis of time-sampled data, especially the disparate and often incomplete streams of Mars Exploration Rover (MER) telemetry data described in the immediately preceding article. As used here, tactical refers to activities over a relatively short time (one Martian day in the original MER application) and strategic refers to a longer time (the entire multi-year MER missions in the original application). Prior to installation, ROAMING must be configured with the types of data of interest, and parsers must be modified to understand the format of the input data (many example parsers are provided, including for general CSV files). Thereafter, new data from multiple disparate sources are automatically resampled into a single common annotated spreadsheet stored in a readable space-separated format, and these data can be processed or plotted at any time scale. Such processing or plotting makes it possible to study not only the details of a particular activity spanning only a few seconds, but also longer-term trends. ROAMING makes it possible to generate mission-wide plots of multiple engineering quantities [e.g., vehicle tilt as in Figure 1(a), motor current, numbers of images] that heretofore could be found only in thousands of separate files. ROAMING also supports automatic annotation of both images and graphs. In the MER application, labels given to terrain features by rover scientists and engineers are automatically plotted in all received images based on their associated camera models (see Figure 2), times measured in seconds are mapped to Mars local time, and command names or arbitrary time-labeled events can be used to label engineering plots, as in Figure 1(b).
Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor frequently degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and multilinear regression analysis were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that input images were available within one month before and after the target date. The STARFM method gives good results when the input image date is close to the target date; careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
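A minimal sketch of the weighted average method for a single target date follows, assuming (as one plausible form; the abstract does not spell it out) weights inversely proportional to the temporal distance of each input image from the target date.

```python
import numpy as np

def simulate_ndvi(before, after, dt_before, dt_after):
    """Weighted-average simulation of an NDVI image at a target date from one
    image acquired before it and one after it. Inverse-distance-in-time
    weighting is an assumed form of the paper's method."""
    w_b = 1.0 / dt_before          # closer-in-time images weigh more
    w_a = 1.0 / dt_after
    return (w_b * before + w_a * after) / (w_b + w_a)

# e.g. usable scenes 10 and 20 days from the target date:
# ndvi_sim = simulate_ndvi(ndvi_t1, ndvi_t2, 10, 20)
```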
Real-time edge-enhanced optical correlator
NASA Technical Reports Server (NTRS)
Liu, Tsuen-Hsi (Inventor); Cheng, Li-Jen (Inventor)
1992-01-01
Edge enhancement of an input image by four-wave mixing a first write beam with a second write beam in a photorefractive crystal, GaAs, was achieved for VanderLugt optical correlation with an edge-enhanced reference image by optimizing the power ratio of the second write beam to the first write beam (70:1) and optimizing the power ratio of the read beam, which carries the reference image, to the first write beam (100:701). Liquid crystal TV panels are employed as spatial light modulators to change the input and reference images in real time.
NASA Astrophysics Data System (ADS)
Wang, Hai-Yan; Song, Chao; Sha, Min; Liu, Jun; Li, Li-Ping; Zhang, Zheng-Yong
2018-05-01
Raman spectra and ultraviolet-visible absorption spectra of four different geographic origins of Radix Astragali were collected. These data were analyzed using kernel principal component analysis combined with sparse representation classification. The results showed that the recognition rate reached 70.44% using Raman spectra for data input and 90.34% using ultraviolet-visible absorption spectra for data input. A new fusion method based on Raman combined with ultraviolet-visible data was investigated and the recognition rate was increased to 96.43%. The experimental results suggested that the proposed data fusion method effectively improved the utilization rate of the original data.
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, J.; Zhao, Z.
2018-04-01
The Polarimetric Synthetic Aperture Radar (POLSAR) imaging principle means that image quality is affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Since their introduction, deep convolutional neural networks have transformed traditional image processing and brought the field of computer vision to a new stage, owing to their strong ability to learn deep features and to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface cover classification using deep learning. We fused fully polarimetric SAR features of different scales into RGB images, iteratively trained the GoogLeNet convolutional neural network model on them, and then used the trained model to classify a validation dataset. First, referring to the optical image, we labeled the surface coverage types of the 8-m resolution GF-3 POLSAR image and collected samples for each category. To meet the GoogLeNet requirement of 256 × 256 pixel input images, and taking into account the limited SAR resolution, the original image was resampled during preprocessing. POLSAR image slice samples of different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained on the 2-m resampled polarimetric SAR images was 94.89%, and that of the model trained on the 1-m resampled images was 92.65%.
NASA Astrophysics Data System (ADS)
Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.
2009-08-01
Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, with a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.
Switching non-local vector median filter
NASA Astrophysics Data System (ADS)
Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji
2016-04-01
This paper describes a novel image filtering method that removes random-valued impulse noise superimposed on a natural color image. In impulse noise removal, it is essential to employ a switching-type filtering method, as used in the well-known switching median filter, to preserve the detail of the original image with good quality. In color image filtering, it is generally preferable to deal with the red (R), green (G), and blue (B) components of each pixel of a color image as elements of a vectorized signal, as in the well-known vector median filter, rather than as component-wise signals, to prevent a color shift after filtering. Taking these fundamentals into consideration, we propose a switching-type vector median filter with non-local processing that mainly consists of a noise detector and a noise removal filter. Concretely, we propose a noise detector that proactively detects noise-corrupted pixels by focusing attention on the isolation tendencies of pixels of interest, not in the input image but in difference images between the RGB components. Furthermore, as the noise removal filter, we propose an extended version of the non-local median filter that we proposed previously for grayscale image processing, named the non-local vector median filter, which is designed for color image processing. The proposed method realizes a superior balance between detail preservation and impulse noise removal through proactive noise detection and non-local switching vector median filtering, respectively. The effectiveness and validity of the proposed method are verified in a series of experiments using natural color images.
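The building block that the proposed switching, non-local variant refines is the plain vector median: treat each pixel's RGB triple as a vector and output the window sample minimizing the total distance to the others, which avoids the color shift of component-wise filtering. A brute-force sketch (not the paper's optimized filter):

```python
import numpy as np

def vector_median(window_pixels):
    """Vector median of a set of RGB vectors: the sample that minimizes the
    sum of Euclidean distances to all other samples in the window."""
    d = np.linalg.norm(window_pixels[:, None, :] - window_pixels[None, :, :],
                       axis=-1).sum(axis=1)
    return window_pixels[np.argmin(d)]

def vmf(image, radius=1):
    """Plain 3x3 vector median filter over an (H, W, 3) image; the paper's
    switching variant would apply this only at detected noisy pixels."""
    H, W, _ = image.shape
    out = image.copy()
    for r in range(radius, H - radius):
        for c in range(radius, W - radius):
            win = image[r - radius:r + radius + 1,
                        c - radius:c + radius + 1].reshape(-1, 3)
            out[r, c] = vector_median(win)
    return out
```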
Recall of patterns using binary and gray-scale autoassociative morphological memories
NASA Astrophysics Data System (ADS)
Sussner, Peter
2005-08-01
Morphological associative memories (MAM's) belong to a class of artificial neural networks that perform the operations erosion or dilation of mathematical morphology at each node. Therefore we speak of morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice induced matrix operations in the mathematical theory of minimax algebra. Neural models of associative memories are usually concerned with the storage and the retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMM's) such as optimal absolute storage capacity and one-step convergence have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMM's by analyzing their fixed points and basins of attraction. We have shown in particular that the fixed points of binary AMM's correspond to the lattice polynomials in the original patterns. This paper extends these results in the following ways. In the first place, we provide an exact characterization of the fixed points of gray-scale AMM's in terms of combinations of the original patterns. Secondly, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a certain input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
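The one-step recall claimed for AMMs can be made concrete with the min-memory W_XX and its max-plus recall product. The sketch below follows the standard Ritter-Sussner formulation (an assumption about notation, not code from the paper); gray-scale patterns are flattened vectors.

```python
import numpy as np

def train_W(patterns):
    """Min-memory W_XX of an autoassociative morphological memory:
    w_ij = min over stored patterns k of (x_i^k - x_j^k)."""
    X = np.asarray(patterns, dtype=float)        # shape (k, n)
    D = X[:, :, None] - X[:, None, :]            # per-pattern outer differences
    return D.min(axis=0)

def recall_W(W, x):
    """Max-plus recall: y_i = max_j (w_ij + x_j). Stored patterns are fixed
    points, and recall converges in one step; W_XX is the variant usually
    described as tolerant to erosive (value-decreasing) noise."""
    return (W + np.asarray(x, dtype=float)[None, :]).max(axis=1)

# patterns = [x1, x2, ...] as flattened gray-scale images;
# y = recall_W(train_W(patterns), noisy_x)
```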
Hsu, Wei-Feng; Lin, Shih-Chih
2018-01-01
This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGHs) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel and applying a temporal change to its amplitude. The modulated image function undergoes an inverse Fourier transform, followed by the imposition of a CGH constraint and a Fourier transform, to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function from before the pixel change is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed from the GS algorithm was 1.0223, but only 0.2537 when using PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
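A compact sketch of the loop as described: perturb one pixel's amplitude, propagate through the phase-only CGH constraint, and keep the change only if image quality improves. The perturbation size and the error metric below are assumptions, not the paper's exact choices.

```python
import numpy as np

def phio_sketch(target, iters=500, rng=np.random.default_rng(0)):
    """Pixelwise hybrid input-output (PHIO) style loop for a phase-only CGH
    in a Fourier system; returns the hologram phase and the final error."""
    g = target.astype(complex)                   # current image-plane function
    best_err = np.inf
    for _ in range(iters):
        trial = g.copy()
        r = rng.integers(target.shape[0])
        c = rng.integers(target.shape[1])
        trial[r, c] *= rng.uniform(0.5, 1.5)     # temporal amplitude change
        h = np.fft.ifft2(trial)
        h = np.exp(1j * np.angle(h))             # phase-only CGH constraint
        img = np.fft.fft2(h)
        err = np.mean((np.abs(img) - np.abs(target)) ** 2)
        if err < best_err:                       # adopt only improving changes
            best_err, g = err, img
    return np.angle(np.fft.ifft2(g)), best_err
```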
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Charles R.; Weck, Philippe F.; Vaughn, Palmer
Report RWEV-REP-001, Analysis of Postclosure Groundwater Impacts for a Geologic Repository for the Disposal of Spent Nuclear Fuel and High Level Radioactive Waste at Yucca Mountain, Nye County, Nevada was issued by the DOE in 2009 and is currently being updated. Sandia National Laboratories (SNL) provided support for the original document, performing calculations and extracting data from the Yucca Mountain Performance Assessment Model that were used as inputs to the contaminant transport and dose calculations by Jason Associates Corporation, the primary developers of the DOE report. The inputs from SNL were documented in LSA-AR-037, Inputs to Jason Associates Corporation in Support of the Postclosure Repository Supplemental Environmental Impact Statement. To support the updating of the original Groundwater Impacts document, SNL has reviewed the inputs provided in LSA-AR-037 to verify that they are current and appropriate for use. The results of that assessment are documented here.
Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite
NASA Astrophysics Data System (ADS)
Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi
2018-05-01
The LAPAN-IPB Satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. Radiometric image quality is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to characterize its spectral response and the radiometric response of every pixel. Pre-flight test data acquired with a variety of imager settings were used to examine the correlation between the radiance input and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model incorporating the imager settings and radiometric characteristics. The modeling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.
Neural network diagnosis of avascular necrosis from magnetic resonance images
NASA Astrophysics Data System (ADS)
Manduca, Armando; Christy, Paul S.; Ehman, Richard L.
1993-09-01
We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 × 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.
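A minimal stand-in for such a network can be sketched with scikit-learn's multi-layer perceptron, using an L-BFGS solver in place of the paper's conjugate-gradient training; the data, layer sizes, and downsampling assumption (image dimensions divisible by 32) are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def preprocess(img):
    """Block-average an image down to 32 x 32 and scale to [0, 1],
    mirroring the minimal preprocessing described."""
    H, W = img.shape
    small = img.reshape(32, H // 32, 32, W // 32).mean(axis=(1, 3))
    return (small / small.max()).ravel()

# Placeholder data: rows are flattened 32x32 images, labels are AVN / no-AVN.
rng = np.random.default_rng(0)
X = rng.random((40, 1024))
y = rng.integers(0, 2, 40)

net = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=2000).fit(X, y)   # L-BFGS standing in for CG
```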
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Detection and segmentation of multiple touching product inspection items
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David
1996-12-01
X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation, so we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching, input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and apply morphological processing to remove the shell and produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray image nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
[Discrimination of donkey meat by NIR and chemometrics].
Niu, Xiao-Ying; Shao, Li-Min; Dong, Fang; Zhao, Zhi-Lei; Zhu, Yan
2014-10-01
Donkey meat samples (n = 167) from different parts of the donkey body (neck, costalia, rump, and tendon), beef (n = 47), pork (n = 51), and mutton (n = 32) samples were used to establish near-infrared reflectance spectroscopy (NIR) classification models in the spectral range of 4,000~12,500 cm(-1). The accuracies of classification models constructed by Mahalanobis distance analysis, soft independent modeling of class analogy (SIMCA), and least squares-support vector machine (LS-SVM), each combined with pretreatments of Savitzky-Golay smoothing (5, 15, and 25 points) and derivatives (first and second), multiplicative scatter correction, and standard normal variate, were compared. The optimal models for intact samples were obtained by Mahalanobis distance analysis with the first 11 principal components (PCs) from the original spectra as inputs and by LS-SVM with the first 6 PCs as inputs, which correctly classified 100% of the calibration set and 98.96% of the prediction set. For minced samples of 7 mm diameter, the optimal result was attained by LS-SVM with the first 5 PCs from the original spectra as inputs, which gained an accuracy of 100% for calibration and 97.53% for prediction. For a minced diameter of 5 mm, a SIMCA model with the first 8 PCs from the original spectra as inputs correctly classified 100% of calibration and prediction. For a minced diameter of 3 mm, Mahalanobis distance analysis and SIMCA models both achieved 100% accuracy for calibration and prediction, with the first 7 and 9 PCs from the original spectra as inputs, respectively. In these models, donkey meat samples were all correctly classified with 100% accuracy in both calibration and prediction. The results show that it is feasible to use NIR with chemometric methods to discriminate donkey meat from other meats.
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447
Using virtualization to protect the proprietary material science applications in volunteer computing
NASA Astrophysics Data System (ADS)
Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.
2018-04-01
USPEX is a world-leading software for computational material design. In essence, USPEX splits simulation into a large number of workunits that can be processed independently. This scheme ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many of such packages are proprietary and should be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization. In a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password protected. The paper describes this approach in detail and discusses its application in USPEX@home volunteer project.
Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald
2007-05-01
(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling, due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
Wu, Chensheng; Ko, Jonathan; Rzasa, John R; Paulson, Daniel A; Davis, Christopher C
2018-03-20
We find that ideas in optical image encryption can be very useful for adaptive optics in achieving simultaneous phase and amplitude shaping of a laser beam. An adaptive optics system with simultaneous phase and amplitude shaping ability is very desirable for atmospheric turbulence compensation. Atmospheric turbulence-induced beam distortions can jeopardize the effectiveness of optical power delivery for directed-energy systems and optical information delivery for free-space optical communication systems. In this paper, a prototype adaptive optics system is proposed based on a famous image encryption structure. The major change is to replace the two random phase plates at the input plane and Fourier plane of the encryption system, respectively, with two deformable mirrors that perform on-demand phase modulations. A Gaussian beam is used as an input to replace the conventional image input. We show through theory, simulation, and experiments that the slightly modified image encryption system can be used to achieve arbitrary phase and amplitude beam shaping within the limits of stroke range and influence function of the deformable mirrors. In application, the proposed technique can be used to perform mode conversion between optical beams, generate structured light signals for imaging and scanning, and compensate atmospheric turbulence-induced phase and amplitude beam distortions.
Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K
2017-05-01
In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.
Distinguishing Representations as Origin and Representations as Input: Roles for Individual Neurons.
Edwards, Jonathan C W
2016-01-01
It is widely perceived that there is a problem in giving a naturalistic account of mental representation that deals adequately with the issue of meaning, interpretation, or significance (semantic content). It is suggested here that this problem may arise partly from the conflation of two vernacular senses of representation: representation-as-origin and representation-as-input. The flash of a neon sign may in one sense represent a popular drink, but to function as a representation it must provide an input to a 'consumer' in the street. The arguments presented draw on two principles - the neuron doctrine and the need for a venue for 'presentation' or 'reception' of a representation at a specified site, consistent with the locality principle. It is also argued that domains of representation cannot be defined by signal traffic, since they can be expected to include 'null' elements based on non-firing cells. In this analysis, mental representations-as-origin are distributed patterns of cell firing. Each firing cell is given semantic value in its own right - some form of atomic propositional significance - since different axonal branches may contribute to integration with different populations of signals at different downstream sites. Representations-as-input are patterns of local co-arrival of signals in the form of synaptic potentials in dendrites. Meaning then draws on the relationships between active and null inputs, forming 'scenarios' comprising a molecular combination of 'premises' from which a new output with atomic propositional significance is generated. In both types of representation, meaning, interpretation or significance pivots on events in an individual cell. (This analysis only applies to 'occurrent' representations based on current neural activity.) The concept of representations-as-input emphasizes the need for an internal 'consumer' of a representation and the dependence of meaning on the co-relationships involved in an input interaction between signals and consumer. The acceptance of this necessity provides a basis for resolving the problem that representations appear both as distributed (representation-as-origin) and as local (representation-as-input). The key implications are that representations in the brain are massively multiple both in series and in parallel, and that individual cells play specific semantic roles. These roles are discussed in relation to traditional concepts of 'gnostic' cell types.
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple imager array and align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficient length time windows.
Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.
Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan
2018-06-01
Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance, and development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variations make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted feature based classifiers built on popularly used classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics: Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease.
Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel, and hence they have inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have had only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 is very poor and 5 is the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to the input and processed images. The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference in the input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. GHE techniques can be used on low-contrast bone scan images. In some cases, a histogram equalization technique in combination with some other postprocessing technique is useful.
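GHE itself is a mapping of each pixel through the normalized cumulative histogram; a minimal NumPy sketch follows (bin count and intensity range handling are generic choices, not the study's exact implementation).

```python
import numpy as np

def global_histogram_equalization(img, levels=256):
    """Global histogram equalization via the normalized CDF: counts-limited
    pixel intensities are remapped to spread over the full dynamic range."""
    hist, bins = np.histogram(img.ravel(), bins=levels,
                              range=(img.min(), img.max() + 1))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    out = np.interp(img.ravel(), bins[:-1], cdf * (levels - 1))
    return out.reshape(img.shape)
```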
Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.
Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo
2017-10-01
Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are now obtained and manually processing massive numbers of images is becoming unrealistic, automatic analysis methods are required. One issue for automatic image segmentation is that image-acquisition conditions vary, so many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and so on. To reduce the manual input needed to segment the BMC, we classify texture patterns using wavelet transformation and a support vector machine. We also integrate the result of texture-pattern classification into graph-cuts-based image segmentation, because texture analysis alone does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material varies. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method, combining graph-cuts with texture-pattern classification, performs well without manual inputs by a user.
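A minimal sketch of the texture-classification stage, assuming PyWavelets and scikit-learn with stand-in patches and labels; the integration of the SVM scores into the graph-cuts data term is omitted:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_texture_features(patch, wavelet="db2", level=2):
    """Energy of each wavelet sub-band as a texture descriptor."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
    for detail in coeffs[1:]:                         # (cH, cV, cD) per level
        feats.extend(np.mean(d ** 2) for d in detail)
    return np.asarray(feats)

rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32))                   # stand-in image windows
labels = rng.integers(0, 2, 200)                      # stand-in BMC / non-BMC labels
X = np.stack([wavelet_texture_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
# The per-window SVM decisions would then feed the graph-cuts data term.
```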
Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-11-01
Tc-methylene diphosphonate (Tc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise-filtering method based on the local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of 40 sets of images (each set comprising the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians, who selected one good smooth image from each set. Image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between image quality processed with the 5- and 7-pixel masks at a 5% cut-off; a statistically significant difference was found at P=0.00528. The identified optimal mask size to produce a good smooth image was 7 pixels. The best mask size for the Jong-Sen Lee filter was thus found to be 7×7 pixels, which yielded Tc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
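The local-statistics filter tuned in this study is the Lee filter; a minimal sketch with a 7×7 mask, assuming NumPy/SciPy, where a crude global noise-variance estimate stands in for a calibrated one:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, size=7):
    """Local-statistics (Lee) smoothing: blend pixel and local mean by local SNR."""
    mean = uniform_filter(image, size)
    sq_mean = uniform_filter(image ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    noise_var = np.mean(var)                 # crude global noise estimate (assumption)
    gain = var / (var + noise_var + 1e-12)   # ~0 in flat regions, ~1 near edges
    return mean + gain * (image - mean)

counts = np.random.poisson(5.0, (256, 256)).astype(float)  # stand-in bone-scan counts
smoothed = lee_filter(counts, size=7)
```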
NASA Astrophysics Data System (ADS)
Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon
2004-05-01
Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and its similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no a priori information and a white-matter mask based on the segmentation of the first serial examination of each patient; MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques combine a priori maps from the ICBM atlas, spatially normalized to each patient and resliced using SPM99 software. The a priori maps were included as input, and a gradient-magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third utilized a 3-dimensional threshold. Kappa values against each observer were compared for the three techniques, and improvements were seen with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).
CCCT - NCTN Steering Committees - Clinical Imaging
The Clinical Imaging Steering Committee serves as a forum for the extramural imaging and oncology communities to provide strategic input to the NCI regarding its significant investment in imaging activities in clinical trials.
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input-function generation would be efficient and patient friendly, and would allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed before, but its perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high, and linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
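The factor-analysis step decomposes the dynamic data into a few temporally distinct structures. As a rough stand-in for that idea (not the authors' algorithm), a nonnegative matrix factorization of the voxel-by-frame matrix separates candidate blood and tissue curves, assuming scikit-learn and stand-in data:

```python
import numpy as np
from sklearn.decomposition import NMF

# dyn: 4-D dynamic PET volume reshaped to a (voxels x frames) nonnegative matrix
rng = np.random.default_rng(0)
dyn = rng.random((4000, 24))                  # stand-in dynamic data
model = NMF(n_components=3, init="nndsvda", max_iter=500)
weights = model.fit_transform(dyn)            # spatial factor images (flattened)
curves = model.components_                    # factor time-activity curves
blood_tac = curves[np.argmax(curves[:, 0])]   # earliest-peaking factor ~ input function (heuristic)
```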
Energy-Containing Length Scale at the Base of a Coronal Hole: New Observational Findings
NASA Astrophysics Data System (ADS)
Abramenko, V.; Dosch, A.; Zank, G. P.; Yurchyshyn, V.; Goode, P. R.
2012-12-01
The dynamics of photospheric flux tubes are thought to be a key factor in the generation and propagation of MHD waves and magnetic stress into the corona. Recently, New Solar Telescope (NST, Big Bear Solar Observatory) imaging observations in helium I 10830 Å revealed ultrafine, hot magnetic loops reaching from the photosphere to the corona and originating from intense, compact magnetic field elements. One of the essential input parameters for models of the fast solar wind is a characteristic energy-containing length scale, lambda, of the dynamical structures transverse to the mean magnetic field in a coronal hole (CH) at the base of the corona. We used NST time series of solar granulation motions to estimate the velocity fluctuations, as well as NST near-infrared magnetograms to derive the magnetic field fluctuations. The NST adaptive-optics-corrected, speckle-reconstructed images of 10-second cadence were input to a local correlation tracking (LCT) code to derive the squared transverse velocity patterns. We found that the characteristic length scale for the energy-carrying structures in the photosphere is about 300 km, which is two orders of magnitude lower than adopted in previous models. The influence of this result on coronal heating and fast solar wind modeling will be discussed. (Figure: correlation functions calculated from the squared velocities for three data sets: a coronal hole, quiet Sun, and an active-region plage area.)
Arevalillo-Herraez, Miguel; Cobos, Maximo; Garcia-Pineda, Miguel
2017-03-01
In this paper, we present an effective algorithm to reduce the number of wraps in a 2D phase signal provided as input. The technique is based on an accurate estimate of the fundamental frequency of a 2D complex signal whose phase is given by the input, and on the removal of the corresponding additive term from the phase map. Unlike existing methods based on the discrete Fourier transform (DFT), the frequency is computed using noise-robust estimates that are not restricted to integer values. Then, to deal with the problem of a non-integer shift in the frequency domain, an equivalent operation is carried out on the original phase signal: the subtraction of a tilted plane whose slope is computed from the frequency, followed by a re-wrapping operation. The technique has been exhaustively tested on fringe projection profilometry (FPP) and magnetic resonance imaging (MRI) signals, and the performance of several frequency estimation methods has been compared. The proposed methodology is particularly effective on FPP signals, showing higher performance than state-of-the-art wrap reduction approaches; in this context, it cancels the carrier effect and at the same time eliminates any potential slope that affects the entire signal. Its effectiveness on other carrier-free phase signals, e.g., MRI, is limited to cases in which inherent slopes are present in the phase data.
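A sketch of the core operation, assuming NumPy: estimate the dominant frequency of exp(j*phase), subtract the corresponding tilted plane, and rewrap. For brevity the peak is taken at integer FFT bins, whereas the paper's contribution is precisely a noise-robust, non-integer estimate:

```python
import numpy as np

def remove_tilt(wrapped):
    """Subtract the dominant linear phase ramp and rewrap to (-pi, pi]."""
    z = np.exp(1j * wrapped)                       # complex signal with the given phase
    spec = np.abs(np.fft.fft2(z))
    fy, fx = np.unravel_index(np.argmax(spec), spec.shape)  # coarse integer peak
    ny, nx = wrapped.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    plane = 2 * np.pi * (np.fft.fftfreq(ny)[fy] * yy + np.fft.fftfreq(nx)[fx] * xx)
    return np.angle(np.exp(1j * (wrapped - plane)))  # rewrapped residual phase

# Example: a wrapped ramp plus a small Gaussian bump loses most of its wraps.
ny = nx = 128
yy, xx = np.mgrid[0:ny, 0:nx]
true_phase = 0.4 * xx + 3 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 400)
wrapped = np.angle(np.exp(1j * true_phase))
residual = remove_tilt(wrapped)
```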
The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor
NASA Astrophysics Data System (ADS)
Benton, J. R.; Corbett, F.; Tuft, R.
1980-01-01
An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.
Image segmentation via foreground and background semantic descriptors
NASA Astrophysics Data System (ADS)
Yuan, Ding; Qiang, Jingjing; Yin, Jihao
2017-09-01
In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image using only low-level features, we present a segmentation framework in which high-level visual features, such as semantic information, are used. First, initial semantic labels were obtained using a nonparametric method. Then, a subset of the training images with a foreground similar to that of the input image was selected, and the semantic labels were further refined according to this subset. Finally, the input image was segmented by integrating the object affinity and the refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.
Overview of deep learning in medical imaging.
Suzuki, Kenji
2017-09-01
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what changed in machine learning before and after the introduction of deep learning, (2) what the source of the power of deep learning is, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly, without object segmentation or feature extraction; this is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML), including deep learning, has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Automatic dynamic range adjustment for ultrasound B-mode imaging.
Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo
2015-02-01
In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the signal to be displayed, and it is one of the most essential parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency. Furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value; however, it can lead to an over-contrasted image. In this paper, a new Automatic Dynamic Range Adjustment (ADRA) method is presented that adaptively adjusts the DR value so that input images are displayed similarly to a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed-DR case. In addition, the proposed ADRA method outperformed the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, while the CC value from the ADRA method increased only slightly (i.e., by 0.6%), the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user. Copyright © 2014 Elsevier B.V. All rights reserved.
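A minimal sketch of one reading of the ADRA idea, assuming NumPy: place the input's log average at the same relative position between display extremes as in the reference image; the exact mapping in the paper may differ, and the data are stand-ins:

```python
import numpy as np

def log_average(img, eps=1e-6):
    return np.exp(np.mean(np.log(img + eps)))

def adra_dynamic_range(reference, inp):
    """Shift the display extremes so the input's log average sits at the same
    relative position as in the reference (an assumed interpretation)."""
    L_ref = log_average(reference)
    low_ratio = (L_ref - reference.min()) / (reference.max() - reference.min())
    L_in = log_average(inp)
    span = inp.max() - inp.min()
    dr_min = L_in - low_ratio * span
    return dr_min, dr_min + span

rng = np.random.default_rng(0)
reference = rng.gamma(2.0, 10.0, (128, 128))   # stand-in envelope-detected data
inp = rng.gamma(2.5, 8.0, (128, 128))
dr_min, dr_max = adra_dynamic_range(reference, inp)
display = np.clip((inp - dr_min) / (dr_max - dr_min), 0, 1)
```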
Self-aligning and compressed autosophy video databases
NASA Astrophysics Data System (ADS)
Holtz, Klaus E.
1993-04-01
Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees,' without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage results in enormous data compression because each pattern fragment is stored only once. Pattern recognition in text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning: the input data determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image; one hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. To verify the method, 30 pairs of pre-diuretic and corresponding post-diuretic PET/CT images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic (input) image, 26 images were created at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1; these were visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected, and in the five control images no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors was 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT with a high probability of success and no false positives.
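A minimal sketch of the method as described, assuming NumPy and stand-in data: Poisson noise frames thresholded at a multiple of the mean and averaged:

```python
import numpy as np

def ssr_enhance(image, k=1.5, n_frames=100, seed=0):
    """Suprathreshold stochastic resonance: draw Poisson variates per pixel,
    threshold at k * mean counts, and average many binary frames."""
    rng = np.random.default_rng(seed)
    thresh = k * image.mean()
    frames = rng.poisson(image, size=(n_frames,) + image.shape) > thresh
    return frames.mean(axis=0)

pet_slice = np.random.poisson(8.0, (128, 128)).astype(float)  # stand-in PET counts
# Sweep k over 1.0..2.6 in steps of 0.1, as in the paper, and inspect visually.
outputs = {round(k, 1): ssr_enhance(pet_slice, k) for k in np.arange(1.0, 2.7, 0.1)}
```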
Resilience to the contralateral visual field bias as a window into object representations
Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.
2016-01-01
Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998
A novel background field removal method for MRI using projection onto dipole fields (PDF).
Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi
2011-11-01
For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
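A simplified sketch of the PDF idea, assuming NumPy/SciPy, unit voxels, and B0 along the first axis: fit exterior dipole sources to the measured field inside the ROI by least squares, then subtract the field they generate (the published method also weights by noise, which is omitted here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

def dipole_kernel(shape):
    """k-space dipole kernel D = 1/3 - kz^2/|k|^2 (B0 along axis 0)."""
    kz, ky, kx = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid dividing by zero at DC
    D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0
    return D

def pdf_local_field(field, roi, maxiter=30):
    D = dipole_kernel(field.shape)
    conv = lambda x: np.real(np.fft.ifftn(D * np.fft.fftn(x)))
    ext = ~roi                               # background sources live outside the ROI

    def matvec(chi_vec):
        chi = np.zeros(field.shape); chi[ext] = chi_vec
        return conv(chi)[roi]

    def rmatvec(res_vec):                    # D is real and even, so self-adjoint
        r = np.zeros(field.shape); r[roi] = res_vec
        return conv(r)[ext]

    A = LinearOperator((int(roi.sum()), int(ext.sum())),
                       matvec=matvec, rmatvec=rmatvec, dtype=float)
    chi_vec = lsmr(A, field[roi], maxiter=maxiter)[0]
    chi = np.zeros(field.shape); chi[ext] = chi_vec
    return (field - conv(chi)) * roi         # local field, valid inside the ROI

shape = (32, 32, 32)
roi = np.zeros(shape, bool); roi[8:24, 8:24, 8:24] = True
field = np.random.randn(*shape) * 0.01       # stand-in total field map
local = pdf_local_field(field, roi)
```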
SEM Imaging and Chemical Analysis of Aerosol Particles from Surface and Hi-altitudes in New Jersey.
NASA Astrophysics Data System (ADS)
Bandamede, M.; Boaggio, K.; Bancroft, L.; Hurler, K.; Magee, N. B.
2016-12-01
We report on scanning electron microscopy (SEM) analysis of aerosol particle morphology and chemistry. The work includes the first comparative SEM analysis of aerosol particles captured by balloon at high altitude. The particles were acquired in an urban/suburban environment in central New Jersey. Particles were sampled near the surface using ambient air filtration, and at high altitudes using a novel balloon-borne instrument (ICE-Ball, see abstract by K. Boaggio). Particle images and 3D geometry are acquired by a Hitachi SU-5000 SEM, with resolution to approximately 3 nm. Elemental analysis of particles is provided by Energy Dispersive X-ray Spectroscopy (EDS, EDAX, Inc.). Uncoated imaging is conducted at low vacuum within the variable-pressure SEM, which provides improved detection and analysis of light-element compositions, including carbon. Preliminary results suggest that some similar particle types and chemical species are sampled at both the surface and high altitude. However, as expected, particle morphologies, concentrations, chemistry, and apparent origin vary significantly at different altitudes and under different atmospheric flow regimes. Improved characterization of high-altitude aerosol particles, and of their differences from surface particulate composition, may advance inputs for atmospheric cloud and radiation models.
Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters
NASA Astrophysics Data System (ADS)
Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada
2011-01-01
We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm is designed to capitalize on the regularity of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, improves on the original Savitzky-Golay filter in two respects: first, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence; second, it incorporates a cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms, on a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation dataset in the mechanical linear scanning framework is also reported.
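The cyclic 3-D formulation is specific to the paper, but the underlying Savitzky-Golay smoothing step has a direct 1-D analogue along the slice axis, shown here with SciPy on stand-in data:

```python
import numpy as np
from scipy.signal import savgol_filter

# volume: stack of parallel, evenly spaced B-scan slices (slices x rows x cols)
volume = np.random.rand(40, 128, 128)
# Fit a local quadratic over a 7-slice window along the scanning direction --
# the 1-D analogue of the Savitzky-Golay smoothing step inside the CSG filter.
smoothed = savgol_filter(volume, window_length=7, polyorder=2, axis=0)
```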
Lee, Hansang; Hong, Helen; Kim, Junmo; Jung, Dae Chul
2018-04-01
To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CE CT) images. A dataset of 80 abdominal CT images from 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating small renal masses (SRM) into AMLwvf and ccRCC using a combination of hand-crafted and deep features and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from an ImageNet-pretrained deep learning model applied to the SRM image patches. In DF extraction, we proposed texture image patches (TIP) to emphasize the texture information inside the mass in the DFs and to reduce mass-size variability. Finally, the two feature sets were concatenated, and a random forest (RF) classifier was trained on the concatenated features to classify the types of SRMs. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). In the experiments, combinations of four deep learning models, AlexNet, VGGNet, GoogLeNet, and ResNet, and four input image patch types, including original, masked, mass-size, and texture image patches, were compared and analyzed. In the qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method. In the quantitative evaluation, we observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet generally showed the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patches but also steady performance regardless of CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for the proposed HCF + DF with AlexNet and TIPs, which improved accuracy by 6.6 and 8.3 percentage points compared to HCF-only and DF-only, respectively. The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of the features for differentiating AMLwvf from ccRCC in abdominal CE CT images. © 2018 American Association of Physicists in Medicine.
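A minimal sketch of the feature-concatenation and classification stage, assuming scikit-learn, with stand-in features and labels; extraction of the 71-D hand-crafted features and the CNN activations is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 80
hcf = rng.random((n, 71))        # stand-in hand-crafted texture/shape features
df = rng.random((n, 4096))       # stand-in deep features from a pretrained CNN
y = rng.integers(0, 2, n)        # 0 = AMLwvf, 1 = ccRCC (stand-in labels)

X = np.hstack([hcf, df])         # HCF + DF concatenation
scores = cross_val_score(RandomForestClassifier(n_estimators=200),
                         X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", scores.mean())
```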
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)
2010-01-01
An apparatus and method are provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view, such as an image captured by a fish-eye lens; a control input for receiving a signal for selecting a portion of the image; and a converter, responsive to the control input, for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.
Building accurate historic and future climate MEPDG input files for Louisiana DOTD.
DOT National Transportation Integrated Search
2017-02-01
The pavement design process (originally MEPDG, then DARWin-ME, and now Pavement ME Design) requires a multi-year set of hourly climate input data that influence pavement material properties. In Louisiana, the software provides nine locations with c...
Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
Purpose of the Study: 99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have limited number of counts per pixel, and hence, they have inferior image quality compared to X-rays. Theoretically, global histogram equalization (GHE) technique can improve the contrast of a given image though practical benefits of doing so have only limited acceptance. In this study, we have investigated the effect of GHE technique for 99mTc-MDP-bone scan images. Materials and Methods: A set of 89 low contrast 99mTc-MDP whole-body bone scan images were included in this study. These images were acquired with parallel hole collimation on Symbia E gamma camera. The images were then processed with histogram equalization technique. The image quality of input and processed images were reviewed by two nuclear medicine physicians on a 5-point scale where score of 1 is for very poor and 5 is for the best image quality. A statistical test was applied to find the significance of difference between the mean scores assigned to input and processed images. Results: This technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference in the input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per requirements of nuclear medicine physicians. Conclusion: GHE techniques can be used on low contrast bone scan images. In some of the cases, a histogram equalization technique in combination with some other postprocessing technique is useful. PMID:29142344
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with varying illuminations and resolutions, different perspectives, and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses these issues. The technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale-invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation using feature points detected by the Harris corner detector. The registration process first finds, in succession, tie-point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie-point pairs with large errors are pruned by an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by terrain fluctuation. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
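A sketch of the coarse (pre-registration) stage, assuming OpenCV with SIFT available: match keypoints, fit an affine model with RANSAC, and warp the input onto the reference; the fine-scale Harris/TIN stage is omitted:

```python
import cv2
import numpy as np

def coarse_register(input_img, reference_img):
    """Coarse alignment: SIFT matches + RANSAC-fitted affine model.
    Both inputs are single-channel uint8 arrays."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(input_img, None)
    k2, d2 = sift.detectAndCompute(reference_img, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = reference_img.shape
    return cv2.warpAffine(input_img, A, (w, h))
```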
PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.
Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi
2017-08-01
The process of medical image fusion combines two or more medical images, such as a Magnetic Resonance Image (MRI) and a Positron Emission Tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating diseases in the least time. We used an MRI and a PET image as inputs and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the Intensity Hue Saturation (IHS) method. Three common evaluation metrics were applied: Discrepancy (D_k), assessing spectral features; Average Gradient (AG_k), assessing spatial features; and Overall Performance (O.P), verifying the quality of the proposed method overall. Simulated and numerical results demonstrate the desired performance of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results for Average Gradient (AG_k), Discrepancy (D_k), and Overall Performance (O.P), together with the simulated results, indicate that our proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
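A minimal sketch of the IHS substitution step, assuming scikit-image and using HSV as a stand-in for the IHS space; the paper's 2-D Hilbert transform stage is omitted, and the images are stand-ins:

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_fuse(pet_rgb, mri_gray):
    """Replace the intensity channel of the (pseudo-color) PET image with the
    high-resolution MRI image; HSV is used here as a stand-in for IHS."""
    hsv = rgb2hsv(pet_rgb)
    hsv[..., 2] = mri_gray                  # swap in MRI as the intensity channel
    return hsv2rgb(hsv)

pet = np.random.rand(128, 128, 3)           # stand-in PET (functional, color)
mri = np.random.rand(128, 128)              # stand-in MRI (anatomical, gray)
fused = ihs_fuse(pet, mri)
```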
Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S
2008-01-01
Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method that dramatically reduces the time and effort required of expert users. This is accomplished by giving the user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. These brush strokes provide the statistical input for a Conditional Random Field (CRF)-based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously made by many other methods. A new feature of our method is that the statistics from one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem, which has a solution that is both globally optimal and fast. The combination of fast segmentation and minimal, reusable user input makes this a powerful technique for the segmentation of medical images.
Gómez-Nieto, Ricardo; Horta-Júnior, José de Anchieta C.; Castellano, Orlando; Millian-Morell, Lymarie; Rubio, Maria E.; López, Dolores E.
2014-01-01
The acoustic startle reflex (ASR) is a survival mechanism of alarm, which rapidly alerts the organism to a sudden loud auditory stimulus. In rats, the primary ASR circuit encompasses three serially connected structures: cochlear root neurons (CRNs), neurons in the caudal pontine reticular nucleus (PnC), and motoneurons in the medulla and spinal cord. It is well established that both CRNs and PnC neurons receive short-latency auditory inputs to mediate the ASR. Here, we investigated the anatomical origin and functional role of these inputs using a multidisciplinary approach that combines morphological, electrophysiological, and behavioral techniques. Anterograde tracer injections into the cochlea suggest that CRN somata and dendrites receive inputs depending, respectively, on their basal or apical cochlear origin. Confocal colocalization experiments demonstrated that these cochlear inputs are immunopositive for the vesicular glutamate transporter 1 (VGLUT1). Using extracellular recordings in vivo followed by subsequent tracer injections, we investigated the response of PnC neurons after contra-, ipsi-, and bilateral acoustic stimulation and identified the source of their auditory afferents. Our results showed that the binaural firing rate of PnC neurons was higher than the monaural, exhibiting higher spike discharges with contralateral than ipsilateral acoustic stimulation. Our histological analysis confirmed the CRNs as the principal source of short-latency acoustic inputs and indicated that other areas of the cochlear nucleus complex are not likely to innervate the PnC. Behaviorally, we observed a strong reduction of ASR amplitude in monaurally earplugged rats that corresponds with the binaural summation process shown in our electrophysiological findings. Our study contributes to a better understanding of the role of neuronal mechanisms in auditory alerting behaviors and provides strong evidence that the CRNs-PnC pathway mediates fast neurotransmission and binaural summation of the ASR. PMID:25120419
Development of a precision multimodal surgical navigation system for lung robotic segmentectomy.
Baste, Jean Marc; Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe
2018-04-01
Minimally invasive sublobar anatomical resection is becoming more and more popular to manage early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information, including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated using the DaVinci System™. Modeling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the Tile Pro multi-display input of the DaVinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed, and the surgeon now oversees a complex system that improves decision making.
Harvesting geographic features from heterogeneous raster maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi
2010-11-01
Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial dataset. The road vectorization and text recognition results outperform state-of-art commercial products, and with considerably less user input. The approach in this thesis allows us to make use of the geospatial information of heterogeneous maps locked in raster format.
NASA Technical Reports Server (NTRS)
Shulman, A. R. (Inventor)
1971-01-01
A method and apparatus are discussed for substantially eliminating noise in a coherent-energy imaging system, specifically in a light imaging system of the type having a coherent light source and at least one imaging lens disposed between an input signal plane and an output image plane. The input signal plane is illuminated with the light source while the lens is rotated about its optical axis. In this manner, the energy density of coherent-noise diffraction patterns produced by imperfections such as dust and/or bubbles on and/or in the lens is distributed over a ring-shaped area of the output image plane and reduced to the point where it can be ignored. The spatial filtering capability of the coherent imaging system is not affected by this noise-elimination technique.
Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images
NASA Astrophysics Data System (ADS)
Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow
2018-04-01
Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade image quality. To improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified multi-scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the amplification applied at regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges into the enhanced image. Results from experiments using low- and normal-illumination images show satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
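For reference, a classic multi-scale Retinex core, assuming NumPy/SciPy; the paper's logarithmic profile mapping and entropy-based weighting refinements are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250), eps=1e-6):
    """Classic MSR: average of log(image) - log(Gaussian surround) over scales."""
    img = image.astype(float) + eps
    msr = np.zeros_like(img)
    for s in sigmas:
        msr += np.log(img) - np.log(gaussian_filter(img, s) + eps)
    msr /= len(sigmas)
    # Stretch to [0, 1] for display.
    return (msr - msr.min()) / (msr.max() - msr.min() + eps)

dark = np.random.rand(256, 256) * 0.2        # stand-in low-illumination image
restored = multi_scale_retinex(dark)
```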
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
Color constancy using bright-neutral pixels
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Luo, Yupin
2014-03-01
An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and are utilized for illuminant estimation. To assess how well pixels represent the illuminant, a bright-neutral strength (BNS) measure is proposed that combines pixel chroma and brightness. A certain percentage of pixels with the largest BNS is then selected as the representative set. For every input image, a proper percentage value is determined via an iterative strategy that seeks the optimal color-corrected image. To compare the various color-corrected versions of an input image, an image color-cast degree (ICCD) is devised using the means and standard deviations of the RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
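A minimal sketch under an assumed BNS definition (brightness minus chroma, a hypothetical reading); the paper's iterative percentage selection via ICCD is omitted:

```python
import numpy as np

def estimate_illuminant(image, percent=5.0):
    """Average the pixels with the largest bright-neutral strength (BNS).
    BNS here is brightness minus chroma -- an assumed definition."""
    rgb = image.reshape(-1, 3).astype(float)
    brightness = rgb.mean(axis=1)
    chroma = rgb.max(axis=1) - rgb.min(axis=1)      # distance from neutral
    bns = brightness - chroma
    k = max(1, int(len(rgb) * percent / 100))
    top = rgb[np.argsort(bns)[-k:]]
    illum = top.mean(axis=0)
    return illum / np.linalg.norm(illum)

img = np.random.rand(128, 128, 3)                   # stand-in image
illum = estimate_illuminant(img)
# von Kries-style correction; a neutral illuminant (1,1,1)/sqrt(3) gives unit gains.
corrected = np.clip(img / (illum * np.sqrt(3)), 0, 1)
```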
MMX-I: data-processing software for multimodal X-ray imaging and tomography.
Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea
2016-05-01
A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption, and dark field, and any of their combinations, thus providing an easy-to-use data-processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of the large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims to treat all of the modalities of scanning multi-technique imaging and tomography experiments.
Information encoder/decoder using chaotic systems
Miller, Samuel Lee; Miller, William Michael; McWhorter, Paul Jackson
1997-01-01
The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals.
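A toy illustration in the spirit of the claim (not the patented system): the message perturbs the dynamics of a logistic map, and the receiver, knowing the map, inverts the dynamics to recover it:

```python
# Toy chaotic encoder/decoder: the message modifies the dynamics of a
# logistic map; the receiver runs the same map and inverts the update.
def encode(msg, x0=0.4, r=3.9, eps=0.01):
    x, out = x0, []
    for m in msg:
        x = r * x * (1 - x) + eps * m            # message perturbs the dynamics
        out.append(x)                            # chaotic-looking transmitted signal
    return out

def decode(cipher, x0=0.4, r=3.9, eps=0.01):
    x, msg = x0, []
    for c in cipher:
        msg.append((c - r * x * (1 - x)) / eps)  # invert the known dynamics
        x = c                                    # the cipher carries the state forward
    return msg

recovered = decode(encode([0.1, 0.7, 0.3]))      # ~[0.1, 0.7, 0.3]
```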
Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.
Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung
2018-04-01
In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different modalities of images. Recently, the neural network technique has been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method for multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network and update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. We also trained the network using the EBPA and GSA individually; the results reveal that the EBPGSA not only outperformed both EBPA and GSA, but also trained the neural network more accurately as measured by the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Staring 2-D Hadamard transform spectral imager
Gentry, Stephen M [Albuquerque, NM; Wehlburg, Christine M [Albuquerque, NM; Wehlburg, Joseph C [Albuquerque, NM; Smith, Mark W [Albuquerque, NM; Smith, Jody L [Albuquerque, NM
2006-02-07
A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. The image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
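A minimal sketch of S-matrix encoding and decoding, assuming NumPy/SciPy; a Sylvester-derived S-matrix stands in for the cyclic construction used in hardware:

```python
import numpy as np
from scipy.linalg import hadamard

def s_matrix(n_plus_1):
    """S-matrix from a normalized Hadamard matrix: drop the first row and
    column, then map +1 -> 0 and -1 -> 1 (open/closed mask elements)."""
    H = hadamard(n_plus_1)
    return ((1 - H[1:, 1:]) // 2).astype(float)

n = 63                                    # S-matrix order (n + 1 a power of 2 here)
S = s_matrix(n + 1)
scene = np.random.rand(n, 32)             # stand-in: encoded dimension x spectral axis
measured = S @ scene                      # multiplexed measurements along one dimension
recovered = np.linalg.solve(S, measured)  # Hadamard-transform decoding
assert np.allclose(recovered, scene)
```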
A manual for microcomputer image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, P.M.; Ranken, D.M.; George, J.S.
1989-12-01
This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, the use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup-table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.
A mathematical model of neuro-fuzzy approximation in image classification
NASA Astrophysics Data System (ADS)
Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.
2016-06-01
Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from large databases. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. This paper proposes a CBIR system based on grassland image properties that uses a feed-forward back-propagation neural network for effective image retrieval. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model in the proposed work clarifies the fuzzy-neuro approximation and the convergence of the image features in a grassland image.
UGS video target detection and discrimination
NASA Astrophysics Data System (ADS)
Roberts, G. Marlon; Fitzgerald, James; McCormack, Michael; Steadman, Robert; Vitale, Joseph D.
2007-04-01
This project focuses on developing electro-optic algorithms which rank images by their likelihood of containing vehicles and people. These algorithms have been applied to images obtained from Textron's Terrain Commander 2 (TC2) Unattended Ground Sensor system. The TC2 is a multi-sensor surveillance system used in military applications. It combines infrared, acoustic, seismic, magnetic, and electro-optic sensors to detect nearby targets. When targets are detected by the seismic and acoustic sensors, the system is triggered and images are taken in the visible and infrared spectrum. The original Terrain Commander system occasionally captured and transmitted an excessive number of images, sometimes triggered by undesirable targets such as swaying trees. This wasted communications bandwidth, increased power consumption, and resulted in a large amount of end-user time being spent evaluating unimportant images. The algorithms discussed here help alleviate these problems. These algorithms are currently optimized for infra-red images, which give the best visibility in a wide range of environments, but could be adapted to visible imagery as well. It is important that the algorithms be robust, with minimal dependency on user input. They should be effective when tracking varying numbers of targets of different sizes and orientations, despite the low resolutions of the images used. Most importantly, the algorithms must be appropriate for implementation on a low-power processor in real time. This would enable us to maintain frame rates of 2 Hz for effective surveillance operations. Throughout our project we have implemented several algorithms, and used an appropriate methodology to quantitatively compare their performance. They are discussed in this paper.
Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki
2017-03-01
We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolutional neural network (DCNN) and multi-spectral multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and the partial-volume boundaries between air and tagging and those between soft tissue and tagging. Each DCNN acts as a voxel classifier, with an image patch centered at the voxel as its input. An image patch has three channels that are mapped from a region of interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using the two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the outputs of multiple DCNNs, each of which was trained with a different type of multi-spectral image patch. The electronically cleansed CTC images were calculated by removal of the regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.
Second Generation Integrated Composite Analyzer (ICAN) Computer Code
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.
1993-01-01
This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.
Assessing the skeletal age from a hand radiograph: automating the Tanner-Whitehouse method
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; van Ginneken, Bram; Maas, Casper A.; Beek, Frederik J. A.; Viergever, Max A.
2003-05-01
The skeletal maturity of children is usually assessed from a standard radiograph of the left hand and wrist. An established clinical method to determine skeletal maturity is the Tanner-Whitehouse (TW2) method. This method divides the skeletal development into several stages (labelled A, B, ..., I). We are developing an automated system based on this method. In this work we focus on assigning a stage to one region of interest (ROI), the middle phalanx of the third finger. We classify each ROI as follows. A number of ROIs which have been assigned a certain stage by a radiologist are used to construct a mean image for that stage. For a new input ROI, landmarks are detected using an Active Shape Model. These are used to align the mean images with the input image. Subsequently, the correlation between each transformed mean stage image and the input is calculated. The input ROI can be assigned directly to the stage with the highest correlation, or the correlation values can be used as features in a classifier. The method was tested on 71 cases ranging from stage E to I. The ROI was staged correctly in 73.2% of all cases, and in 97.2% of the incorrectly staged cases the error was no more than one stage.
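The correlation-based staging step lends itself to a compact sketch. Assuming the ROI has already been aligned to each mean stage image (the Active Shape Model step is omitted), a hypothetical `assign_stage` helper could look like this:

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation between two aligned, same-size images
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def assign_stage(roi, mean_images):
    # mean_images: dict mapping stage label ('E'..'I') -> aligned mean stage image
    scores = {stage: ncc(roi, tmpl) for stage, tmpl in mean_images.items()}
    # direct assignment; alternatively the scores could feed a classifier
    return max(scores, key=scores.get), scores
```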
Implementation of a Wavefront-Sensing Algorithm
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce; Aronstein, David
2013-01-01
A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares (PLS) method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input, based on a radial basis function neural network (RBFNN), is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN are the extension terms of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared with those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
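A minimal sketch of the extended-input construction, assuming scikit-learn's PLSRegression and Gaussian RBF hidden units with hypothetical centers and width (the paper does not specify these details):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rbf_features(X, centers, gamma=1.0):
    # outputs of the RBF hidden-layer nodes for each sample
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_extended_pls(X, Y, centers, gamma=1.0, n_components=5):
    # extended input = original variables plus the RBF hidden-layer outputs
    X_ext = np.hstack([X, rbf_features(X, centers, gamma)])
    return PLSRegression(n_components=n_components).fit(X_ext, Y)
```

In practice the centers might come from k-means over the training spectra; the linear PLS step then operates on the nonlinear RBF expansion.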
Clinical utility of wavelet compression for resolution-enhanced chest radiography
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.
2000-05-01
This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K-matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs including subtle interstitial abnormalities and/or subtle pneumothoraces and normal cases, are in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve the originals.
NASA Astrophysics Data System (ADS)
Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.
2009-04-01
The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, within a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance, correctly recognizing the different objects within the cluttered scenes. We record in our results additional extracted information from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
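The log r-θ (log-polar) mapping at the heart of such filters converts scale changes and in-plane rotations of the input into simple shifts of the mapped image. A minimal NumPy sketch of the mapping follows; the nearest-neighbour sampling and the use of the image centre as origin are simplifying assumptions for illustration:

```python
import numpy as np

def log_r_theta_map(img, out_shape=(128, 128)):
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = np.hypot(cy, cx)
    rho = np.linspace(0.0, np.log(r_max), out_shape[0])        # log-radius axis
    theta = np.linspace(-np.pi, np.pi, out_shape[1], endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip((cy + np.exp(rr) * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + np.exp(rr) * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                                         # rows: log r, cols: θ
```

Under this mapping, scaling an object shifts the output along the log-r axis and rotating it shifts the output along the θ axis, which is what lets a shift-invariant correlation stage tolerate scale and rotation.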
Arterial input function derived from pairwise correlations between PET-image voxels.
Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea
2013-07-01
A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in the brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A noninvasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
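The voxel-selection idea reduces to scoring time-activity curves (TACs) by how many strongly correlated partners they have. A simplified sketch follows; the clustering, partial-volume, and metabolite corrections of the actual method are omitted, and the threshold is an assumption:

```python
import numpy as np

def blood_like_voxels(tacs, threshold=0.95):
    # tacs: (n_voxels, n_frames) array of time-activity curves
    z = (tacs - tacs.mean(1, keepdims=True)) / (tacs.std(1, keepdims=True) + 1e-12)
    corr = (z @ z.T) / tacs.shape[1]         # pairwise Pearson correlation matrix
    score = (corr > threshold).sum(axis=1)   # number of highly correlated partners
    return np.argsort(score)[::-1]           # most blood-like candidates first
```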
Image processing and recognition for biological images
Uchida, Seiichi
2013-01-01
This paper reviews image processing and pattern recognition techniques, which will be useful for analyzing bioimages. Although this paper does not provide their technical details, it should make it possible to grasp their main tasks and the typical tools for handling them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow, and image registration. Image pattern recognition is the technique of classifying an input image into one of several predefined classes, and also covers a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration to tackle such a difficult target. PMID:23560739
Auto and hetero-associative memory using a 2-D optical logic gate
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor)
1992-01-01
An optical system for auto-associative and hetero-associative recall utilizing Hamming distance as the similarity measure between a binary input image vector V(sup k) and a binary image vector V(sup m) in a first memory array using an optical Exclusive-OR gate for multiplication of each of a plurality of different binary image vectors in memory by the input image vector. After integrating the light of each product V(sup k) x V(sup m), a shortest Hamming distance detection electronics module determines which product has the lowest light intensity and emits a signal that activates a light emitting diode to illuminate a corresponding image vector in a second memory array for display. That corresponding image vector is identical to the memory image vector V(sup m) in the first memory array for auto-associative recall or related to it, such as by name, for hetero-associative recall.
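A digital analogue of this optical recall loop is compact enough to sketch; the XOR product stands in for the optical Exclusive-OR gate and the row sum for the integrated light intensity of each product (the helper name is hypothetical):

```python
import numpy as np

def recall(v_in, memory_a, memory_b):
    # memory_a: (n_images, n_pixels) binary image vectors V(sup m)
    # memory_b: second memory array (equal to memory_a for auto-association,
    #           related images for hetero-association)
    xor = np.logical_xor(memory_a, v_in)      # analogue of the optical XOR gate
    hamming = xor.sum(axis=1)                 # integrated light of each product image
    return memory_b[int(np.argmin(hamming))]  # shortest Hamming distance wins
```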
Development of a novel 2D color map for interactive segmentation of histological images.
Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D
2012-05-01
We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique, based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our results show good accuracy with low response and computational time, making the method feasible for user-interactive applications involving segmentation of histological images.
39 CFR 3010.12 - Contents of notice of rate adjustment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... information must be supported by workpapers in which all calculations are shown and all input values, including all relevant CPI-U values, are listed with citations to the original sources. (2) A schedule... input values, including current rates, new rates, and billing determinants, are listed with citations to...
3D shape recovery from image focus using gray level co-occurrence matrix
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid
2018-04-01
Recovering a precise and accurate 3-D shape of a target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the Gray Level Co-occurrence Matrix (GLCM), along with its statistical features, for computing the focus information of the image dataset. The GLCM quantifies the texture present in the image using statistical features derived from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is evaluated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
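For illustration, a per-pixel focus score can be computed from a GLCM over a small window with scikit-image; the window size, distances/angles, and the use of GLCM contrast as the focus proxy are assumptions here, and the paper's Gaussian Mixture Model step is omitted:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_focus(patch):
    # patch: 2-D uint8 window centered on the pixel being scored
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # sharper focus -> stronger local texture -> higher GLCM contrast
    return graycoprops(glcm, "contrast").mean()
```

Sweeping this score across the focal stack and taking, per pixel, the frame of maximum focus yields the depth map that the shape-from-focus step recovers.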
Integrated editing system for Japanese text and image information "Linernote"
NASA Astrophysics Data System (ADS)
Tanaka, Kazuto
The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system was developed on the concept of electronic publishing. It is composed of an NEC PC-9801VX personal computer and peripherals. Text, drawing, and image data are input and edited under the system's integrated operating environment, and the final text is printed by a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft-image indication method on the CRT. It is the latest DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.
Weber, A J; Stanford, L R
1994-05-15
It has long been known that a number of functionally different types of ganglion cells exist in the cat retina, and that each responds differently to visual stimulation. To determine whether the characteristic response properties of different retinal ganglion cell types might reflect differences in the number and distribution of their bipolar and amacrine cell inputs, we compared the percentages and distributions of the synaptic inputs from bipolar and amacrine cells to the entire dendritic arbors of physiologically characterized retinal X- and Y-cells. Sixty-two percent of the synaptic input to the Y-cell was from amacrine cell terminals, while the X-cells received approximately equal amounts of input from amacrine and bipolar cells. We found no significant difference in the distributions of bipolar or amacrine cell inputs to X- and Y-cells, or ON-center and OFF-center cells, either as a function of dendritic branch order or distance from the origin of the dendritic arbor. While, on the basis of these data, we cannot exclude the possibility that the difference in the proportion of bipolar and amacrine cell input contributes to the functional differences between X- and Y-cells, the magnitude of this difference, and the similarity in the distributions of the input from the two afferent cell types, suggest that mechanisms other than a simple predominance of input from amacrine or bipolar cells underlie the differences in their response properties. More likely, perhaps, is that the specific response features of X- and Y-cells originate in differences in the visual responses of the bipolar and amacrine cells that provide their input, or in the complex synaptic arrangements found among amacrine and bipolar cell terminals and the dendrites of specific types of retinal ganglion cells.
Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D
2012-07-01
An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system comprises an MRI scanner that can travel between and into HDR brachytherapy and external beam radiation therapy vaults. The system will provide online MR images immediately prior to radiation therapy; these MR images will be registered to a planning image and used for image guidance. To address system safety, we performed a failure modes and effects analysis (FMEA). A process tree of the facility function was developed. Using the process tree and an initial design of the facility as guidelines, possible failure modes were identified, and root causes were identified for each. Severity, detectability, and occurrence scores were assigned to each possible failure. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs; each main input comprises 5-10 sub-inputs, and tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.
Kunori, Nobuo; Takashima, Ichiro
2016-12-01
The motor cortex of rats contains two forelimb motor areas; the caudal forelimb area (CFA) and the rostral forelimb area (RFA). Although the RFA is thought to correspond to the premotor and/or supplementary motor cortices of primates, which are higher-order motor areas that receive somatosensory inputs, it is unknown whether the RFA of rats receives somatosensory inputs in the same manner. To investigate this issue, voltage-sensitive dye (VSD) imaging was used to assess the motor cortex in rats following a brief electrical stimulation of the forelimb. This procedure was followed by intracortical microstimulation (ICMS) mapping to identify the motor representations in the imaged cortex. The combined use of VSD imaging and ICMS revealed that both the CFA and RFA received excitatory synaptic inputs after forelimb stimulation. Further evaluation of the sensory input pathway to the RFA revealed that the forelimb-evoked RFA response was abolished either by the pharmacological inactivation of the CFA or a cortical transection between the CFA and RFA. These results suggest that forelimb-related sensory inputs would be transmitted to the RFA from the CFA via the cortico-cortical pathway. Thus, the present findings imply that sensory information processed in the RFA may be used for the generation of coordinated forelimb movements, which would be similar to the function of the higher-order motor cortex in primates. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Image segmentation algorithm based on improved PCNN
NASA Astrophysics Data System (ADS)
Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui
2017-11-01
A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article. Work has been done to enrich the simplified PCNN model, such as imposing restrictions on the input terms and improving the linking inputs and internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we implement an image segmentation algorithm for five pictures based on the proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
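A runnable sketch of a simplified PCNN iteration of this general form is given below; the linking kernel, coefficients, and stopping rule are illustrative assumptions, and the paper's self-adaptive parameter setting and PSO step are omitted:

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(img, beta=0.3, v_theta=20.0, alpha=0.2, n_iter=10):
    S = img.astype(float) / img.max()          # normalized feeding input
    k = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])            # linking weights to neighbours
    Y = np.zeros_like(S)
    theta = np.ones_like(S)
    fired = np.zeros(S.shape, dtype=bool)
    for _ in range(n_iter):
        L = convolve(Y, k, mode="constant")    # linking input from firing neighbours
        U = S * (1.0 + beta * L)               # internal activity
        Y = (U > theta).astype(float)          # pulse output
        theta = np.exp(-alpha) * theta + v_theta * Y  # decay plus refractory boost
        fired |= Y.astype(bool)
    return fired                               # pixels that pulsed = segmented region
```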
Multiclassifier fusion in human brain MR segmentation: modelling convergence.
Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander
2006-01-01
Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
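The fusion step itself can be sketched as a per-voxel majority vote over the propagated label volumes, which is the simplest of the fusion rules such studies consider:

```python
import numpy as np

def fuse_labels(label_volumes):
    # label_volumes: list of propagated atlas labellings of one target image
    stack = np.stack(label_volumes)           # (n_inputs, ...) integer labels
    labels = np.unique(stack)
    votes = np.stack([(stack == l).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]   # per-voxel majority vote
```

The model described in the abstract then predicts how accuracy and precision improve as `len(label_volumes)` grows.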
MMX-I: data-processing software for multimodal X-ray imaging and tomography
Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea
2016-01-01
A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field, and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims to treat all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159
Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control
NASA Astrophysics Data System (ADS)
Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José
2017-03-01
We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy
2017-03-01
In this paper, we propose an expansion of the convolutional neural network (CNN) input features based on the Hough transform. We perform morphological contrasting of the source image followed by the Hough transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected; morphological contrasting and the Hough transform are the only additional computational expenses of the introduced input-feature expansion. The proposed approach was demonstrated on a CNN with a very simple structure, on two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset of symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
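A sketch of how such an extra input channel might be produced with scikit-image follows; the gradient-based edge map standing in for morphological contrasting and the nearest-neighbour resize are simplifying assumptions:

```python
import numpy as np
from skimage.transform import hough_line

def hough_channel(img, out_shape):
    # crude stand-in for the morphological contrasting step
    edges = np.abs(np.gradient(img.astype(float))[0])
    h, theta, rho = hough_line(edges > edges.mean())
    h = h.astype(float) / (h.max() + 1e-9)        # normalized Hough accumulator
    # nearest-neighbour resize of the accumulator to the CNN input size
    ys = np.linspace(0, h.shape[0] - 1, out_shape[0]).astype(int)
    xs = np.linspace(0, h.shape[1] - 1, out_shape[1]).astype(int)
    return h[np.ix_(ys, xs)]                       # extra channel stacked with img
```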
Perception of 3-D location based on vision, touch, and extended touch
Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.
2012-01-01
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234
3D Clumped Cell Segmentation Using Curvature Based Seeded Watershed.
Atta-Fosu, Thomas; Guo, Weihong; Jeter, Dana; Mizutani, Claudia M; Stopczynski, Nathan; Sousa-Neves, Rui
2016-12-01
Image segmentation is an important process that separates objects from the background and from each other. Applied to cells, the results can be used for cell counting, which is very important in medical diagnosis, treatment, and biological research. Segmenting 3D confocal microscopy images containing cells of different shapes and sizes is still challenging because the nuclei are closely packed. The watershed transform provides an efficient tool for segmenting such nuclei, provided a reasonable set of markers can be found in the image. In the presence of low contrast variation or excessive noise in the given image, the watershed transform leads to over-segmentation (a single object is overly split into multiple objects). The traditional watershed uses the local minima of the input image and will characteristically find multiple minima in one object unless they are specified (marker-controlled watershed). An alternative to using the local minima is a supervised technique called seeded watershed, which supplies single seeds to replace the minima for the objects. Consequently, the accuracy of a seeded watershed algorithm relies on the accuracy of the predefined seeds. In this paper, we present a segmentation approach based on the geometric morphological properties of the 'landscape', computed via curvatures. The curvatures are computed as the eigenvalues of the shape matrix, producing accurate seeds that also inherit the original shapes of their respective cells. We compare with some popular approaches and show the advantage of the proposed method.
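A simplified sketch of a curvature-seeded watershed in 3D is shown below. The Hessian trace (the sum of the eigenvalues of the shape matrix) is used here as a cheap curvature proxy rather than the paper's full eigen-analysis, and the smoothing scale, threshold, and mask rule are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label
from skimage.segmentation import watershed

def curvature_seeded_watershed(volume, sigma=2.0, curv_thresh=0.0):
    sm = gaussian_filter(volume.astype(float), sigma)
    # curvature proxy: trace of the Hessian (sum of its eigenvalues)
    hess_trace = sum(np.gradient(np.gradient(sm, axis=a), axis=a)
                     for a in range(sm.ndim))
    seeds, _ = label(hess_trace < curv_thresh)   # convex bright cores of nuclei
    return watershed(-sm, markers=seeds, mask=volume > volume.mean())
```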
Autofocus algorithm for curvilinear SAR imaging
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2012-05-01
We describe an approach to autofocusing for large apertures on curved SAR trajectories. It is a phase-gradient-type method in which the phase corrections compensating trajectory perturbations are estimated not directly from the image itself, but rather on the basis of "partial" SAR data (functions of the slow and fast times) reconstructed, by an appropriate forward-projection procedure, from windowed scene patches of sizes comparable to the distances between distinct targets or localized features of the scene. The resulting "partial data" can be shown to contain the same information on the phase perturbations as the original data, provided the frequencies of the perturbations do not exceed a quantity proportional to the patch size. The algorithm uses as input a sequence of conventional scene images based on moderate-size subapertures constituting the full aperture for which the phase corrections are to be determined. The subaperture images are formed with pixel sizes comparable to the range resolution which, for the optimal subaperture size, should also be approximately equal to the cross-range resolution. The method does not restrict the size or shape of the synthetic aperture and can be incorporated in the data collection process in persistent sensing scenarios. The algorithm has been tested on the publicly available set of GOTCHA data, intentionally corrupted by random-walk-type trajectory fluctuations (a possible model of errors caused by imprecise inertial navigation system readings) of maximum frequencies compatible with the selected patch size. It was able to efficiently remove image corruption for apertures of sizes up to 360 degrees.
Deformable Image Registration based on Similarity-Steered CNN Regression.
Cao, Xiaohuan; Yang, Jianhua; Zhang, Jun; Nie, Dong; Kim, Min-Jeong; Wang, Qian; Shen, Dinggang
2017-09-01
Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.
A neural network approach to lung nodule segmentation
NASA Astrophysics Data System (ADS)
Hu, Yaoxiu; Menon, Prahlad G.
2016-03-01
Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has shown promise for the detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed method begins with thorax segmentation, lung extraction, and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale Hessian-based vesselness filter is applied to extract the vasculature in the lung. The vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules through shape and intensity features, which together are used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for the detection of lung nodules in our testing dataset, with an overall accuracy of 97.62%+/-0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristics for identifying nodules revealed an area under the curve of 0.9476.
Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation
Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang
2015-01-01
The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
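The multi-modality input idea amounts to stacking T1, T2, and FA patches as channels of one convolutional network. A minimal PyTorch sketch follows; the layer sizes are arbitrary placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiModalSeg(nn.Module):
    # minimal stand-in: 3 input channels (T1, T2, FA), per-voxel class scores out
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1))  # WM / GM / CSF scores

    def forward(self, x):          # x: (batch, 3, H, W) stacked modalities
        return self.net(x)

scores = MultiModalSeg()(torch.randn(1, 3, 64, 64))  # -> (1, 3, 64, 64)
```

The point of the channel stacking is that the first convolution already mixes information across modalities, which is where the reported gain from multi-modality integration comes from.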
The Origin of DIRT (Detrital Input and Removal Treatments): the Legacy of Dr. Francis D. Hole
NASA Astrophysics Data System (ADS)
Townsend, K. L.; Lajtha, K.; Caldwell, B.; Sollins, P.
2007-12-01
Soil organic matter (SOM) plays a key role in the cycling and retention of nitrogen and carbon within soil. Both above and belowground detrital inputs determine the nature and quantity of SOM. Studies on detrital impacts on SOM dynamics are underway at several LTER, ILTER and LTER-affiliated sites using a common experimental design, Detrital Input and Removal Treatments (DIRT). The concept for DIRT was originally based on experimental plots established at the University of Wisconsin Arboretum by Dr. Francis D. Hole in 1956 to study the effects of detrital inputs on pedogenesis. These plots are located on two forested sites and two prairie sites within the arboretum. Manipulations of the forested sites include double litter, no litter and removal of the O and A horizons. Manipulations of the prairie sites include harvest, mulch, bare and burn. These original treatments have largely been maintained since 1956. After 40 years of maintenance, there were significant differences in soil carbon between the double and no litter plots. The double litter plots had increased by nearly 30% while the no litter plots had decreased over 50%. The original DIRT plots are now 50 years old and have been re-sampled, where possible, for total carbon and nitrogen, labile and recalcitrant carbon fractions, net and gross nitrogen mineralization rates, and SOM bioavailability through CO2 respiration. The soils were fractionated by density to examine the role of carbon in each density fraction. The mean age of carbon in each fraction was determined by radiocarbon dating. This sampling and analysis is of special significance because it provides a glimpse into the future SOM trajectories for the new DIRT sites: Harvard Forest (MA), Bousson (PA), Andrews Experimental Forest (OR) and Sikfokut (Hungary).
Atlas-based automatic measurements of the morphology of the tibiofemoral joint
NASA Astrophysics Data System (ADS)
Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.
2017-03-01
Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.
Atlas-based automatic measurements of the morphology of the tibiofemoral joint.
Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W
2017-02-11
Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with expert radiologist without the need for time consuming and error prone manual selection of landmarks.
A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.
1989-01-01
Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartment kinetic model, the average difference between the model parameters estimated from the ID-IF and the AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of {sup 11}C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
Computerized tomography using video recorded fluoroscopic images
NASA Technical Reports Server (NTRS)
Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.
1975-01-01
A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.
Preliminary Data Pipeline for SunRISE: Assessing the Performance of Space Based Radio Arrays
NASA Astrophysics Data System (ADS)
Hegedus, A. M.; Kasper, J. C.; Lazio, J.; Amiri, N.; Stuart, J.
2017-12-01
The Sun Radio Interferometer Space Experiment (SunRISE) is a NASA Heliophysics Explorer Mission of Opportunity that was recently awarded phase A funding. SunRISE's main science goals are to localize the source of particle acceleration in coronal mass ejections to 1/4th of their width, and to trace the path of electron beams along magnetic field lines out to 20 solar radii. These processes generate cascading Type II and III bursts that have only ever been detected at low frequencies with single-spacecraft antennas. These bursts emit below the ionospheric cutoff of 10 MHz past 2 solar radii, so a synthetic aperture made from multiple space antennas is needed to pinpoint their origin. In this work, we create an end-to-end simulation of the data processing pipeline of SunRISE, which uses 6 small satellites to perform this localization. One of the main inputs of the simulation is a ground truth of what we want the array to image. We idealized this as an elliptical Gaussian offset from the Sun, which previous modeling suggests is a good approximation of what SunRISE would see in space. Another input is an orbit file describing the positions of all the spacecraft. The simulated orbit determinations are made with GPS sidelobes and have an error associated with the recovered positions. From there we compute the Fourier coefficients every antenna will see, then apply the correct phase lags and multiply each pair of coefficients to simulate the process of correlation. We compute the projected UVW coordinates and put these, along with the correlated visibilities, into a CASA MS file. The correlated visibilities are compared to CASA's simulated visibilities at the same UVW coordinates, verifying the accuracy of our method. The visibilities are then subjected to realistic thermal noise, as well as phase noise from uncertainties in the spacecraft positions. We employ CASA's CLEAN algorithm to image the data, and CASA's imfit algorithm to estimate the parameters of the imaged elliptical Gaussian, which we can compare directly to the input. We find that at the upper frequencies the phase noise can negatively affect the performance of the array, but for the large majority of the tracking range of interest, SunRISE can sufficiently resolve the radio bursts to fulfill its science requirements and constrain Solar Energetic Particle acceleration and transport.
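The "ground truth to Fourier coefficients" step has a closed form for the idealized source: the visibility of an offset elliptical Gaussian is itself a Gaussian envelope times a phase ramp. A hypothetical helper is sketched below (units assumed: u, v in wavelengths, offsets in radians; the noise and correlation steps are omitted):

```python
import numpy as np

def gaussian_visibility(u, v, amp, sig_x, sig_y, x0, y0):
    # Analytic visibility of an offset elliptical Gaussian sky model:
    # a Gaussian envelope set by the widths times a phase ramp set by the offset.
    envelope = 2 * np.pi * sig_x * sig_y * np.exp(
        -2 * np.pi ** 2 * ((sig_x * u) ** 2 + (sig_y * v) ** 2))
    return amp * envelope * np.exp(-2j * np.pi * (u * x0 + v * y0))
```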
Renal artery origins: best angiographic projection angles.
Verschuyl, E J; Kaatee, R; Beek, F J; Patel, N H; Fontaine, A B; Daly, C P; Coldwell, D M; Bush, W H; Mali, W P
1997-10-01
To determine the best projection angles for imaging the renal artery origins in profile. A mathematical model of the anatomy at the renal artery origins in the transverse plane was used to analyze the amount of aortic lumen that projects over the renal artery origins at various projection angles. Computed tomographic (CT) angiographic data on the location of 400 renal artery origins in 200 patients were statistically analyzed. In patients with an abdominal aortic diameter no larger than 3.0 cm, approximately 0.5 mm of the proximal part of the renal artery and its origin may be hidden from view if there is a projection error of +/-10 degrees from the ideal image. A combination of anteroposterior, 20-degree, and 40-degree left anterior oblique projections resulted in a 92% yield of images that adequately profiled the renal artery origins. Right anterior oblique projections resulted in the least useful images. An error in projection angle of +/-10 degrees is acceptable for angiographic imaging of the renal artery origins. Patient sex, site of interest (left or right artery), and local diameter of the abdominal aorta are important factors to consider.
Parallel Information Processing (Image Transmission Via Fiber Bundle and Multimode Fiber)
NASA Technical Reports Server (NTRS)
Kukhtarev, Nicholai
2003-01-01
Growing demand for visual, user-friendly representation of information inspires the search for new methods of image transmission. Currently used serial (sequential) methods of information processing are inherently slow and are designed mainly for transmission of one- or two-dimensional arrays of data. Conventional transmission of data by fibers requires many fibers with an array of laser diodes and photodetectors. In practice, fiber bundles are also used for transmission of images. An image is formed on the entrance surface of the fiber-optic bundle, and each fiber transmits the incident image to the exit surface. Since the fibers do not preserve phase, only a 2D intensity distribution can be transmitted in this way. Each single-mode fiber transmits only one pixel of an image. Multimode fibers may also be used, so that each mode represents a different pixel element. Direct transmission of an image through a multimode fiber is hindered by mode scrambling and phase randomization. To overcome these obstacles, wavelength- and time-division multiplexing have been used, with each pixel transmitted on a separate wavelength or time interval. Phase-conjugate techniques have also been tested, but only in an impractical scheme in which the reconstructed image returns to the fiber input end. Another method of three-dimensional imaging over single-mode fibers was demonstrated using laser light of reduced spatial coherence. Coherence encoding, needed for transmission of images by this method, was realized with a grating interferometer or with the help of an acousto-optic deflector. We suggest a simple, practical holographic method of image transmission over a single multimode fiber or over a fiber bundle with coherent light, using filtering by holographic optical elements. Originally this method was successfully tested for a single multimode fiber. In this research we have modified the holographic method for transmission of laser-illuminated images over a commercially available fiber bundle (fiber endoscope, or fiberscope).
Real-time feedback for spatiotemporal field stabilization in MR systems.
Duerst, Yolanda; Wilm, Bertram J; Dietrich, Benjamin E; Vannesjo, S Johanna; Barmet, Christoph; Schmid, Thomas; Brunner, David O; Pruessmann, Klaas P
2015-02-01
MR imaging and spectroscopy require a highly stable, uniform background field. The field stability is typically limited by hardware imperfections, external perturbations, or field fluctuations of physiological origin. The purpose of the present work is to address these issues by introducing spatiotemporal field stabilization based on real-time sensing and feedback control. An array of NMR field probes is used to sense the field evolution in a whole-body MR system concurrently with regular system operation. The field observations serve as inputs to a proportional-integral controller that governs correction currents in gradient and higher-order shim coils such as to keep the field stable in a volume of interest. The feedback system was successfully set up, currently reaching a minimum latency of 20 ms. Its utility is first demonstrated by countering thermal field drift during an EPI protocol. It is then used to address respiratory field fluctuations in a T2*-weighted brain exam, resulting in substantially improved image quality. Feedback field control is an effective means of eliminating dynamic field distortions in MR systems. Third-order spatial control at an update time of 100 ms has proven sufficient to largely eliminate thermal and breathing effects in brain imaging at 7 Tesla. © 2014 Wiley Periodicals, Inc.
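The feedback law described is a proportional-integral controller mapping observed field errors to correction currents. A minimal single-channel sketch, assuming a toy first-order plant; the real system runs multi-channel control over spherical-harmonic field terms, and the gains and coupling below are illustrative.

    def pi_controller(setpoint, kp, ki, dt):
        """Return a stateful PI step: measurement -> correction current."""
        integral = 0.0
        def step(measurement):
            nonlocal integral
            error = setpoint - measurement
            integral += error * dt
            return kp * error + ki * integral
        return step

    ctrl = pi_controller(setpoint=0.0, kp=0.5, ki=2.0, dt=0.1)  # 100 ms update
    field = 1.0  # initial drift, arbitrary units
    for _ in range(50):
        current = ctrl(field)
        field += 0.8 * current * 0.1  # toy plant: current couples into field
    print(f"residual field offset: {field:.4f}")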
NASA Technical Reports Server (NTRS)
Appleton, P. N.; Siqueira, P. R.; Basart, J. P.
1993-01-01
The presence of diffuse extended IR emission from the Galaxy in the form of the so-called 'Galactic Cirrus' emission has hampered the exploration of the extragalactic sky at long IR wavelengths. We describe the development of a filter based on mathematical morphology which appears to be a promising approach to the problem of cirrus removal. The method of greyscale morphology was applied to a 100 micron IRAS image of the M81 group of galaxies. This is an extragalactic field which suffers from serious contamination by foreground Galactic 'cirrus'. Using a technique called 'sieving', it was found that the cirrus emission has a characteristic behavior which can be quantified in terms of an average spatial structure spectrum, or growth function. This function was then used to attempt to remove 'cirrus' from the entire image. The result was a significant reduction of cirrus emission, by an intensity factor of 15 compared with the original input image. The method appears to preserve extended emission in the spatially extended IR disks of M81 and M82, as well as distinguishing fainter galaxies within bright regions of Galactic cirrus. The techniques may also be applicable to IR databases obtained with the Cosmic Background Explorer.
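The 'sieving' idea can be sketched with grey-scale morphological openings at increasing structuring-element sizes: the signal removed between successive scales forms a spatial structure spectrum. The scales and the random stand-in image below are assumptions, not the paper's IRAS data or fitted growth function.

    import numpy as np
    from scipy.ndimage import grey_opening

    image = np.random.rand(256, 256)   # stand-in for a 100 micron IRAS map
    scales = [3, 5, 9, 17]             # structuring-element sizes in pixels
    opened = [grey_opening(image, size=(s, s)) for s in scales]

    # Flux removed between successive scales: one band of the "sieve"
    bands = [opened[i] - opened[i + 1] for i in range(len(opened) - 1)]
    growth = [band.sum() for band in bands]   # crude growth function
    print(growth)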
Baca, A
1996-04-01
A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.
Image processing and recognition for biological images.
Uchida, Seiichi
2013-05-01
This paper reviews image processing and pattern recognition techniques, which will be useful for analyzing bioimages. Although the paper does not provide technical details, it allows the reader to grasp the main tasks and the typical tools for handling them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noise, deformations, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.
Image classification at low light levels
NASA Astrophysics Data System (ADS)
Wernick, Miles N.; Morris, G. Michael
1986-12-01
An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare the performance of the maximum-likelihood reference function with that of Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result, in milliseconds, from a sparse sampling of the input image.
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
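A schematic sketch of the claimed data flow, with XOR standing in for the cipher (the abstract specifies the previous frame as the key and the overwrite of the key's frame location, but not the particular encryption operation):

    import numpy as np

    def encrypt_frame(new_frame: np.ndarray, memory: dict) -> np.ndarray:
        key = memory["previous"]                 # previously recorded frame = key
        cipher = np.bitwise_xor(new_frame, key)  # stand-in encryption circuit
        memory["previous"] = cipher              # overwrite the key's location
        return cipher

    memory = {"previous": np.random.randint(0, 256, (480, 640), dtype=np.uint8)}
    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # sensor frame
    encrypted = encrypt_frame(frame, memory)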
Revolutions in energy input and material cycling in Earth history and human history
NASA Astrophysics Data System (ADS)
Lenton, Timothy M.; Pichler, Peter-Paul; Weisz, Helga
2016-04-01
Major revolutions in energy capture have occurred in both Earth and human history, with each transition resulting in higher energy input, altered material cycles and major consequences for the internal organization of the respective systems. In Earth history, we identify the origin of anoxygenic photosynthesis, the origin of oxygenic photosynthesis, and land colonization by eukaryotic photosynthesizers as step changes in free energy input to the biosphere. In human history we focus on the Palaeolithic use of fire, the Neolithic revolution to farming, and the Industrial revolution as step changes in free energy input to human societies. In each case we try to quantify the resulting increase in energy input, and discuss the consequences for material cycling and for biological and social organization. For most of human history, energy use by humans was but a tiny fraction of the overall energy input to the biosphere, as would be expected for any heterotrophic species. However, the industrial revolution gave humans the capacity to push energy inputs towards planetary scales and by the end of the 20th century human energy use had reached a magnitude comparable to the biosphere. By distinguishing world regions and income brackets we show the unequal distribution in energy and material use among contemporary humans. Looking ahead, a prospective sustainability revolution will require scaling up new renewable and decarbonized energy technologies and the development of much more efficient material recycling systems - thus creating a more autotrophic social metabolism. Such a transition must also anticipate a level of social organization that can implement the changes in energy input and material cycling without losing the large achievements in standard of living and individual liberation associated with industrial societies.
NEURAL NETWORK INTERACTIONS AND INGESTIVE BEHAVIOR CONTROL DURING ANOREXIA
Watts, Alan G.; Salter, Dawna S.; Neuner, Christina M.
2007-01-01
Many models have been proposed over the years to explain how motivated feeding behavior is controlled. One of the most compelling is based on the original concepts of Eliot Stellar whereby sets of interosensory and exterosensory inputs converge on a hypothalamic control network that can either stimulate or inhibit feeding. These inputs arise from information originating in the blood, the viscera, and the telencephalon. In this manner the relative strengths of the hypothalamic stimulatory and inhibitory networks at a particular time dictates how an animal feeds. Anorexia occurs when the balance within the networks consistently favors the restraint of feeding. This article discusses experimental evidence supporting a model whereby the increases in plasma osmolality that result from drinking hypertonic saline activate pathways projecting to neurons in the paraventricular nucleus of the hypothalamus (PVH) and lateral hypothalamic area (LHA). These neurons constitute the hypothalamic controller for ingestive behavior, and receive a set of afferent inputs from regions of the brain that process sensory information that is critical for different aspects of feeding. Important sets of inputs arise in the arcuate nucleus, the hindbrain, and in the telencephalon. Anorexia is generated in dehydrated animals by way of osmosensitive projections to the behavior control neurons in the PVH and LHA, rather than by actions on their afferent inputs. PMID:17531275
Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K
2017-01-01
Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."
Unique processing during a period of high excitation/inhibition balance in adult-born neurons.
Marín-Burgin, Antonia; Mongiat, Lucas A; Pardi, M Belén; Schinder, Alejandro F
2012-03-09
The adult dentate gyrus generates new granule cells (GCs) that develop over several weeks and integrate into the preexisting network. Although adult hippocampal neurogenesis has been implicated in learning and memory, the specific role of new GCs remains unclear. We examined whether immature adult-born neurons contribute to information encoding. By combining calcium imaging and electrophysiology in acute slices, we found that weak afferent activity recruits few mature GCs while activating a substantial proportion of the immature neurons. These different activation thresholds are dictated by an enhanced excitation/inhibition balance transiently expressed in immature GCs. Immature GCs exhibit low input specificity that switches with time toward a highly specific responsiveness. Therefore, activity patterns entering the dentate gyrus can undergo differential decoding by a heterogeneous population of GCs originated at different times.
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
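A hedged sketch of the idea: in Oja's original rule the decay term is proportional to the weight itself, which asymptotically constrains the L2-norm; replacing that term with the sign of the weight is one local modification that constrains the L1-norm instead and pushes small weights toward zero. The letter's exact update may differ in detail.

    import numpy as np

    def oja_step(w, x, eta=0.01):
        y = w @ x
        return w + eta * y * (x - y * w)            # original rule (L2)

    def oja_l1_step(w, x, eta=0.01):
        y = w @ x
        return w + eta * y * (x - y * np.sign(w))   # sign-based decay (L1)

    rng = np.random.default_rng(0)
    w = rng.normal(size=8)
    for _ in range(5000):
        w = oja_l1_step(w, rng.normal(size=8))
    print(np.abs(w).sum(), np.count_nonzero(np.abs(w) < 1e-3))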
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of the photorefractive crystal dynamics and its application for opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. Incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over the scheduled recording. Novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings by accessing the memory. Gratings are circulated through a memory feed back loop based on the incremental recording dynamics and demonstrate robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. Module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of the feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of the Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.
Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA
NASA Astrophysics Data System (ADS)
Staley, T. D.; Anderson, G. E.
2015-11-01
In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so-called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.
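The pexpect pattern that drive-casa and drive-ami build on can be sketched in a few lines: spawn an interactive child process on a pseudo-terminal, wait for its prompt, send a command, and capture the reply. The child here (a plain python3 interpreter) is an illustrative stand-in for CASA.

    import pexpect

    child = pexpect.spawn("python3", encoding="utf-8")
    child.expect(">>> ")              # wait for the interactive prompt
    child.sendline("print(6 * 7)")    # issue a command as a user would
    child.expect(">>> ")
    print(child.before.strip())       # echoed command plus its output ("42")
    child.sendline("exit()")
    child.close()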
Original data preprocessor for Femap/Nastran
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra
2016-12-01
Automatic data processing and visualization in finite element analysis of structural problems is a long-standing concern in mechanical engineering. The paper presents the 'common database' concept, according to which the same information may be accessed from an analytical model as well as from a numerical one. In this way, input data expressed as comma-separated-value (CSV) files are loaded into the Femap/Nastran environment using original API codes, which automatically generate the geometry of the model, the loads, and the constraints. The original API computer codes are general, making it possible to generate the input data of any model. In the next stages, the user may create the discretization of the model, set the boundary conditions, and perform a given analysis. If additional accuracy is needed, the analyst may delete the previous discretizations and, using the same automatically loaded information, perform other discretizations and analyses. Moreover, if new, more accurate information regarding the loads or constraints is acquired, it may be modelled and then implemented in the data-generating program which creates the 'common database'. This means that new, more accurate models may be easily generated. Another facility is the opportunity to control the CSV input files, so that several loading scenarios can be generated in Femap/Nastran. In this way, using original intelligent API instruments, the analyst can focus on accurately modelling the phenomena and on creative aspects, the repetitive and time-consuming activities being performed by the original computer-based instruments. With this data processing technique we apply, to best effect, Asimov's principle of 'minimum change required / maximum desired response'.
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method which uses particle swarm optimization (PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the Gaussian components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by transforming the lightness values in each interval to an appropriate output interval according to a transformation function that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared to traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss and gradation artifacts.
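A minimal scikit-learn sketch of the histogram-partition step; the PSO parameter optimization and chroma compensation are omitted, and the two-component mixture and stand-in lightness samples are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Stand-in L* values drawn from two populations
    L = np.concatenate([np.random.normal(30, 8, 4000),
                        np.random.normal(70, 10, 6000)])
    gmm = GaussianMixture(n_components=2, random_state=0).fit(L.reshape(-1, 1))

    # Intersection points = where the most probable component changes
    grid = np.linspace(0, 100, 1001).reshape(-1, 1)
    labels = gmm.predict(grid)
    cuts = grid[np.flatnonzero(np.diff(labels)) + 1].ravel()
    print(cuts)   # lightness values partitioning the histogram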
Fast single image dehazing based on image fusion
NASA Astrophysics Data System (ADS)
Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian
2015-01-01
Images captured in foggy weather conditions often fade the colors and reduce the contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts an assumption that the degradation level affected by haze of each region is the same, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method can allow a very fast implementation and achieve better restoration for visibility and color fidelity compared to some state-of-the-art methods.
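The first step, estimating the initial medium transmission from the dark channel prior, can be sketched as below; the patch size, omega, and atmospheric-light values are common defaults rather than the paper's, and the Gaussian-filtered coarse transmission and fusion stages are omitted.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        """img: HxWx3 floats in [0,1]; channel minimum then local minimum."""
        return minimum_filter(img.min(axis=2), size=patch)

    def initial_transmission(img, atmosphere, omega=0.95, patch=15):
        return 1.0 - omega * dark_channel(img / atmosphere, patch)

    img = np.random.rand(120, 160, 3)        # stand-in hazy image
    A = np.array([0.90, 0.90, 0.92])         # crude atmospheric light estimate
    t0 = initial_transmission(img, A)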
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image, with the help of cluster pixels, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel in an image so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in the fields of concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. At first we need to recognize the type of colour and texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained by using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier undergo a sophisticated algorithm to form the final image. The method produces a well-developed segmented image, with increased quality and faster processing of the segmented image compared with the other segmentation methods proposed earlier. One of the latest applications is the Light L16 camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzmin, G; Lee, C; Lee, C
Purpose: Recent advances in cancer treatments have greatly increased the likelihood of post-treatment patient survival. Secondary malignancies, however, have become a growing concern. Epidemiological studies determining secondary effects in radiotherapy patients require assessment of organ-specific dose both inside and outside the treatment field. An essential input for Monte Carlo modeling of particle transport is radiological images showing full patient anatomy. However, in retrospective studies it is typical to only have partial anatomy from CT scans used during treatment planning. In this study, we developed a multi-step method to extend such limited patient anatomy to full body anatomy for estimating dose to normal tissues located outside the CT scan coverage. Methods: The first step identified a phantom from a library of body size-dependent computational human phantoms by matching the height and weight of patients. Second, a Python algorithm matched the patient CT coverage location in relation to the whole body phantom. Third, an algorithm cut the whole body phantom and scaled it to match the size of the patient, then merged the two anatomies into one whole body. We entitled this new approach Anatomically Predictive Extension (APE). Results: The APE method was examined by comparing the original chest-abdomen-pelvis (CAP) CT images of five patients with the APE phantoms developed from only the chest part of the CAP images and whole body phantoms. We achieved percent differences of tissue volumes of 25.7%, 34.2%, 16.5%, 26.8%, and 31.6%, with an average of 27% across all patients. Conclusion: Our APE method extends limited CT patient anatomy to whole body anatomy by using image processing and computational human phantoms. Our ongoing work includes evaluating the accuracy of these APE phantoms by comparing normal tissue doses in the APE phantoms with doses calculated for the original full CAP images under generic radiotherapy simulations. This research was supported by the NIH Intramural Research Program.
NASA Astrophysics Data System (ADS)
Buisset, Christophe; Prasit, Apirat; Lépine, Thierry; Poshyajinda, Saran
2015-09-01
The first astronomical images obtained at the 2.4 m Thai National Telescope (TNT) during observations in bright moon conditions were contaminated by high levels of light scattered by the telescope structure. We identified that the origins of this scattered light were the M3 folding mirror baffle and the tube placed inside the fork between the M3 and the M4 mirrors. We thus decided to design and install a new baffle. In a first step, we calculated the optical and mechanical inputs needed to define the baffle optical design. These inputs were: the maximum length of the baffle, the maximum dimensions of the vanes and the incident beam diameter between M3 and M4 mirrors. In a second step, we defined the number, the position and the diameter of the vanes to remove the critical objects from the detector's FOV by using a targeted method. Then, we verified that the critical objects were moved away from the detector's view. In a third step, we designed and manufactured the baffle. The mechanical design is made of 21 sections (1 section for each vane) and comprises an innovative mechanism for the adjustment of the baffle position. The baffle installation and adjustment is performed in less than 20 minutes by 2 operators. In a fourth step, we installed and characterized the baffle by using a pinhole camera. We quantified the performance improvement and we identified the baffle areas at the origin of the residual stray light signal. Finally, we performed targeted on-sky observations to test the baffle in real conditions.
Zhou, Yizhou; Shaw, David; Lam, Cynthia; Tsukuda, Joni; Yim, Mandy; Tang, Danming; Louie, Salina; Laird, Michael W; Snedecor, Brad; Misaghi, Shahram
2017-09-23
Establishing that a cell line was derived from a single cell progenitor and defined as clonally-derived for the production of clinical and commercial therapeutic protein drugs has been the subject of increased emphasis in cell line development (CLD). Several regulatory agencies have expressed that the prospective probability of clonality for CHO cell lines is assumed to follow the Poisson distribution based on the input cell count. The probability of obtaining monoclonal progenitors based on the Poisson distribution of all cells suggests that one round of limiting dilution may not be sufficient to assure the resulting cell lines are clonally-derived. We experimentally analyzed clonal derivatives originating from single cell cloning (SCC) via one round of limiting dilution, following our standard legacy cell line development practice. Two cell populations with stably integrated DNA spacers were mixed and subjected to SCC via limiting dilution. Cells were cultured in the presence of selection agent, screened, and ranked based on product titer. Post-SCC, the growing cell lines were screened by PCR analysis for the presence of identifying spacers. We observed that the percentage of nonclonal populations was below 9%, which is considerably lower than the determined probability based on the Poisson distribution of all cells. These results were further confirmed using fluorescence imaging of clonal derivatives originating from SCC via limiting dilution of mixed cell populations expressing GFP or RFP. Our results demonstrate that in the presence of selection agent, the Poisson distribution of all cells clearly underestimates the probability of obtaining clonally-derived cell lines. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 2017. © 2017 American Institute of Chemical Engineers.
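The Poisson argument is easy to make concrete: with a mean seeding density of lam cells per well, the prospective probability that a growing well originated from a single cell is P(k = 1 | k >= 1). A minimal sketch, with illustrative seeding densities:

    import math

    def p_clonal(lam: float) -> float:
        """P(exactly one cell | at least one cell) under Poisson seeding."""
        return lam * math.exp(-lam) / (1.0 - math.exp(-lam))

    for lam in (0.25, 0.5, 1.0):
        print(f"lambda = {lam}: P(single-cell origin) = {p_clonal(lam):.3f}")

At lam = 0.5 this all-cell estimate gives roughly 0.77, i.e. about 23% nonclonal wells; the abstract's point is that once the selection agent removes most non-expressing cells, the observed nonclonal fraction (below 9%) is far smaller than this calculation suggests.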
Evaluating the Usefulness of High-Temporal Resolution Vegetation Indices to Identify Crop Types
NASA Astrophysics Data System (ADS)
Hilbert, K.; Lewis, D.; O'Hara, C. G.
2006-12-01
The National Aeronautics and Space Administration (NASA) and the United States Department of Agriculture (USDA) jointly sponsored research covering the 2004 to 2006 South American crop seasons that focused on developing methods for the USDA Foreign Agricultural Service's (FAS) Production Estimates and Crop Assessment Division (PECAD) to identify crop types using MODIS-derived, hyper-temporal Normalized Difference Vegetation Index (NDVI) images. NDVI images were composited in 8-day intervals from daily NDVI images and aggregated to create a hyper-temporal NDVI layerstack. This NDVI layerstack was used as input to image classification algorithms. Research results indicated that creating high-temporal resolution NDVI composites from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data products provides useful input to crop type classifications as well as potentially useful input for regional crop productivity modeling efforts. A current NASA-sponsored Rapid Prototyping Capability (RPC) experiment will assess the utility of simulated future Visible Infrared Imager/Radiometer Suite (VIIRS) imagery for conducting NDVI-derived land cover and specific crop type classifications. In the experiment, methods will be considered to refine current MODIS data streams, reduce the noise content of the MODIS data, and utilize the MODIS data as an input to the VIIRS simulation process. The effort is also being conducted in concert with an ISS project that will further evaluate, verify and validate the usefulness of specific data products to provide remote sensing-derived input for the Sinclair Model, a semi-mechanistic model for estimating crop yield. The study area encompasses a large portion of the Pampas region of Argentina--a major world producer of crops such as corn, soybeans, and wheat, which makes it a competitor to the US. ITD partnered with researchers at the Center for Surveying Agricultural and Natural Resources (CREAN) of the National University of Cordoba, Argentina, and CREAN personnel collected and continue to collect field-level, GIS-based in situ information. Current efforts involve both developing and optimizing software tools for the necessary data processing. The software includes the Time Series Product Tool (TSPT), Leica's ERDAS Imagine, and Mississippi State University's Temporal Map Algebra computational tools.
Towards integrated modelling: full image simulations for WEAVE
NASA Astrophysics Data System (ADS)
Dalton, Gavin; Ham, Sun Jeong; Trager, Scott; Abrams, Don Carlos; Bonifacio, Piercarlo; Aguerri, J. A. L.; Middleton, Kevin; Benn, Chris; Rogers, Kevin; Stuik, Remko; Carrasco, Esperanza; Vallenari, Antonella; Jin, Shoko; Lewis, Jim
2016-08-01
We present an integrated end-to-end simulation of the spectral images that will be obtained by the WEAVE spectrograph, which aims to include full modelling of all effects from the top of the atmosphere to the detector. These data are based on input spectra from a combination of library spectra and synthetic models, and will be used to provide inputs for an end-to-end test of the full WEAVE data pipeline and archive systems, prior to first light of the instrument.
Global Swath and Gridded Data Tiling
NASA Technical Reports Server (NTRS)
Thompson, Charles K.
2012-01-01
This software generates cylindrically projected "tiles" of swath-based or gridded satellite data for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
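The tile arithmetic for a cylindrically projected (equirectangular) global grid reduces to integer division; a sketch, with the tile size in degrees as a configurable assumption:

    def tile_index(lat: float, lon: float, tile_deg: float = 10.0):
        """Map a lat/lon in degrees to the (row, col) of its cylindrical tile."""
        row = int((90.0 - lat) // tile_deg)     # rows count down from the pole
        col = int((lon + 180.0) // tile_deg)    # cols count east from -180
        return row, col

    print(tile_index(34.2, -118.17))   # -> (5, 6) for 10-degree tiles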
Microlens array processor with programmable weight mask and direct optical input
NASA Astrophysics Data System (ADS)
Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen
1999-03-01
We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich- like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 X 15 spherical microlenses on acrylic substrate and a spatial light modulator as transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time allowing adaptive optical signal processing.
A Web Browsing System by Eye-gaze Input
NASA Astrophysics Data System (ADS)
Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru
We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of this platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by their function. The indicators are organized hierarchically; users can select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons and edit boxes, and stores them; in other words, the mouse cursor skips directly to candidate input objects. This enables web browsing at a faster pace.
Prewarping techniques in imaging: applications in nanotechnology and biotechnology
NASA Astrophysics Data System (ADS)
Poonawala, Amyn; Milanfar, Peyman
2005-03-01
In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate. Finally, we provide insight into two additional applications of pre-warping techniques. First is 'e-beam lithography', used for fabricating nano-scale structures, and second is 'electronic visual prosthesis' which aims at providing limited vision to the blind by using a prosthetic retinally implanted chip capable of electrically stimulating the retinal neuron cells.
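The cascade described for optical micro-lithography, convolution for aerial image formation followed by thresholding for the high-contrast recording, can be sketched directly; the Gaussian kernel and threshold are illustrative stand-ins for the real process model.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def print_pattern(mask, sigma=2.0, threshold=0.5):
        aerial = gaussian_filter(mask.astype(float), sigma)  # convolution stage
        return (aerial > threshold).astype(float)            # recording stage

    desired = np.zeros((64, 64))
    desired[24:40, 24:40] = 1.0                 # target binary circuit feature
    printed = print_pattern(desired)
    loss = np.square(printed - desired).sum()   # norm that prewarping minimizes
    print(loss)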
Violent Interaction Detection in Video Based on Deep Learning
NASA Astrophysics Data System (ADS)
Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin
2017-06-01
Violent interaction detection is of vital importance in some video surveillance scenarios such as railway stations, prisons or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features, such as statistical features between motion regions, leading to poor adaptability to other datasets. Enlightened by the development of convolutional networks for common activity recognition, we construct a FightNet to represent complicated visual violent interactions. In this paper, a new input modality, the image acceleration field, is proposed to better extract motion attributes. Firstly, each video is framed as RGB images. Secondly, the optical flow field is computed using consecutive frames, and the acceleration field is obtained from the optical flow field. Thirdly, FightNet is trained with three kinds of input modalities, i.e., RGB images for the spatial network, and optical flow images and acceleration images for the temporal networks. By fusing results from the different inputs, we conclude whether a video contains a violent event or not. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID) containing 2314 videos, with 1077 fight ones and 1237 non-fight ones. By comparison with other algorithms, experimental results demonstrate that the proposed model for violent interaction detection shows higher accuracy and better robustness.
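A sketch of the proposed input modality with OpenCV: dense optical flow between consecutive frames, with the acceleration field taken as the difference of successive flow fields. The paper's exact construction and encoding of acceleration images may differ.

    import cv2
    import numpy as np

    def flow(prev, curr):
        return cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

    # Three consecutive grayscale frames (random stand-ins)
    f0, f1, f2 = (np.random.randint(0, 256, (128, 128), np.uint8)
                  for _ in range(3))
    flow01 = flow(f0, f1)
    flow12 = flow(f1, f2)
    acceleration = flow12 - flow01   # HxWx2 acceleration field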
Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2017-01-01
Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region, limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIF-ICA) and that obtained by an invasive method (AIF-CAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor-bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p = 0.12). The AIGA method, enabled by the subject-specific AIF-ICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671
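The ICA ingredient can be sketched with scikit-learn: decompose the dynamic image series into temporal components and select the arterial-like one. The shapes and the selection step are illustrative; the AIGA pipeline additionally involves the targeted-untargeted dye pair and the calibration steps described above.

    import numpy as np
    from sklearn.decomposition import FastICA

    frames = np.random.rand(60, 32 * 32)     # T time frames x voxels (stand-in)
    ica = FastICA(n_components=3, random_state=0)
    sources = ica.fit_transform(frames)      # T x 3 temporal components

    # In practice the arterial component is identified by its early,
    # sharply peaked kinetics; here we simply take the first component.
    aif_candidate = sources[:, 0]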
Joint image restoration and location in visual navigation system
NASA Astrophysics Data System (ADS)
Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie
2018-02-01
Image location methods are the key technologies of visual navigation, most previous image location methods simply assume the ideal inputs without taking into account the real-world degradations (e.g. low resolution and blur). In view of such degradations, the conventional image location methods first perform image restoration and then match the restored image on the reference image. However, the defective output of the image restoration can affect the result of localization, by dealing with the restoration and location separately. In this paper, we present a joint image restoration and location (JRL) method, which utilizes the sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparest representation, our method can achieve simultaneous restoration and location. Based on such a sparse representation prior, we demonstrate that the image restoration task and the location task can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out and our joint model outperforms the conventional methods of treating the two tasks independently.
Weather Correlations to Calculate Infiltration Rates for U. S. Commercial Building Energy Models.
Ng, Lisa C; Quiles, Nelson Ojeda; Dols, W Stuart; Emmerich, Steven J
2018-01-01
As building envelope performance improves, a greater percentage of building energy loss will occur through envelope leakage. Although the energy impacts of infiltration on building energy use can be significant, current energy simulation software has limited ability to accurately account for envelope infiltration and the impacts of improved airtightness. This paper extends previous work by the National Institute of Standards and Technology that developed a set of EnergyPlus inputs for modeling infiltration in several commercial reference buildings using Chicago weather. The current work includes cities in seven additional climate zones and uses the updated versions of the prototype commercial building types developed by the Pacific Northwest National Laboratory for the U. S. Department of Energy. Comparisons were made between the predicted infiltration rates using three representations of the commercial building types: PNNL EnergyPlus models, CONTAM models, and EnergyPlus models using the infiltration inputs developed in this paper. The newly developed infiltration inputs in EnergyPlus yielded average annual increases of 3 % and 8 % in HVAC electricity and gas use, respectively, over the original infiltration inputs in the PNNL EnergyPlus models. When analyzing the benefits of building envelope airtightening, greater HVAC energy savings were predicted using the newly developed infiltration inputs in EnergyPlus than using the original infiltration inputs. These results indicate that the effects of infiltration on HVAC energy use can be significant and that infiltration can and should be better accounted for in whole-building energy models.
Slyter, Leonard L.
1975-01-01
An artificial rumen continuous culture with pH control, automated input of water-soluble and water-insoluble substrates, controlled mixing of contents, and a collection system for gas is described. PMID:16350029
Effects of empty bins on image upscaling in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2017-07-01
This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme was developed to distinguish between necessary and unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods were used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods were quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects with the developed method than with the other contrast enhancement methods mentioned.
Medical Image Intensifier In 1980 (What Really Happened)
NASA Astrophysics Data System (ADS)
Balter, Stephen; Kuhl, Walter
1980-08-01
In 1972, at the first SPIE seminar covering the application of optical instrumentation in medicine, Balter and Stanton presented a paper forecasting the status of x-ray image intensifiers in the year 1980. Now, eight years later, it is 1980, and it seems a good idea to evaluate these forecasts in the light of what has actually happened. The x-ray sensitive image intensifier tube (with cesium iodide as an input phosphor) is used nearly universally. Input screen sizes range from 15 cm to 36 cm in diameter. Real time monitoring of both fluoroscopic and fluorographic examinations is generally performed via closed circuit television. Archival recording of images is carried out using cameras with film formats of approximately 100 mm for single exposure or serial fluorography and 35 mm for cine fluorography. With the detective quantum efficiency of image intensifier tubes remaining near 50% throughout the decade, the noise content of most fluorographic and fluoroscopic images is still determined by the input exposure. Consequently, patient doses today, in 1980, have not substantially changed in the last ten years. There is, however, interest in uncoupling the x-ray dose and the image brightness by providing a variable optical diaphragm between the output of the image intensifier tube and the recording devices. During the past eight years, there has been a major philosophical change in the approach to imaging systems. It is now realized that medical image quality is much more dependent on the reduction of large area contrast losses than on the limiting resolution of the imaging system. It has also been clear that much diagnostic information is carried by spatial frequencies in the neighborhood of one line pair per millimeter (referred to the patient). The design of modern image intensifiers has been directed toward improvement in the large area contrast by minimizing x-ray and optical scatter in both the image intensifier tube and its associated components.
Tone mapping infrared images using conditional filtering-based multi-scale retinex
NASA Astrophysics Data System (ADS)
Luo, Haibo; Xu, Lingyun; Hui, Bin; Chang, Zheng
2015-10-01
Tone mapping can be used to compress the dynamic range of image data such that it fits within the range of the reproduction media and human vision. The original infrared images captured with infrared focal plane arrays (IFPA) are high-dynamic-range images, so tone mapping infrared images is an important component of infrared imaging systems, and it has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. Firstly, a Conditional Gaussian Filter (CGF) was designed to suppress the "halo" effect. Secondly, the original infrared image is decomposed into a set of images that represent the mean of the image at different spatial resolutions by applying CGFs of different scales. Then, a set of images that represent the multi-scale details of the original image is produced by dividing the original image pointwise by the decomposed images. Thirdly, the final detail image is reconstructed by a weighted sum of the multi-scale detail images. Finally, histogram scaling and clipping are adopted to remove outliers and scale the detail image; 0.1% of the pixels are clipped at both extremities of the histogram. Experimental results show that the proposed algorithm efficiently increases local contrast while preventing the "halo" effect and provides a good rendition of visual effects.
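A sketch of the decomposition and clipping chain, with a plain Gaussian standing in for the paper's halo-suppressing Conditional Gaussian Filter; the scales, weights, and stand-in frame are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_detail(img, sigmas=(4, 16, 64), weights=(1/3, 1/3, 1/3)):
        img = img.astype(float) + 1e-6                  # avoid division by zero
        details = [img / gaussian_filter(img, s) for s in sigmas]  # pointwise ratio
        detail = sum(w * d for w, d in zip(weights, details))      # weighted sum
        lo, hi = np.percentile(detail, (0.1, 99.9))     # clip 0.1% at both ends
        return np.clip(detail, lo, hi)

    ir = np.random.rand(240, 320) * 16383    # stand-in 14-bit IR frame
    mapped = multiscale_detail(ir)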
Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.
Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz
2014-04-21
We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our innovative data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Upon frame subtraction, an input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on empirical mode decomposition for object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating the high contrast image using the two-dimensional spiral Hilbert transform. Our algorithm's effectiveness is compared with results calculated for the same input data using structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.
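The two-shot demodulation chain can be sketched as below, skipping the empirical-mode-decomposition stage: subtract the π-shifted frames, apply a spiral phase factor in the Fourier domain (the 2-D spiral Hilbert transform), and take the amplitude envelope as the sectioned image.

    import numpy as np

    def spiral_hilbert_section(i0, i1):
        diff = i0.astype(float) - i1.astype(float)      # doubles grid modulation
        F = np.fft.fftshift(np.fft.fft2(diff))
        ny, nx = diff.shape
        v, u = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2].astype(float)
        spiral = np.exp(1j * np.arctan2(v, u))          # spiral phase function
        q = np.fft.ifft2(np.fft.ifftshift(F * spiral))
        return np.sqrt(diff ** 2 + np.abs(q) ** 2)      # AM envelope

    i0 = np.random.rand(256, 256)   # grid-illuminated frame, phase 0
    i1 = np.random.rand(256, 256)   # grid-illuminated frame, phase pi
    section = spiral_hilbert_section(i0, i1)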
Constraints in distortion-invariant target recognition system simulation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Razzaque, Md A.
2000-11-01
Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and identification of the objects using the LVQ NN. In the current paper, we extend the previous approach to recognition of targets varying in rotation, translation, scale, and combinations of all three distortions. We obtain analytical results for the system-level design, showing that the approach performs well under some constraints. The first constraint determines the size of the input images and input filters. The second constraint sets limits on the amount of rotation, translation, and scale of the input objects. We present simulation verification of the constraints using DARPA's Moving and Stationary Target Acquisition and Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system-level design.
SU-E-J-48: Imaging Origin-Radiation Isocenter Coincidence for Linac-Based SRS with Novalis Tx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geraghty, C; Workie, D; Hasson, B
Purpose: To implement and evaluate an image-based Winston-Lutz (WL) test to measure the displacement between the ExacTrac imaging origin and radiation isocenter on a Novalis Tx system using RIT V6.2 software analysis tools. Displacement between imaging and radiation isocenters was tracked over time. The method was applied to cone-based and MLC-based WL tests. Methods: The Brainlab Winston-Lutz phantom was aligned to the room lasers. The ExacTrac imaging system was then used to detect the Winston-Lutz phantom and obtain the displacement between the center of the phantom and the imaging origin. EPID images of the phantom were obtained at various gantry and couch angles and analyzed with RIT to calculate the displacement between the phantom center and the radiation isocenter. The RIT and ExacTrac displacements were combined to calculate the displacement between imaging origin and radiation isocenter. Results were tracked over time. Results: Mean displacements between the ExacTrac origin and radiation isocenter were: VRT: −0.1 mm ± 0.3 mm, LNG: 0.5 mm ± 0.2 mm, LAT: 0.2 mm ± 0.2 mm (vector magnitude of 0.7 ± 0.2 mm). The radiation isocenter was characterized by the mean of the standard deviations of the WL phantom displacements: σVRT: 0.2 mm, σLNG: 0.4 mm, σLAT: 0.6 mm. The linac couch base was serviced to reduce couch walkout, which reduced σLAT to 0.2 mm. These measurements established a new baseline of radiation isocenter-imaging origin coincidence. Conclusion: The image-based WL test has ensured submillimeter localization accuracy using the ExacTrac imaging system. Standard deviations of ExacTrac-radiation isocenter displacements indicate that average agreement within 0.3 mm is possible in each axis. This WL test departs from the traditional WL test in that imaging origin/radiation isocenter agreement, not lasers/radiation isocenter agreement, is the end goal.
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs. post-contrast T1, and (3) identify pre- vs. post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
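A minimal 3D CNN contrast classifier of the kind described could look like the PyTorch sketch below; the paper does not specify this architecture, so all layer sizes and names are illustrative assumptions.

```python
# Illustrative 3-D CNN for MR contrast classification; architecture details
# below are our assumptions, not the paper's exact network.
import torch
import torch.nn as nn

class ContrastNet(nn.Module):
    def __init__(self, n_classes=3):          # e.g. T1-w / T2-w / FLAIR
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),           # collapse to one feature vector
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                      # x: (batch, 1, D, H, W) volume
        return self.classifier(self.features(x).flatten(1))

logits = ContrastNet()(torch.randn(2, 1, 32, 64, 64))  # two dummy volumes
```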
Pattern-Recognition Processor Using Holographic Photopolymer
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Cammack, Kevin
2006-01-01
The proposed joint-transform optical correlator (JTOC) would be capable of operating as a real-time pattern-recognition processor. The key correlation-filter reading/writing medium of this JTOC would be an updateable holographic photopolymer. The high-resolution, high-speed characteristics of this photopolymer would enable pattern-recognition processing to occur at a speed three orders of magnitude greater than that of state-of-the-art digital pattern-recognition processors. There are many potential applications in biometric personal identification (e.g., using images of fingerprints and faces) and nondestructive industrial inspection. In order to appreciate the advantages of the proposed JTOC, it is necessary to understand the principle of operation of a conventional JTOC. In a conventional JTOC (shown in the upper part of the figure), a collimated laser beam passes through two side-by-side spatial light modulators (SLMs). One SLM displays a real-time input image to be recognized. The other SLM displays a reference image from a digital memory. A Fourier-transform lens is placed at its focal distance from the SLM plane, and a charge-coupled device (CCD) image detector is placed at the back focal plane of the lens for use as a square-law recorder. Processing takes place in two stages. In the first stage, the CCD records the interference pattern between the Fourier transforms of the input and reference images, and the pattern is then digitized and saved in a buffer memory. In the second stage, the reference SLM is turned off and the interference pattern is fed back to the input SLM. The interference pattern is thus Fourier-transformed, yielding at the CCD an image representing the joint-transform correlation between the input and reference images. This image contains a sharp correlation peak when the input and reference images are matched. The drawbacks of a conventional JTOC are the following: The CCD has low spatial resolution and is not an ideal square-law detector for the purpose of holographic recording of interference fringes. A typical state-of-the-art CCD has a pixel-pitch-limited resolution of about 100 lines/mm. In contrast, the holographic photopolymer to be used in the proposed JTOC offers a resolution > 2,000 lines/mm. In addition to being disadvantageous in itself, the low resolution of the CCD causes overlap of a DC term and the desired correlation term in the output image. This overlap severely limits the correlation signal-to-noise ratio. Furthermore, the two-stage nature of the process limits the achievable throughput rate, and a further limit is imposed by the low frame rate (typical video rates) of low- and medium-cost commercial CCDs.
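The two-stage joint-transform correlation can be simulated numerically. The NumPy sketch below places the input and reference side by side, records the squared magnitude of the joint Fourier transform (the square-law stage), and Fourier-transforms the fringes again to obtain correlation peaks; the padding layout and sizes are illustrative assumptions.

```python
# Numerical sketch of two-stage joint-transform correlation; geometry and
# sizes are illustrative, not an optical design.
import numpy as np

def jtc(input_img, reference_img):
    n = input_img.shape[0]
    plane = np.zeros((2 * n, 4 * n))                              # joint input plane
    plane[n // 2:n // 2 + n, n // 2:n // 2 + n] = input_img       # input SLM region
    plane[n // 2:n // 2 + n, 5 * n // 2:5 * n // 2 + n] = reference_img  # reference SLM region
    jps = np.abs(np.fft.fft2(plane)) ** 2                         # stage 1: CCD records |FT|^2
    corr = np.abs(np.fft.fft2(jps))                               # stage 2: FT of the fringes
    return np.fft.fftshift(corr)                                  # correlation peaks flank the DC term

rng = np.random.default_rng(0)
img = rng.random((64, 64))
out = jtc(img, img)                                               # matched input -> sharp side peaks
```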
Software for Automated Image-to-Image Co-registration
NASA Technical Reports Server (NTRS)
Benkelman, Cody A.; Hughes, Heidi
2007-01-01
The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
NASA Astrophysics Data System (ADS)
Dashevskiĭ, B. E.; Podvyaznikov, V. A.; Prokhorov, A. M.; Chevokin, V. K.
1989-08-01
An image converter with interchangeable photocathodes was used in tests on a microchannel plate employed as a photoemitter. The image converter was operated in the linear slit-scanning regime. This image converter was found to be a promising tool for laser plasma diagnostics.
Detection of vehicle parts based on Faster R-CNN and relative position information
NASA Astrophysics Data System (ADS)
Zhang, Mingwen; Sang, Nong; Chen, Youbin; Gao, Changxin; Wang, Yongzhong
2018-03-01
Detection and recognition of vehicles are two essential tasks in intelligent transportation systems (ITS). Currently, a prevalent method is to detect the vehicle body, logo or license plate first, and then recognize them. Detection is therefore the most basic, but also the most important, task. Besides the logo and license plate, other parts such as the vehicle face, lamps, windshield and rearview mirrors are also key parts that reflect the characteristics of a vehicle and can be used to improve the accuracy of the recognition task. In this paper, the detection of vehicle parts is studied, which is a novel task. We choose Faster R-CNN as the basic algorithm and take as input the local area of an image where the vehicle body is located, obtaining multiple bounding boxes, each with its own score. If the box with the maximum score is directly chosen as the final result, it is often not the best one, especially for small objects. This paper presents a method that corrects the original score with relative position information between two parts. We then choose the box with the maximum comprehensive score as the final result. Compared with the original output strategy, the proposed method performs better.
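A hedged sketch of the score-correction idea follows: each candidate box's detector score is blended with a position score that measures how well its offset from an anchor part matches an expected offset. The Gaussian penalty and the blending weight are our assumptions, not the paper's exact formulation.

```python
# Sketch: combine detector score with a relative-position score; the Gaussian
# form, sigma and weight w are illustrative assumptions.
import numpy as np

def comprehensive_scores(boxes, scores, anchor_center, expected_offset, sigma=20.0, w=0.5):
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0        # (x1,y1,x2,y2) -> box centers
    offsets = centers - anchor_center                    # position relative to anchor part
    dist = np.linalg.norm(offsets - expected_offset, axis=1)
    position_score = np.exp(-dist**2 / (2 * sigma**2))   # 1 when offset matches expectation
    return (1 - w) * scores + w * position_score         # comprehensive score; pick argmax

best = np.argmax(comprehensive_scores(
    boxes=np.array([[10, 10, 50, 30], [12, 40, 55, 60]], float),
    scores=np.array([0.9, 0.8]),
    anchor_center=np.array([30.0, 70.0]),
    expected_offset=np.array([0.0, -20.0])))
```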
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick
2007-04-01
High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing, where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar-based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void-filled does not have disproportionately high frequency data in the neighborhood of the void boundary. Conversely, the exemplar-based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
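One plausible way to implement the dynamic selection, sketched below entirely under our own assumptions, is to measure gradient energy in a band around each void boundary and route smooth neighborhoods to the PDE filler and textured ones to the exemplar filler; the band width and threshold are illustrative.

```python
# Hypothetical selector between PDE and exemplar void filling, based on
# gradient energy near the void boundary; parameters are assumptions.
import numpy as np
from scipy import ndimage

def pick_void_filler(dsm, void_mask, band=5, threshold=4.0):
    # ring of valid pixels within `band` pixels of the void boundary
    ring = ndimage.binary_dilation(void_mask, iterations=band) & ~void_mask
    gy, gx = np.gradient(np.where(void_mask, np.nan, dsm))   # NaN out the void
    energy = np.nanmean(np.hypot(gx, gy)[ring])              # mean slope near the boundary
    return "exemplar" if energy > threshold else "pde"
```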
Open set recognition of aircraft in aerial imagery using synthetic template models
NASA Astrophysics Data System (ADS)
Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert
2017-05-01
Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
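As a toy baseline in the spirit of approach (2), the sketch below trains a one-class SVM on HOG features of training chips and labels low-scoring inputs as unknown; the chip size, HOG parameters and ν are illustrative assumptions, and random arrays stand in for real image chips.

```python
# Toy open-set-style baseline: HOG features + one-class SVM; all parameters
# are illustrative, and random arrays stand in for synthetic target chips.
import numpy as np
from skimage.feature import hog
from sklearn.svm import OneClassSVM

def hog_features(chips):                           # chips: (n, 64, 64) grayscale
    return np.array([hog(c, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for c in chips])

rng = np.random.default_rng(1)
train = rng.random((20, 64, 64))                   # stand-in for synthetic templates
model = OneClassSVM(nu=0.1, gamma="scale").fit(hog_features(train))
labels = model.predict(hog_features(rng.random((5, 64, 64))))  # +1 target-like, -1 unknown
```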
Wilson, A J; Hodge, J C
1995-08-01
To evaluate the diagnostic performance of a teleradiology system in skeletal trauma. Radiographs from 180 skeletal trauma patients were digitized (matrix, 2,000 x 2,500) and transmitted to a remote digital viewing console (1,200-line monitor). Four radiologists interpreted both the original film images and digital images. Each reader was asked to identify, locate, and characterize fractures and dislocations. Receiver operating characteristic curves were generated, and the results of the original and digitized film readings were compared. All readers performed better with the original film when interpreting fractures. Although the patterns varied between readers, all had statistically significant differences (P < .01) for the two image types. There was no statistically significant difference in performance with the two images when dislocations were diagnosed. The system tested is not a satisfactory alternative to the original radiograph for routine reading of fracture films.
Energy Input Flux in the Global Quiet-Sun Corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mac Cormack, Cecilia; Vásquez, Alberto M.; López Fuentes, Marcelo
We present first results of a novel technique that provides, for the first time, constraints on the energy input flux at the coronal base (r ∼ 1.025 R⊙) of the quiet Sun at a global scale. By combining differential emission measure tomography of EUV images with global models of the coronal magnetic field, we estimate the energy input flux at the coronal base that is required to maintain thermodynamically stable structures. The technique is described in detail and first applied to data provided by the Extreme Ultraviolet Imager instrument, on board the Solar TErrestrial RElations Observatory mission, and the Atmospheric Imaging Assembly instrument, on board the Solar Dynamics Observatory mission, for two solar rotations with different levels of activity. Our analysis indicates that the typical energy input flux at the coronal base of magnetic loops in the quiet Sun is in the range ∼0.5–2.0 × 10^5 erg s^−1 cm^−2, depending on the structure size and level of activity. A large fraction of this energy input, or even its totality, could be accounted for by Alfvén waves, as shown by recent independent observational estimates derived from determinations of the non-thermal broadening of spectral lines at the coronal base of quiet-Sun regions. This new tomography product will be useful for the validation of coronal heating models in magnetohydrodynamic simulations of the global corona.
BaTMAn: Bayesian Technique for Multi-image Analysis
NASA Astrophysics Data System (ADS)
Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.
2016-12-01
Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.
Wei, Hong; Li, Zhipeng; Tian, Xiaorui; Wang, Zhuoxian; Cong, Fengzi; Liu, Ning; Zhang, Shunping; Nordlander, Peter; Halas, Naomi J; Xu, Hongxing
2011-02-09
We show that the local electric field distribution of propagating plasmons along silver nanowires can be imaged by coating the nanowires with a layer of quantum dots, held off the surface of the nanowire by a nanoscale dielectric spacer layer. In simple networks of silver nanowires with two optical inputs, control of the optical polarization and phase of the input fields directs the guided waves to a specific nanowire output. The QD-luminescent images of these structures reveal that a complete family of phase-dependent, interferometric logic functions can be performed on these simple networks. These results show the potential for plasmonic waveguides to support compact interferometric logic operations.
Image quality assessment for video stream recognition systems
NASA Astrophysics Data System (ADS)
Chernov, Timofey S.; Razumnuy, Nikita P.; Kozharinov, Alexander S.; Nikolaev, Dmitry P.; Arlazarov, Vladimir V.
2018-04-01
Recognition and machine vision systems have long been widely used in many disciplines to automate various processes of life and industry. Input images of optical recognition systems can be subjected to a large number of different distortions, especially in uncontrolled or natural shooting conditions, which leads to unpredictable results, making it impossible to assess their reliability. For this reason, it is necessary to perform quality control of the input data of recognition systems, which is facilitated by modern progress in the field of image quality evaluation. In this paper, we investigate an approach to designing optical recognition systems with built-in input image quality estimation modules and feedback, for which the necessary definitions are introduced and a model for describing such systems is constructed. The efficiency of this approach is illustrated by the example of selecting the best frames for recognition in a video stream for a system with limited resources. Experimental results are presented for an identity document recognition system, showing a significant increase in the accuracy and speed of the system under simulated conditions of automatic camera focusing, which leads to blurring of frames.
Java Library for Input and Output of Image Data and Metadata
NASA Technical Reports Server (NTRS)
Deen, Robert; Levoe, Steven
2003-01-01
A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level, direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subjective assessment. To address this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.
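The SPP idea can be sketched in a few lines of PyTorch: pooling the top convolutional feature maps at several fixed grid sizes and concatenating the results yields a vector whose length is independent of the input image size. The pyramid levels below are illustrative assumptions.

```python
# Spatial pyramid pooling sketch; pyramid levels are an assumption.
import torch
import torch.nn.functional as F

def spp(feature_maps, levels=(1, 2, 4)):
    # feature_maps: (batch, channels, H, W) from the top conv layer
    pooled = [F.adaptive_max_pool2d(feature_maps, level).flatten(1)
              for level in levels]                  # each level: channels * level^2 values
    return torch.cat(pooled, dim=1)                 # fixed length regardless of H, W

# two inputs of different sizes map to vectors of identical length
assert spp(torch.randn(2, 32, 17, 23)).shape == spp(torch.randn(2, 32, 40, 51)).shape
```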
Fusion of shallow and deep features for classification of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang
2018-02-01
Effective spectral and spatial pixel description plays a significant role in the classification of high resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods, which employ deep neural networks and have greatly improved classification accuracy. However, the former traditional features are insufficient to depict the complex distribution of high resolution images, while the deep features demand plenty of samples to train the network, otherwise overfitting easily occurs if only limited samples are involved in the training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high resolution remote sensing images. The employment of GLCM is able to represent the original images and eliminate redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes to better guidance and interpretability. In consideration of the amount of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data set validate that our proposed method leads to a performance improvement compared to the individual approaches involved.
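A minimal sketch of the shallow-feature stage, under our own assumptions about window size, quantization levels and properties, computes GLCM texture descriptors per patch with scikit-image; such descriptors could then be stacked as input channels for the CNN.

```python
# GLCM texture descriptor sketch; window size, levels and properties are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_descriptor(patch, levels=32):
    q = (patch / patch.max() * (levels - 1)).astype(np.uint8)   # quantize gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return [float(graycoprops(glcm, p).mean())                  # average over angles
            for p in ("contrast", "homogeneity", "energy", "correlation")]

patch = np.random.default_rng(2).random((16, 16))
features = glcm_descriptor(patch)            # 4 shallow texture features per patch
```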
NASA Astrophysics Data System (ADS)
Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang
2015-05-01
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep neural networks into hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of the learned network weights. In the training stage, supervised iterative quantization is conducted via two steps on the server: apply k-means-based adaptive quantization to the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: hand-written digit images and real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
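The k-means step of the server-side loop might look like the sketch below, which maps a weight tensor to a 16-entry codebook (4-bit indices); the retraining step that alternates with it is omitted, and all parameter choices are illustrative.

```python
# One k-means quantization pass over a weight tensor (4 bits -> 16 clusters);
# the alternating retraining step of the paper is omitted here.
import numpy as np
from sklearn.cluster import KMeans

def quantize_weights(weights, bits=4):
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=2**bits, n_init=4, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel()           # 16 shared weight values
    indices = km.labels_.astype(np.uint8)            # 4-bit index per weight
    return codebook[indices].reshape(weights.shape), codebook, indices

w = np.random.default_rng(3).normal(size=(64, 32))
wq, codebook, idx = quantize_weights(w)              # wq replaces w before retraining
```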
Electro-Optical Imaging Fourier-Transform Spectrometer
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Zhou, Hanying
2006-01-01
An electro-optical (E-O) imaging Fourier-transform spectrometer (IFTS), now under development, is a prototype of improved imaging spectrometers to be used for hyperspectral imaging, especially in the infrared spectral region. Unlike both imaging and non-imaging traditional Fourier-transform spectrometers, the E-O IFTS does not contain any moving parts. Elimination of the moving parts and the associated actuator mechanisms and supporting structures would increase reliability while enabling reductions in size and mass, relative to traditional Fourier-transform spectrometers that offer equivalent capabilities. Elimination of moving parts would also eliminate the vibrations caused by the motions of those parts. Figure 1 schematically depicts a traditional Fourier-transform spectrometer, wherein a critical time delay is varied by translating one of the mirrors of a Michelson interferometer. The time-dependent optical output is a periodic representation of the input spectrum. Data characterizing the input spectrum are generated through fast-Fourier-transform (FFT) post-processing of the output in conjunction with the varying time delay.
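The underlying Fourier-transform relationship can be illustrated with synthetic data: an interferogram sampled over a range of time delays is Fourier-transformed to recover the line positions of the input spectrum. The delay range and line frequencies below are arbitrary assumptions.

```python
# Toy FTS illustration: FFT of a synthetic interferogram recovers the two
# assumed spectral lines; all values are arbitrary.
import numpy as np

delays = np.linspace(0, 1e-12, 1024)                     # time delays (s), assumed range
nu1, nu2 = 20e12, 35e12                                  # two spectral lines (Hz), assumed
interferogram = 2 + np.cos(2 * np.pi * nu1 * delays) + np.cos(2 * np.pi * nu2 * delays)
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
freqs = np.fft.rfftfreq(delays.size, d=delays[1] - delays[0])
print(freqs[spectrum.argsort()[-2:]])                    # prints ~nu1 and ~nu2
```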
Fourier-Mellin moment-based intertwining map for image encryption
NASA Astrophysics Data System (ADS)
Kaur, Manjit; Kumar, Vijay
2018-03-01
In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. The Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity to the input image. A Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, unified average changing intensity (UACI) and number of pixels change rate (NPCR). The simulation results also reveal that the proposed technique provides a high level of security and robustness against various types of attacks.
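For orientation only, the sketch below shows a diffusion stage driven by a plain logistic-map keystream; the paper's intertwining logistic map, NSGA-II parameter optimization and permutation stage are all simplified away, so this is not the proposed cipher.

```python
# Simplified chaotic diffusion: a plain logistic map stands in for the
# intertwining logistic map; parameters and key are illustrative.
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.9999, burn=1000):
    x, out = x0, np.empty(n)
    for _ in range(burn):                    # discard transient iterations
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return (out * 256).astype(np.uint8)      # keystream bytes

def xor_diffuse(img, key=0.3141):
    ks = logistic_keystream(img.size, x0=key).reshape(img.shape)
    return img ^ ks                          # same call decrypts (XOR is an involution)

img = np.random.default_rng(4).integers(0, 256, (8, 8), dtype=np.uint8)
assert np.array_equal(xor_diffuse(xor_diffuse(img)), img)
```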
Compact hybrid optoelectrical unit for image processing and recognition
NASA Astrophysics Data System (ADS)
Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu
1998-07-01
In this paper a compact hybrid opto-electric unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4-inch active-matrix TFT liquid crystal display panel used as two real-time spatial light modulators for both the input image and the reference template. CHOEU performs two main processing tasks: digital filtering and object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit for identifying the important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive and negative cycle-encoding method is introduced to realize absolute-difference-measurement pixel-matching simply on a correlator structure. The system has good fault tolerance to rotation distortion, Gaussian noise disturbance and information loss. Experiments are given at the end of this paper.
Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E
2005-06-21
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for noise filtering of said video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of said boiler. It derives the 3-D structure of the deposition on pendant tubes in the boiler and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant pendant tube cleaning and operating systems.
Digital Synchronizer without Metastability
NASA Technical Reports Server (NTRS)
Simle, Robert M.; Cavazos, Jose A.
2009-01-01
A proposed design for a digital synchronizing circuit would eliminate metastability that plagues flip-flop circuits in digital input/output interfaces. This metastability is associated with sampling, by use of flip-flops, of an external signal that is asynchronous with a clock signal that drives the flip-flops: it is a temporary flip-flop failure that can occur when a rising or falling edge of an asynchronous signal occurs during the setup and/or hold time of a flip-flop. The proposed design calls for (1) use of a clock frequency greater than the frequency of the asynchronous signal, (2) use of flip-flop asynchronous preset or clear signals for the asynchronous input, (3) use of a clock asynchronous recovery delay with pulse width discriminator, and (4) tying the data inputs to constant logic levels to obtain (5) two half-rate synchronous partial signals - one for the falling and one for the rising edge. Inasmuch as the flip-flop data inputs would be permanently tied to constant logic levels, setup and hold times would not be violated. The half-rate partial signals would be recombined to construct a signal that would replicate the original asynchronous signal at its original rate but would be synchronous with the clock signal.
A Software Package for Neural Network Applications Development
NASA Technical Reports Server (NTRS)
Baran, Robert H.
1993-01-01
Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, creates a terminate-and-stay-resident (TSR) routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
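In the spirit of BACKPROP, a minimal three-layer feed-forward network trained by back-propagation is sketched below; sigmoid hidden units stand in for the package's binary threshold units, and the sizes, learning rate and toy targets are our assumptions.

```python
# Minimal three-layer back-propagation sketch; sigmoid units stand in for
# the package's binary threshold units, and targets are toy data.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((200, 128))                        # up to 200 input vectors of 128 components
Y = X[:, :16].round()                             # 16 classification bits (toy targets)
W1, W2 = rng.normal(0, 0.1, (128, 32)), rng.normal(0, 0.1, (32, 16))

sigmoid = lambda z: 1 / (1 + np.exp(-z))
for epoch in range(500):
    H = sigmoid(X @ W1)                           # hidden layer activations
    O = sigmoid(H @ W2)                           # output layer activations
    dO = (O - Y) * O * (1 - O)                    # output delta
    dH = (dO @ W2.T) * H * (1 - H)                # back-propagated hidden delta
    W2 -= 0.5 * H.T @ dO / len(X)                 # incremental weight corrections
    W1 -= 0.5 * X.T @ dH / len(X)
```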
Modelling land use change in the Ganga basin
NASA Astrophysics Data System (ADS)
Moulds, Simon; Mijic, Ana; Buytaert, Wouter
2014-05-01
Over recent decades the green revolution in India has driven substantial environmental change. Modelling experiments have identified northern India as a "hot spot" of land-atmosphere coupling strength during the boreal summer. However, there is a wide range of sensitivity of atmospheric variables to soil moisture between individual climate models. The lack of a comprehensive land use change dataset to force climate models has been identified as a major contributor to model uncertainty. This work aims to construct a monthly time series dataset of land use change for the period 1966 to 2007 for northern India to improve the quantification of regional hydrometeorological feedbacks. The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board the Aqua and Terra satellites provides near-continuous remotely sensed datasets from 2000 to the present day. However, the quality and availability of satellite products before 2000 is poor. To complete the dataset MODIS images are extrapolated back in time using the Conversion of Land Use and its Effects at Small regional extent (CLUE-S) modelling framework, recoded in the R programming language to overcome limitations of the original interface. Non-spatial estimates of land use area published by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) for the study period, available on an annual, district-wise basis, are used as a direct model input. Land use change is allocated spatially as a function of biophysical and socioeconomic drivers identified using logistic regression. The dataset will provide an essential input to a high-resolution, physically-based land-surface model to generate the lower boundary condition to assess the impact of land use change on regional climate.
DNA-based watermarks using the DNA-Crypt algorithm.
Heider, Dominik; Barnekow, Angelika
2007-05-29
The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
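The shared least-significant-unit principle can be illustrated with a toy encoder that writes message bits into the third (wobble) base of each codon while preserving the purine/pyrimidine class; this is our simplification for illustration, not the DNA-Crypt encoding itself.

```python
# Toy "least significant base" watermark: one bit per codon's wobble position,
# preserving the purine/pyrimidine class. Not the DNA-Crypt encoding itself.
PURINES, PYRIMIDINES = "AG", "CT"

def embed_bits(dna, bits):
    dna = list(dna)
    for i, bit in zip(range(2, len(dna), 3), bits):   # third base of each codon
        group = PURINES if dna[i] in PURINES else PYRIMIDINES
        dna[i] = group[bit]                           # keep purine/pyrimidine class
    return "".join(dna)

def extract_bits(dna, n):
    return [0 if dna[i] in (PURINES[0], PYRIMIDINES[0]) else 1
            for i in range(2, len(dna), 3)][:n]

seq = embed_bits("ATGGCTTTCGGAAAATGA", [1, 0, 1, 1])
assert extract_bits(seq, 4) == [1, 0, 1, 1]
```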
An update of input instructions to TEMOD
NASA Technical Reports Server (NTRS)
1973-01-01
The theory and operation of a FORTRAN 4 computer code, designated as TEMOD, used to calculate tubular thermoelectric generator performance is described in WANL-TME-1906. The original version of TEMOD was developed in 1969. A description is given of additions to the mathematical model and an update of the input instructions to the code. Although the basic mathematical model described in WANL-TME-1906 has remained unchanged, a substantial number of input/output options were added to allow completion of module performance parametrics as required in support of the compact thermoelectric converter system technology program.
Stereo reconstruction from multiperspective panoramas.
Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard
2004-01-01
A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, problems as in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to the first order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable high quality depth maps which can be used for applications such as view interpolation.
Hyperspectral imaging for diagnosis and quality control in agri-food and industrial sectors
NASA Astrophysics Data System (ADS)
García-Allende, P. Beatriz; Conde, Olga M.; Mirapeix, Jesus; Cobo, Adolfo; Lopez-Higuera, Jose M.
2010-04-01
Optical spectroscopy has been utilized in various fields of science, industry and medicine, since each substance is discernible from all others by its spectral properties. However, optical spectroscopy traditionally generates information on the bulk properties of the whole sample, while, particularly in the agri-food industry, some product properties result from heterogeneity in composition. Such monitoring is considerably more challenging and can be successfully achieved by the so-called hyperspectral imaging technology, which allows the simultaneous determination of the optical spectrum and the spatial location of an object on a surface. In addition, it is a non-intrusive and non-contact technique, which gives it great potential for industrial applications, and it does not require any particular preparation of the samples, which is a primary concern in food monitoring. This work presents an overview of approaches based on this technology to address different problems in the agri-food and industrial sectors. The hyperspectral system was originally designed and tested for raw material on-line discrimination, which is a key factor in the input stages of many industrial sectors. The combination of acquiring spectral information across transversal lines while materials are being transported on a conveyor belt and appropriate image analyses has been successfully validated in the tobacco industry. Lastly, the use of imaging spectroscopy applied to on-line welding quality monitoring is discussed and compared with traditional spectroscopic approaches.
Connections of cat auditory cortex: III. Corticocortical system.
Lee, Charles C; Winer, Jeffery A
2008-04-20
The mammalian auditory cortex (AC) is essential for computing the source and decoding the information contained in sound. Knowledge of AC corticocortical connections is modest other than in the primary auditory regions, nor is there an anatomical framework in the cat for understanding the patterns of connections among the many auditory areas. To address this issue we investigated cat AC connectivity in 13 auditory regions. Retrograde tracers were injected in the same area or in different areas to reveal the areal and laminar sources of convergent input to each region. Architectonic borders were established in Nissl and SMI-32 immunostained material. We assessed the topography, convergence, and divergence of the labeling. Intrinsic input constituted >50% of the projection cells in each area, and extrinsic inputs were strongest from functionally related areas. Each area received significant convergent ipsilateral input from several fields (5 to 8; mean 6). These varied in their laminar origin and projection density. Major extrinsic projections were preferentially from areas of the same functional type (tonotopic to tonotopic, nontonotopic to nontonotopic, limbic-related to limbic-related, multisensory-to-multisensory), while smaller projections link areas belonging to different groups. Branched projections between areas were <2% with deposits of two tracers in an area or in different areas. All extrinsic projections to each area were highly and equally topographic and clustered. Intrinsic input arose from all layers except layer I, and extrinsic input had unique, area-specific infragranular and supragranular origins. The many areal and laminar sources of input may contribute to the complexity of physiological responses in AC and suggest that many projections of modest size converge within each area rather than a simpler area-to-area serial or hierarchical pattern of corticocortical connectivity. (c) 2008 Wiley-Liss, Inc.
Remote media vision-based computer input device
NASA Astrophysics Data System (ADS)
Arabnia, Hamid R.; Chen, Ching-Yi
1991-11-01
In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.
Face recognition via edge-based Gabor feature representation for plastic surgery-altered images
NASA Astrophysics Data System (ADS)
Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.
2014-12-01
Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on a plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
New Images of the Solar Corona
NASA Astrophysics Data System (ADS)
Gurman, Joseph B.; Thompson, Barbara J.; Newmark, Jeffrey A.; Deforest, Craig E.
In 1.5 years of operation, The Extreme Ultraviolet Imaging Telescope (EIT) on SOHO has obtained over 40,000 images of the Sun in four wavebands between 171 Angstroms and 304 Angstroms, with spatial resolution limited only by the pixel scale of 2.59 arcsec. These images, and in particular compilations of time series of images into digital movies, have changed several of our ideas about the corona at temperatures of 0.9 - 2.5 MK. For the first time, we are able to see outflow in polar plumes and microjets inputting momentum into the high-speed, polar wind flow. For the first time, in conjunction with the LASCO coronagraphs and ground-based He I imagers, we have been able to see all the structures involved in coronal mass ejections (CMEs), from the surface of the Sun to 30 solar radii above it. In several cases, we have been able to observe directly the dramatic Moreton waves emanating from the active region where the CMEs originate, and radiating across virtually the entire visible hemisphere of the Sun. We interpret these large-scale coronal disturbances as fast-mode waves. Such events appear in the SOHO-LASCO coronagraphs as earthward-directed, and several have been detected by solar wind monitoring experiments on SOHO and other spacecraft. We have been able to view a variety of small-scale phenomena as well, including motions in prominences and filaments, macrospicular and polar microjet eruptions, and fine structures in the polar crown filament belt. The multi-wavelength capability of EIT makes it possible to determine the temperature of the coronal plasma and, here, too, we have been afforded a novel view: the heating in coronal active regions occurs over a considerably larger area than the high-density loops structures alone (i.e., bright features) would indicate.
A Support System for Mouse Operations Using Eye-Gaze Input
NASA Astrophysics Data System (ADS)
Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru
We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy. However, it can only classify vertical eye-gaze into three directions (up, middle and down). In this paper, we propose a new method for vertical eye-gaze detection that utilizes the limbus tracking method. Our new eye-gaze input system can therefore detect the two-dimensional coordinates of the user's gazing point. Using this method, we have developed a new support system for mouse operation that can move the mouse cursor to the user's gazing point.
Moeskops, Pim; de Bresser, Jeroen; Kuijf, Hugo J; Mendrik, Adriënne M; Biessels, Geert Jan; Pluim, Josien P W; Išgum, Ivana
2018-01-01
Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these items. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.
1987-01-01
An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but is also capable of using three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year. More realistic demonstrations now being planned are discussed.
Programmable Remapper with Single Flow Architecture
NASA Technical Reports Server (NTRS)
Fisher, Timothy E. (Inventor)
1993-01-01
An apparatus for image processing comprising a camera for receiving an original visual image and transforming the original visual image into an analog image, a first converter for transforming the analog image of the camera to a digital image, a processor having a single flow architecture for receiving the digital image and producing, with a single algorithm, an output image, a second converter for transforming the digital image of the processor to an analog image, and a viewer for receiving the analog image, transforming the analog image into a transformed visual image for observing the transformations applied to the original visual image. The processor comprises one or more subprocessors for the parallel reception of a digital image for producing an output matrix of the transformed visual image. More particularly, the processor comprises a plurality of subprocessors for receiving in parallel and transforming the digital image for producing a matrix of the transformed visual image, and an output interface means for receiving the respective portions of the transformed visual image from the respective subprocessor for producing an output matrix of the transformed visual image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, A K; Koniczek, M; Antonuk, L E
Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation-tolerant, monolithic large-area (e.g., ∼43×43 cm2) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to that of crystalline silicon detectors. Further improvement through detailed circuit simulations and prototyping is expected. This work was partially supported by NIH grant no. R01-EB000558.
NASA Astrophysics Data System (ADS)
Selva Bhuvaneswari, K.; Geetha, P.
2017-05-01
Magnetic resonance imaging segmentation refers to the process of assigning labels to a set of pixels or multiple regions. It plays a major role in biomedical applications, as it is widely used by radiologists to segment input medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The entire segmentation process of our proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase, and a region merging phase. In the first phase, the input image undergoes dynamic modified region growing, in which two thresholds are changed dynamically and optimised by the firefly algorithm. After obtaining the region-grown segmented image, the edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results obtained from the texture feature generation phase are combined with the results of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After identifying the abnormal tissues, classification is done by a hybrid kernel-based SVM (Support Vector Machine). The performance analysis of the proposed method will be carried out by k-fold cross-validation. The proposed method will be implemented in MATLAB with various images.
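The core of the first phase can be pictured as a flood fill gated by a pair of intensity thresholds. Below is a minimal, hypothetical sketch of that grow step; the firefly optimisation of the threshold pair, the texture phase, and the merging phase are omitted, and 4-neighbour connectivity is an assumption:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, t_low, t_high):
    """Flood-fill pixels whose intensity lies inside [t_low, t_high],
    starting from `seed` (row, col); 4-connectivity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and t_low <= img[rr, cc] <= t_high):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask
```

An outer optimisation loop (the paper uses the firefly algorithm) would search over (t_low, t_high) to maximise a segmentation fitness score.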
Geographic Resources Analysis Support System (GRASS) Version 4.0 User’s Reference Manual
1992-06-01
input-image need not be square; before processing, the X and Y dimensions of the input-image are padded with zeroes to the next highest power of two in...structures an input knowledge/control script with an appropriate combination of map layer category values (GRASS raster map layers that contain data on...F cos(x) cosine of x (x is in degrees) F exp(x) exponential function of x F exp(x,y) x to the power y F float(x) convert x to floating point F if
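The power-of-two padding described in this excerpt is the usual preparation for FFT-based modules; a minimal sketch of the idea (not GRASS code) follows:

```python
import numpy as np

def pad_to_power_of_two(img):
    """Zero-pad the X and Y dimensions up to the next highest power of two."""
    h, w = img.shape
    H = 1 << (h - 1).bit_length()  # next power of two >= h
    W = 1 << (w - 1).bit_length()  # next power of two >= w
    out = np.zeros((H, W), dtype=img.dtype)
    out[:h, :w] = img              # original image in the top-left corner
    return out
```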
Multifunction Imaging and Spectroscopic Instrument
NASA Technical Reports Server (NTRS)
Mouroulis, Pantazis
2004-01-01
A proposed optoelectronic instrument would perform several different spectroscopic and imaging functions that, heretofore, have been performed by separate instruments. The functions would be reflectance, fluorescence, and Raman spectroscopies; variable-color confocal imaging at two different resolutions; and wide-field color imaging. The instrument was conceived for use in examination of minerals on remote planets. It could also be used on Earth to characterize material specimens. The conceptual design of the instrument emphasizes compactness and economy, to be achieved largely through sharing of components among subsystems that perform different imaging and spectrometric functions. The input optics for the various functions would be mounted in a single optical head. With the exception of a targeting lens, the input optics would all be aimed at the same spot on a specimen, thereby both (1) eliminating the need to reposition the specimen to perform different imaging and/or spectroscopic observations and (2) ensuring that data from such observations can be correlated with respect to known positions on the specimen. The figure schematically depicts the principal components and subsystems of the instrument. The targeting lens would collect light into a multimode optical fiber, which would guide the light through a fiber-selection switch to a reflection/ fluorescence spectrometer. The switch would have four positions, enabling selection of spectrometer input from the targeting lens, from either of one or two multimode optical fibers coming from a reflectance/fluorescence- microspectrometer optical head, or from a dark calibration position (no fiber). The switch would be the only moving part within the instrument.
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
To develop a medical ultrasound (US) simulation method using T1-weighted magnetic resonance images (MRI) as the input that offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased-array transducer probe. A domain model is built from the input MR images. Multiple virtual acoustic rays emerge from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusing the reflected energy along multiple rays from multiple transducers, while phase delays due to differences in distances to the transducers are taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of automatically generating viewpoint-dependent realistic US images with an inherent Rician-distributed speckle pattern. The proposed simulator can reproduce shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We also present preliminary results towards the utilization of the method for real-time simulations. The proposed method offers a low-cost near-real-time wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
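The per-ray energy bookkeeping can be sketched with the standard intensity reflection coefficient at an impedance interface, R = ((Z2 - Z1)/(Z2 + Z1))^2. The following is a simplified, hypothetical 1-D version of the marching step (no refraction, and a constant attenuation factor is assumed):

```python
import numpy as np

def ray_echoes(impedance, atten=0.999):
    """March along one ray through a 1-D acoustic-impedance profile and
    return the reflected energy at each interface."""
    energy = 1.0                           # energy entering the ray
    echoes = np.zeros(len(impedance) - 1)
    for i in range(len(impedance) - 1):
        z1, z2 = impedance[i], impedance[i + 1]
        r = ((z2 - z1) / (z2 + z1)) ** 2   # intensity reflection coefficient
        echoes[i] = energy * r             # energy sent back toward the transducer
        energy *= (1.0 - r) * atten        # remainder transmits, with attenuation
    return echoes
```

In the full method, such per-ray echo trains from multiple rays and transducer elements would be fused with the appropriate phase delays.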
Reducible dictionaries for single image super-resolution based on patch matching and mean shifting
NASA Astrophysics Data System (ADS)
Rasti, Pejman; Nasrollahi, Kamal; Orlova, Olga; Tamberg, Gert; Moeslund, Thomas B.; Anbarjafari, Gholamreza
2017-03-01
A single-image super-resolution (SR) method is proposed. The proposed method uses a dictionary generated from pairs of high resolution (HR) images and their corresponding low resolution (LR) representations. First, the HR images and the corresponding LR ones are divided into HR and LR patches, respectively, which are collected into separate dictionaries. Afterward, when performing SR, the distance between every patch of the input LR image and each of the available LR patches in the LR dictionary is calculated. The LR dictionary patch with the minimum distance to the input LR patch is selected, and its counterpart from the HR dictionary is passed through an illumination enhancement process. By this technique, the noticeable change of illumination between neighboring patches in the super-resolved image is significantly reduced. The enhanced HR patch represents the HR patch of the super-resolved image. Finally, to remove the blocking effect caused by merging the patches, an average of the obtained HR image and the interpolated image obtained using bicubic interpolation is calculated. The quantitative and qualitative analyses show the superiority of the proposed technique over conventional and state-of-the-art methods.
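The patch lookup at the heart of the method amounts to a nearest-neighbour search in the LR dictionary; a minimal sketch follows (the flattened dictionary layout and the final half-and-half blend with the bicubic image are assumptions based on the description above):

```python
import numpy as np

def super_resolve_patch(lr_patch, lr_dict, hr_dict):
    """Return the HR counterpart of the LR dictionary atom closest (L2)
    to lr_patch. lr_dict: (n, p) flattened LR patches; hr_dict: (n, P)
    flattened HR patches, row-aligned with lr_dict."""
    d = np.linalg.norm(lr_dict - lr_patch.ravel(), axis=1)
    return hr_dict[np.argmin(d)]

# After assembling all HR patches into sr_image, the de-blocking step
# described above would be:  final = 0.5 * (sr_image + bicubic_upscale)
```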
Earth Resources Technology Satellite: US standard catalog No. U-12
NASA Technical Reports Server (NTRS)
1973-01-01
To provide dissemination of information regarding the availability of Earth Resources Technology Satellite (ERTS) imagery, a U.S. Standard Catalog is published on a monthly schedule. The catalogs identify imagery which has been processed and input to the data files during the preceding month. The U.S. Standard Catalog includes imagery covering the Continental United States, Alaska, and Hawaii. As a supplement to these catalogs, an inventory of ERTS imagery on 16 millimeter microfilm is available. The catalogs consist of four parts: (1) annotated maps which graphically depict the geographic areas covered by the imagery listed in the current catalog, (2) a computer-generated listing organized by observation identification number (ID) with pertinent information on each image, (3) a computer listing of observations organized by longitude and latitude, and (4) observations which have had changes made in their catalog information since the original entry in the data base.
Multi-Sensor Characterization of the Boreal Forest: Initial Findings
NASA Technical Reports Server (NTRS)
Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan
2001-01-01
Results are presented from an initial a priori knowledge approach to using complementary multi-sensor, multi-temporal imagery in characterizing vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single-scene technique, but 8 percentage points less than the 79.3% achieved by the best input layers. Black spruce was found to be particularly under-mapped in the final map, possibly because it was also contained within the jack pine and muskeg land cover classes.
Extraction of texture features with a multiresolution neural network
NASA Astrophysics Data System (ADS)
Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.
1992-09-01
Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if a good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
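The orientation feature described here, Sobel gradients quantized to multiples of π/4, can be sketched compactly; the snippet below is an illustration, not the original implementation:

```python
import numpy as np
from scipy.ndimage import sobel

def dominant_orientation(img):
    """Quantize the local gradient direction into four bins (multiples of
    pi/4), producing the 'local dominant orientation' intrinsic image."""
    g = img.astype(float)
    gx = sobel(g, axis=1)                   # horizontal Sobel response
    gy = sobel(g, axis=0)                   # vertical Sobel response
    theta = np.arctan2(gy, gx) % np.pi      # orientation in [0, pi)
    return np.round(theta / (np.pi / 4)).astype(int) % 4
```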
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
Origin and Ion Charge State Evolution of Solar Wind Transients 4 - 7 August 2011
NASA Astrophysics Data System (ADS)
Rodkin, Denis; Goryaev, Farid; Pagano, Paolo; Gibb, Gordon; Slemzin, Vladimir; Shugay, Yulia; Veselovsky, Igor; Mackay, Duncan
2017-04-01
Identification of transients and their origins on the Sun is one of the most important problems of space weather forecasting. In our work, we present a case study of a complex event consisting of several solar wind transients detected by ACE on 4 - 7 August 2011, which caused a geomagnetic storm with Dst = -110 nT. The presumed coronal sources, three flares and coronal mass ejections (CMEs), occurred on 2 - 4 August 2011 in the active region AR 11261. To investigate the solar origins and formation of these transients, we studied kinematic and thermodynamic properties of expanding coronal structures using SDO/AIA EUV images and differential emission measure (DEM) diagnostics. The Helioseismic and Magnetic Imager (HMI) magnetic field maps were used as the input data for a 3D numerical model describing the flux rope ejection. We characterize the early phase of the flux rope ejection in the corona, where the usual three-component CME structure formed. The flux rope ejected with a speed of about 200 km/s to a height of 0.25 Rsun. The kinematics of the modeled CME front agrees well with the STEREO EUV measurements. Using the results of the plasma diagnostics and MHD modeling, we calculated the ion charge ratios of carbon and oxygen, as well as the mean charge state of iron ions, for the 2 August 2011 CME, taking into account the processes of heating, cooling, expansion, ionization and recombination of the moving plasma in the corona up to the freeze-in region. We estimated a probable heating rate of the CME plasma in the low corona by matching the calculated ion composition parameters of the CME with the in-situ measured parameters of the solar wind transients. We also consider the similarities and discrepancies between the results of the MHD simulation and the observations of the event. Our results show that analysis of the ion composition of CMEs makes it possible to disclose a relationship between parameters of the solar wind transients and properties of their solar origins, which opens new possibilities to validate and improve solar wind forecasting models.
Real-Time Adaptive Color Segmentation by Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2004-01-01
Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.
An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems
1991-12-01
neural network and the feedforward neural network studied is the single-layer perceptron artificial neural network. The recurrent artificial neural network input...features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input
Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael
2015-01-01
Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.
NASA Astrophysics Data System (ADS)
Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko
2017-06-01
A noninvasive method to estimate the input function directly from H2 15O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau part in the later phase, but significantly lower radioactivity in the initial arterial phase compared with that of the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied, and two constants for the correction were determined by fitting with the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUCIDIF/AUCAIF ratio of 0.92 ± 0.09. The final products of CBF and arterial-to-capillary vascular volume (V0) obtained from the IDIF and AIF showed no difference and had high correlation coefficients.
NASA Astrophysics Data System (ADS)
Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku
2018-02-01
This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscope that enables us to perform conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis directly on endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. At the first step, we classify an input image by support vector machine. We forward the image to the second step if the confidence of the first classification is low. At the second step, we classify the forwarded image by convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method. In this experiment, we use about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
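The two-step decision rule with rejection can be sketched as follows; the confidence thresholds and the predict_proba-style interface of the two models are assumptions, not details from the paper:

```python
import numpy as np

def cascade_classify(x, svm, cnn, t1=0.9, t2=0.9):
    """Two-stage classification with rejection: trust the SVM when it is
    confident, otherwise fall back to the CNN, otherwise reject (None).
    Both models are assumed to expose a predict_proba-style output."""
    p1 = svm.predict_proba(x.reshape(1, -1))[0]
    if p1.max() >= t1:
        return int(np.argmax(p1))       # confident first-stage decision
    p2 = cnn.predict_proba(x.reshape(1, -1))[0]
    if p2.max() >= t2:
        return int(np.argmax(p2))       # confident second-stage decision
    return None                         # reject: defer to the physician
```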
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.
Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen
2018-01-19
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either purposive or random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden on a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
Identification of vegetable diseases using neural network
NASA Astrophysics Data System (ADS)
Zhang, Jiacai; Tang, Jianjun; Li, Yao
2007-02-01
Vegetables are widely planted all over China, but they often suffer from some diseases. A method of major technical and economic importance is introduced in this paper, which explores the feasibility of implementing fast and reliable automatic identification of vegetable diseases and their infection grades from color and morphological features of leaves. First, leaves are plucked from the clustered plant and pictures of the leaves are taken with a CCD digital color camera. Second, color and morphological characteristics are obtained by standard image processing techniques: for example, the Otsu thresholding method segments the region of interest, morphological opening and closing remove noise, and Principal Components Analysis reduces the dimension of the original features. Then, a recently proposed boosting algorithm, AdaBoost.M2, is applied to RBF networks for disease classification based on the above features, where the kernel function of the RBF networks is a Gaussian whose argument is the Euclidean distance of the input vector from a center. Our experiments were performed on the database collected by the Chinese Academy of Agricultural Sciences, and the results show that Boosting RBF Networks classifies the 230 cucumber leaves into 2 different diseases (downy mildew and angular leaf spot) and identifies the infection grades of each disease according to the infection degrees.
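The preprocessing chain (Otsu segmentation plus morphological cleanup) might look like the following scikit-image sketch; the structuring-element size, the polarity of the threshold, and the order of the opening and closing steps are assumptions:

```python
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, disk

def leaf_mask(gray):
    """Segment the region of interest with Otsu's threshold, then clean
    the mask with morphological opening and closing."""
    mask = gray > threshold_otsu(gray)     # invert if the leaf is darker
    mask = binary_opening(mask, disk(3))   # remove small bright specks
    mask = binary_closing(mask, disk(3))   # fill small holes
    return mask
```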
Exploring the Early Organization and Maturation of Linguistic Pathways in the Human Infant Brain.
Dubois, Jessica; Poupon, Cyril; Thirion, Bertrand; Simonnet, Hina; Kulikova, Sofya; Leroy, François; Hertz-Pannier, Lucie; Dehaene-Lambertz, Ghislaine
2016-05-01
Linguistic processing is based on a close collaboration between temporal and frontal regions connected by two pathways: the "dorsal" and "ventral pathways" (assumed to support phonological and semantic processing, respectively, in adults). We investigated here the development of these pathways at the onset of language acquisition, during the first post-natal weeks, using cross-sectional diffusion imaging in 21 healthy infants (6-22 weeks of age) and 17 young adults. We compared the bundle organization and microstructure at these two ages using tractography and original clustering analyses of diffusion tensor imaging parameters. We observed structural similarities between both groups, especially concerning the dorsal/ventral pathway segregation and the arcuate fasciculus asymmetry. We further highlighted the developmental tempos of the linguistic bundles: The ventral pathway maturation was more advanced than the dorsal pathway maturation, but the latter catches up during the first post-natal months. Its fast development during this period might relate to the learning of speech cross-modal representations and to the first combinatorial analyses of the speech input.
Design concepts for an on-board coherent optical image processor
NASA Technical Reports Server (NTRS)
Husain-Abidi, A. S.
1972-01-01
On-board spacecraft image data processing systems for transmitting processed data rather than raw data are discussed. A brief history of the development of the optical data processing techniques is presented along with the conceptual design of a coherent optical system with a noncoherent image input.
Groupwise Image Registration Guided by a Dynamic Digraph of Images.
Tang, Zhenyu; Fan, Yong
2016-04-01
For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images, and we build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated on both simulated and real brain images, and experimental results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
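Registering each image along the shortest digraph path to the group center can be sketched with an off-the-shelf graph routine; the snippet below only reconstructs the paths (the sparse-coding similarity measure and the deformation composition themselves are omitted):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def paths_to_center(dissim, center):
    """dissim: (n, n) directed dissimilarity matrix (np.inf where no edge).
    Returns, for every image i, the chain of intermediate images along
    which its deformation toward the group-center image is composed."""
    dist, pred = shortest_path(dissim, directed=True,
                               return_predecessors=True)
    out = {}
    for i in range(dissim.shape[0]):
        node, path = center, [center]
        while node != i:
            node = pred[i, node]       # step back toward the source i
            if node < 0:               # negative sentinel: unreachable
                path = None
                break
            path.append(node)
        out[i] = path[::-1] if path else None
    return out
```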
Jeremy S. Fried; Theresa B. Jain; Sara Loreno; Robert F. Keefe; Conor K. Bell
2017-01-01
The BioSum modeling framework summarizes current and prospective future forest conditions under alternative management regimes along with their costs, revenues and product yields. BioSum translates Forest Inventory and Analysis (FIA) data for input to the Forest Vegetation Simulator (FVS), summarizes FVS outputs for input to the treatment operations cost model (OpCost...
SU-E-J-127: Implementation of An Online Replanning Tool for VMAT Using Flattening Filter-Free Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ates, O; Ahunbay, E; Li, X
2015-06-15
Purpose: This is to report the implementation of an online replanning tool based on segment aperture morphing (SAM) for VMAT with flattening filter free (FFF) beams. Methods: The previously reported SAM algorithm, modified to accommodate VMAT with FFF beams, was implemented in a tool interfaced with a treatment planning system (Monaco, Elekta). The tool allows (1) outputting the beam parameters of the original VMAT plan from Monaco, and (2) inputting the apertures generated from the SAM algorithm into Monaco for the dose calculation on the daily CT/CBCT/MRI, in the following steps: (1) quickly generating the target contour based on the image of the day, using an auto-segmentation tool (ADMIRE, Elekta) with manual editing if necessary; (2) morphing the apertures of the original VMAT plan based on SAM to account for the interfractional change of the target from the planning to the daily images; (3) calculating the dose distribution for the new apertures with the same numbers of MU as in the original plan; (4) transferring the new plan into a record & verify system (MOSAIQ, Elekta); (5) performing a software-based pre-delivery QA; (6) delivering the adaptive plan for the fraction. This workflow was implemented on 16-CPU (2.6 GHz dual-core) hardware with a GPU and was tested on sample cases of prostate, pancreas and lung tumors. Results: The online replanning process can be completed within 10 minutes. The adaptive plans generally improved the plan quality when compared to the IGRT repositioning plans. The adaptive plans with FFF beams have better normal tissue sparing compared with those of FF beams. Conclusion: The online replanning tool based on SAM can quickly generate adaptive VMAT plans using FFF beams, with improved plan quality over the IGRT repositioning plans based on daily CT/CBCT/MRI, and can be used clinically. This research was supported by Elekta Inc. (Crawley, UK)
Memory-Augmented Cellular Automata for Image Analysis.
1978-11-01
case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)
Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Wadsak, Wolfgang; Savli, Markus; Kraus, Christoph; Birkfellner, Wolfgang; Ungersboeck, Johanna; Haeusler, Daniela; Mitterhauser, Markus; Karanikas, Georgios; Kasper, Siegfried; Frey, Richard; Lanzenberger, Rupert
2013-04-01
Image-derived input functions (IDIFs) represent a promising non-invasive alternative to arterial blood sampling for quantification in positron emission tomography (PET) studies. However, routine applications in patients and longitudinal designs are largely missing despite widespread attempts in healthy subjects. The aim of this study was to apply a previously validated approach to a clinical sample of patients with major depressive disorder (MDD) before and after electroconvulsive therapy (ECT). Eleven scans from 5 patients with venous blood sampling were obtained with the radioligand [carbonyl-(11)C]WAY-100635 at baseline, before and after 11.0±1.2 ECT sessions. IDIFs were defined by two different image reconstruction algorithms 1) OSEM with subsequent partial volume correction (OSEM+PVC) and 2) reconstruction based modelling of the point spread function (TrueX). Serotonin-1A receptor (5-HT1A) binding potentials (BPP, BPND) were quantified with a two-tissue compartment (2TCM) and reference region model (MRTM2). Compared to MRTM2, good agreement in 5-HT1A BPND was found when using input functions from OSEM+PVC (R(2)=0.82) but not TrueX (R(2)=0.57, p<0.001), which is further reflected by lower IDIF peaks for TrueX (p<0.001). Following ECT, decreased 5-HT1A BPND and BPP were found with the 2TCM using OSEM+PVC (23%-35%), except for one patient showing only subtle changes. In contrast, MRTM2 and IDIFs from TrueX gave unstable results for this patient, most probably due to a 2.4-fold underestimation of non-specific binding. Using image-derived and venous input functions defined by OSEM with subsequent PVC we confirm previously reported decreases in 5-HT1A binding in MDD patients after ECT. In contrast to reference region modeling, quantification with image-derived input functions showed consistent results in a clinical setting due to accurate modeling of non-specific binding with OSEM+PVC.
Empirical mode decomposition-based facial pose estimation inside video sequences
NASA Astrophysics Data System (ADS)
Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing
2010-03-01
We describe a new pose-estimation algorithm via integration of the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Altoé, I. L.; Karamitrou, A.; Ishii, M.; Ishii, H.
2015-12-01
DigitSeis is a new open-source, interactive digitization software written in MATLAB that converts digital, raster images of analog seismograms to readily usable, discretized time series using image processing algorithms. DigitSeis automatically identifies and corrects for various geometrical distortions of seismogram images that are acquired through the original recording, storage, and scanning procedures. With human supervision, the software further identifies and classifies important features such as time marks and notes, corrects time-mark offsets from the main trace, and digitizes the combined trace with an analysis to obtain as accurate timing as possible. Although a large effort has been made to minimize the human input, DigitSeis provides interactive tools for challenging situations such as trace crossings and stains in the paper. The effectiveness of the software is demonstrated with the digitization of seismograms that are over half a century old from the Harvard-Adam Dziewoński observatory that is still in operation as a part of the Global Seismographic Network (station code HRV and network code IU). The spectral analysis of the digitized time series shows no spurious features that may be related to the occurrence of minute and hour marks. They also display signals associated with significant earthquakes, and a comparison of the spectrograms with modern recordings reveals similarities in the background noise.
Local adaptive tone mapping for video enhancement
NASA Astrophysics Data System (ADS)
Lachine, Vladimir; Dai, Min
2015-03-01
As new technologies like High Dynamic Range cameras, AMOLED and high-resolution displays emerge on the consumer electronics market, it becomes very important to deliver the best picture quality for mobile devices. Tone Mapping (TM) is a popular technique to enhance visual quality. However, the traditional implementation of the Tone Mapping procedure is limited to a pixel value-to-value mapping, and its performance is restricted in terms of local sharpness and colorfulness. To overcome the drawbacks of traditional TM, we propose a spatial-frequency based framework in this paper. In the proposed solution, the intensity component of an input video/image signal is split into low-pass filtered (LPF) and high-pass filtered (HPF) bands. A Tone Mapping (TM) function is applied to the LPF band to improve the global contrast/brightness, and the HPF band is added back afterwards to keep the local contrast. The HPF band may be adjusted by a coring function to avoid noise boosting and signal overshooting. The colorfulness of the original image may be preserved or enhanced by correcting the chroma components with a saturation function. Localized content adaptation is further improved by dividing the image into a set of non-overlapping regions and modifying each region individually. The suggested framework allows users to implement a wide range of tone mapping applications with perceptual local sharpness and colorfulness preserved or enhanced. The corresponding hardware circuit may be integrated into a camera, video or display pipeline with a minimal hardware budget.
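A minimal sketch of the band-split framework might look like this; the Gaussian LPF, the gamma-shaped TM curve, and the coring threshold are placeholder choices, not the paper's exact functions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(y, gamma=0.6, core=2.0, gain=1.0):
    """Split luma into LPF + HPF bands, tone-map the LPF band with a gamma
    curve, core the HPF band to avoid noise boosting, and recombine.
    y: luma image with values in [0, 1]."""
    lpf = gaussian_filter(y, sigma=5)                # global (low-pass) band
    hpf = y - lpf                                    # local detail band
    hpf = np.sign(hpf) * np.maximum(np.abs(hpf) - core / 255.0, 0)  # coring
    out = lpf ** gamma + gain * hpf                  # TM on LPF, add detail back
    return np.clip(out, 0.0, 1.0)
```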
A framework for activity detection in wide-area motion imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D
2009-01-01
Wide-area persistent imaging systems are becoming increasingly cost effective, and now large areas of the earth can be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years there has been significant progress on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, the tracking performance at this scale is unreliable, and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destinations. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.
Validation of a low dose simulation technique for computed tomography images.
Muenzel, Daniela; Koehler, Thomas; Brown, Kevin; Zabić, Stanislav; Fingerle, Alexander A; Waldt, Simone; Bendik, Edgar; Zahel, Tina; Schneider, Armin; Dobritz, Martin; Rummeny, Ernst J; Noël, Peter B
2014-01-01
Evaluation of a new software tool for the generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisitions with a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. Image characteristics of the simulated low-dose scans were similar to the originals. The mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively, p>0.05. Confidence intervals of the discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities for staff education, protocol optimization and the introduction of new techniques.
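The validated algorithm operates on raw scan data; as a toy illustration of the underlying statistics (quantum noise variance scaling roughly as 1/mAs), an image-domain approximation could look like the following, which is purely illustrative and not the validated tool:

```python
import numpy as np

def simulate_low_dose(img_hu, mas_orig, mas_sim, sigma_orig=10.0, rng=None):
    """Toy approximation: since noise variance scales roughly as 1/mAs, the
    extra noise needed to go from mas_orig down to mas_sim has standard
    deviation sigma_orig * sqrt(mas_orig/mas_sim - 1). sigma_orig is the
    (assumed) noise level of the original scan in HU."""
    if rng is None:
        rng = np.random.default_rng(0)
    extra_sigma = sigma_orig * np.sqrt(mas_orig / mas_sim - 1.0)
    return img_hu + rng.normal(0.0, extra_sigma, size=img_hu.shape)
```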
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interactions or non-interactions among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
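The proposed index is computed much like the Sobol' first-order index, S_i = Var(E[Y|X_i]) / Var(Y). A minimal binning estimator of that first-order quantity (an illustration, not the authors' interaction index) is sketched below:

```python
import numpy as np

def first_order_index(x, y, bins=32):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by binning X_i: the
    conditional mean of Y per bin approximates E[Y | X_i]."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    occupied = [b for b in range(bins) if np.any(idx == b)]
    cond_means = np.array([y[idx == b].mean() for b in occupied])
    weights = np.array([(idx == b).mean() for b in occupied])
    mean_y = np.sum(weights * cond_means)
    return np.sum(weights * (cond_means - mean_y) ** 2) / y.var()
```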
An interactive framework for acquiring vision models of 3-D objects from 2-D images.
Motai, Yuichi; Kak, Avinash
2004-02-01
This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although this reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. As another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and objects containing curved features.
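Superimposing epipolar lines is a one-line computation given the fundamental matrix F: a clicked point x in image 1 maps to the line l' = F x in image 2. A small sketch, assuming a known F:

```python
import numpy as np

def epipolar_line(F, x):
    """Given fundamental matrix F and a point x = (u, v) in image 1,
    return the epipolar line (a, b, c) in image 2, with a*u' + b*v' + c = 0,
    normalized so that sqrt(a^2 + b^2) = 1."""
    l = F @ np.array([x[0], x[1], 1.0])   # homogeneous line coefficients
    return l / np.hypot(l[0], l[1])       # normalize for drawing/distance
```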
Automatic Feature Extraction System.
1982-12-01
exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery...the image data and different software modules for image queuing and formatting, the result of the input process will be images in standard AFES file...timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and
Single-Image Super-Resolution Based on Rational Fractal Interpolation.
Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming
2018-08-01
This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors visit Russia. Most of these visitors may have problems typing Russian words when using a digital dictionary. This is because the letters used in Russia and the countries around it, called Cyrillic, have different shapes from Latin letters, and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words. Instead of typing the Cyrillic words directly, a camera can be used to capture an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation and thinning. Next, the feature extraction process is applied to the image. Cyrillic letter recognition in the image is done by utilizing the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters from the computer-generated images and 88.89% of Cyrillic letters from images captured by the smartphone's camera. For word recognition, SOM successfully recognized 292 words and partially recognized 58 words from the images captured by the smartphone's camera. Therefore, the accuracy of word recognition using SOM is 83.42%.
Optical image encryption method based on incoherent imaging and polarized light encoding
NASA Astrophysics Data System (ADS)
Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.
2018-05-01
We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed at the input plane of the imaging structure to encrypt the polarized state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally have the best diagnostic quality and give more details about the visibility of vessels and structures in capsule endoscopy images.
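A compact NumPy rendering of the described procedure might look like this; interpreting the half-unit weighted bilinear kernel as a plain 2x2 average, and the overlap as the four shifted group positions, are assumptions based on the description above:

```python
import numpy as np

def half_unit_enhance(rgb):
    """Per RGB channel: average each 2x2 group of adjacent pixels (the
    bilinear kernel with half-unit weights), overlapped horizontally and
    vertically, then take the half-sum of original and preliminary images."""
    img = rgb.astype(float)
    pad = np.pad(img, ((0, 1), (0, 1), (0, 0)), mode="edge")
    prelim = (pad[:-1, :-1] + pad[:-1, 1:] +
              pad[1:, :-1] + pad[1:, 1:]) / 4.0   # preliminary-enhanced image
    out = (img + prelim) / 2.0                    # final-enhanced image
    return np.clip(out, 0, 255).astype(rgb.dtype)
```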
NASA Technical Reports Server (NTRS)
Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Habowski, Tyler (Inventor)
2017-01-01
Method for physiologically modulating videogames and simulations includes utilizing input from a motion-sensing video game system and input from a physiological signal acquisition device. The inputs from the physiological signal sensors are utilized to change the response of a user's avatar to inputs from the motion-sensing sensors. The motion-sensing system comprises a 3D sensor system having full-body 3D motion capture of a user's body. This arrangement encourages health-enhancing physiological self-regulation skills or therapeutic amplification of healthful physiological characteristics. The system provides increased motivation for users to utilize biofeedback as may be desired for treatment of various conditions.
Analysis of ROC on chest direct digital radiography (DR) after image processing in diagnosis of SARS
NASA Astrophysics Data System (ADS)
Lv, Guozheng; Lan, Rihui; Zeng, Qingsi; Zheng, Zhong
2004-05-01
The Severe Acute Respiratory Syndrome (SARS, also called Infectious Atypical Pneumonia), which initially broke out in late 2002, has threatened the public's health seriously. Confirming which patients have contracted SARS has become an urgent diagnostic issue. This paper intends to evaluate the importance of image processing in the diagnosis of SARS at the early stage. Receiver Operating Characteristic (ROC) analysis has been employed in this study to compare the value of DR images in the diagnosis of SARS patients before and after image processing by the Symphony software supplied by E-Com Technology Ltd.; DR image studies of 72 confirmed or suspected SARS patients were reviewed. All the images taken from the studied patients were processed by Symphony. Both the original and processed images were included in the ROC analysis, based on which the ROC parameters for each group of images were produced as described below. For processed images: a = 1.9745, b = 1.4275, SA = 0.8714; for original images: a = 0.9066, b = 0.8310, SA = 0.7572 (a = intercept, b = slope, SA = area under the curve). The result shows a significant difference between the original images and the processed images (P<0.01). In summary, the images processed by Symphony are superior to the original ones in detecting opacity lesions, and increase the accuracy of SARS diagnosis.
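The reported (a, b, SA) triplets are mutually consistent under the standard binormal ROC model, in which the area under the curve is Φ(a/√(1+b²)); the short check below reproduces both SA values (an illustration, not the authors' code):

```python
from math import sqrt
from scipy.stats import norm

def binormal_auc(a, b):
    """Area under a binormal ROC curve with intercept a and slope b:
    AUC = Phi(a / sqrt(1 + b^2))."""
    return norm.cdf(a / sqrt(1.0 + b * b))

print(binormal_auc(1.9745, 1.4275))  # ~0.871, matching SA for processed images
print(binormal_auc(0.9066, 0.8310))  # ~0.757, matching SA for original images
```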
Determination of mango fruit from binary image using randomized Hough transform
NASA Astrophysics Data System (ADS)
Rizon, Mohamed; Najihah Yusri, Nurul Ain; Abdul Kadir, Mohd Fadzil; bin Mamat, Abd. Rasid; Abd Aziz, Azim Zaliha; Nanaa, Kutiba
2015-12-01
A method of detecting mango fruit from an RGB input image is proposed in this research. The input image is processed to obtain a binary image using texture analysis and morphological operations (dilation and erosion). The Randomized Hough Transform (RHT) method is then used to find the best ellipse fit for each binary region. By using texture analysis, the system can detect mango fruit that partially overlap each other and mango fruit that are partially occluded by leaves. The combination of texture analysis and morphological operators can isolate partially overlapping fruit and fruit that are partially occluded by leaves. The parameters derived from the RHT method were used to calculate the center of the ellipse. The center of the ellipse acts as the gripping point for the fruit-picking robot. As a result, the detection rate was up to 95% for fruit that are partially overlapping or partially covered by leaves.
GREAT: a gradient-based color-sampling scheme for Retinex.
Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo
2017-04-01
Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand
2018-07-01
A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix with a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into the corresponding 2D pattern. Then, each pattern is encoded into a phase-only mask by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the process of computational ghost imaging to reconstruct the original information with high quality. When a significantly reduced number of phase-only masks is used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.
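The Hadamard pattern generation and the traditional correlation reconstruction can be sketched as follows; the iterative phase-retrieval step that encodes each pattern into a phase-only mask is omitted, and the object's pixel count is assumed to be a power of two, as scipy's hadamard() requires.

```python
import numpy as np
from scipy.linalg import hadamard

def ghost_reconstruct(obj):
    # obj: 2D array whose pixel count is a power of two (e.g. 8x8).
    n = obj.size
    patterns = ((hadamard(n) + 1) // 2).reshape(n, *obj.shape)  # 0/1 rows
    bucket = np.einsum('kij,ij->k', patterns, obj)  # single-pixel values
    # Traditional GI correlation: <(S - <S>) (P - <P>)>
    g = np.einsum('k,kij->ij', bucket - bucket.mean(),
                  patterns - patterns.mean(0)) / n
    return g
```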
Implementation of a watershed algorithm on FPGAs
NASA Astrophysics Data System (ADS)
Zahirazami, Shahram; Akil, Mohamed
1998-10-01
In this article we present an implementation of a watershed algorithm on a multi-FPGA architecture. This implementation is based on a hierarchical FIFO (H-FIFO), with a separate FIFO for each gray level. The gray-scale value of a pixel is taken as the altitude of the point, so the image is viewed as a relief. We proceed by a flooding step, as if we immersed the relief in a lake: the water rises, and when the waters of two different catchment basins reach each other, we construct a separator, or 'watershed'. This approach is data dependent, hence the processing time differs from image to image. The H-FIFO is used to guarantee the nature of immersion, which requires two types of priority: all points of altitude n are processed before any point of altitude n + 1, and within an altitude, water propagates with a constant velocity in all directions from the source. This operator needs two images as input: an original image (or its gradient) and the marker image. A classic way to construct the marker image is to build an image of minimal regions, each with its own unique label. This label is the color of the water and is used to detect whether two different waters touch each other. The algorithm first fills the hierarchical FIFO with the neighbors of all regions that are not colored. Next, it fetches the first pixel from the first non-empty FIFO and processes it: the pixel takes the color of its neighbor, and all its neighbors that are not already in the H-FIFO are put into their corresponding FIFOs. The process is over when the H-FIFO is empty. The result is a segmented and labeled image.
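In software, the H-FIFO flooding logic might look like the following sketch (the paper's contribution is the FPGA mapping, not this code). It assumes 4-connectivity, an integer-valued image, and a marker array where 0 means unlabeled, and it omits the explicit construction of watershed lines between competing labels.

```python
import numpy as np
from collections import deque

def watershed_hfifo(img, markers):
    # img: 2D int array of gray levels; markers: same shape, 0 = unlabeled.
    lab = markers.copy()
    h, w = img.shape
    nlev = int(img.max()) + 1
    fifos = [deque() for _ in range(nlev)]          # one FIFO per gray level
    queued = np.zeros((h, w), dtype=bool)
    nbrs = ((-1, 0), (1, 0), (0, -1), (0, 1))

    def push(y, x, lev):
        fifos[lev].append((y, x))
        queued[y, x] = True

    # Seed with the unlabeled neighbors of every marker pixel.
    for y in range(h):
        for x in range(w):
            if lab[y, x]:
                for dy, dx in nbrs:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not lab[ny, nx] \
                            and not queued[ny, nx]:
                        push(ny, nx, int(img[ny, nx]))

    # Flood: finish every pixel of altitude n before touching n + 1.
    for lev in range(nlev):
        q = fifos[lev]
        while q:
            y, x = q.popleft()
            for dy, dx in nbrs:                     # adopt a neighbor's label
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and lab[ny, nx]:
                    lab[y, x] = lab[ny, nx]
                    break
            for dy, dx in nbrs:                     # enqueue the new frontier
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not lab[ny, nx] \
                        and not queued[ny, nx]:
                    push(ny, nx, max(int(img[ny, nx]), lev))
    return lab
```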
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagesh, S Setlur; Rana, R; Russ, M
Purpose: CMOS-based aSe detectors, compared to CsI-TFT-based flat panels, have the advantages of higher spatial sampling due to smaller pixel size and decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution then becomes the focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone beam computed tomography, clinical fluoroscopy and angiography. In this work a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected with focal-spot blur, first a set of high-resolution images of a stent (FRED from Microvention, Inc.) were acquired using a 75µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. Then the averaged image was blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring using a threshold-based, inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selection of too wide a function would cause severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pinhole with a high-resolution detector. This spread function can be used to deblur input images that are acquired at corresponding magnifications to correct for the focal-spot blur. For CBCT applications, the magnification of specific objects can be obtained using initial reconstructions and then corrected for focal-spot blurring to improve resolution. Similarly, if object magnification can be determined, such correction may be applied in fluoroscopy and angiography.
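A minimal version of such threshold-based inverse filtering, assuming the PSF is supplied at the image size and centered, could read:

```python
import numpy as np

def deblur_inverse(blurred, psf, thresh=1e-2):
    # Divide by the transfer function only where its magnitude exceeds
    # the threshold; elsewhere leave the spectrum untouched, which is
    # what suppresses the noise blow-up of a naive inverse filter.
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    F = B.copy()
    good = np.abs(otf) > thresh
    F[good] = B[good] / otf[good]
    return np.real(np.fft.ifft2(F))
```

Consistent with the abstract, assuming too wide a deconvolution Gaussian (an OTF narrower than the true one) pushes the division toward very small values and produces severe artifacts.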
Enhanced CT images by the wavelet transform improving diagnostic accuracy of chest nodules.
Guo, Xiuhua; Liu, Xiangye; Wang, Huan; Liang, Zhigang; Wu, Wei; He, Qian; Li, Kuncheng; Wang, Wei
2011-02-01
The objective of this study was to compare the diagnostic accuracy in the interpretation of chest nodules using original CT images versus enhanced CT images based on the wavelet transform. The CT images of 118 patients with cancers and 60 with benign nodules were used in this study. All images were enhanced through an algorithm based on the wavelet transform. Two experienced radiologists interpreted all the images in two reading sessions. The reading sessions were separated by a minimum of 1 month in order to minimize the effect of observers' recall. The Mann-Whitney U nonparametric test was used to analyze the interpretation results between original and enhanced images. The Kruskal-Wallis H nonparametric test of K independent samples was used to investigate the related factors which could affect the diagnostic accuracy of observers. The area under the ROC curves for the original and enhanced images was 0.681 and 0.736, respectively. There was a significant difference in diagnosing the malignant nodules between the original and enhanced images (z = 7.122, P < 0.001), whereas there was no significant difference in diagnosing the benign nodules (z = 0.894, P = 0.371). The results also showed a significant difference between original and enhanced images when the size of the nodules was larger than 2 cm (Z = -2.509, P = 0.012), indicating that nodule size is a critical factor in observers' diagnostic accuracy. This study indicated that image enhancement based on the wavelet transform could improve the diagnostic accuracy of radiologists for malignant chest nodules.
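The enhancement rule is not detailed in the abstract; one common wavelet-based scheme, boosting the detail sub-bands before reconstruction, is sketched below with PyWavelets (the gain and wavelet choice are illustrative, not the study's).

```python
import pywt

def wavelet_enhance(img, wavelet='db4', level=2, gain=1.8):
    # Decompose, boost the detail sub-bands, reconstruct. The study's
    # exact enhancement rule is not specified in the abstract.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    boosted = [coeffs[0]] + [tuple(gain * d for d in details)
                             for details in coeffs[1:]]
    return pywt.waverec2(boosted, wavelet)
```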
The Teacher Observation Form: Revisions and Updates
ERIC Educational Resources Information Center
Peters, Scott J.; Gates, Jillian C.
2010-01-01
This article discusses the original development and subsequent updates and revisions made to the Teacher Observation Form (TOF). The TOF is a 12-item form to be used by evaluators in the observation of teachers of gifted and talented students. After nearly 25 years of use, the original TOF was revised based on input from content experts and…
Depth image super-resolution via semi self-taught learning framework
NASA Astrophysics Data System (ADS)
Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo
2017-06-01
Depth images have recently attracted much attention in computer vision and in high-quality 3D content production for 3DTV and 3D movies. In this paper, we present a new semi self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution, or of multiple aligned depth maps. Our framework consists of cascaded random forests proceeding from coarse to fine results. We learn the surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective that is calculated on the output image space and the input edge space in the random forests. Experiments show the effectiveness and superiority of our method against other techniques with or without aligned RGB information.
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
Fiber optic combiner and duplicator
NASA Technical Reports Server (NTRS)
1979-01-01
The investigation of the possible development of two optical devices is described: one to take two images as inputs and present their arithmetic sum as a single output, the other to take one image as input and present two identical images as outputs. Significant engineering time was invested in establishing precision fiber-optics drawing capabilities, real-time monitoring of the fiber size, and exact measurement of fiber-optic ribbons. Various assembly procedures and tooling designs were investigated, and prototype models were built and evaluated, establishing technical assurance that the device was feasible and could be fabricated. Although the interleaver specification in its entirety was not achieved, the techniques developed in the course of the program improved the quality of images transmitted by fiber optic arrays by at least an order of magnitude. These techniques are already being applied to the manufacture of precise fiber optic components.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet the forecasting targets for the key technical indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
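The texture part of such a feature vector can be sketched with scikit-image; the color (HSI) and shape features, the Isomap reduction, and the cuckoo-search-tuned BP network are omitted, so this only illustrates the gray-level co-occurrence step.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def froth_texture_features(gray_u8):
    # gray_u8: uint8 froth image. Co-occurrence statistics at distance 1
    # for two directions; four classic Haralick-style properties.
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```

The resulting vectors would then be stacked into a feature matrix and fed to a small feedforward (BP) network, e.g. scikit-learn's MLPRegressor.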
Spotsizer: High-throughput quantitative analysis of microbial growth.
Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg
2016-10-01
Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.
Functional transformations of odor inputs in the mouse olfactory bulb.
Adam, Yoav; Livneh, Yoav; Miyamichi, Kazunari; Groysman, Maya; Luo, Liqun; Mizrahi, Adi
2014-01-01
Sensory inputs from the nasal epithelium to the olfactory bulb (OB) are organized as a discrete map in the glomerular layer (GL). This map is then modulated by distinct types of local neurons and transmitted to higher brain areas via mitral and tufted cells. Little is known about the functional organization of the circuits downstream of glomeruli. We used in vivo two-photon calcium imaging for large-scale functional mapping of distinct neuronal populations in the mouse OB, at single-cell resolution. Specifically, we imaged odor responses of mitral cells (MCs), tufted cells (TCs) and glomerular interneurons (GL-INs). Mitral cell population activity was heterogeneous and only mildly correlated with the olfactory receptor neuron (ORN) inputs, supporting the view that discrete input maps undergo significant transformations at the output level of the OB. In contrast, population activity profiles of TCs were dense, and highly correlated with the odor inputs in both space and time. Glomerular interneurons were also highly correlated with the ORN inputs, but showed higher activation thresholds, suggesting that these neurons are driven by strongly activated glomeruli. Temporally, upon persistent odor exposure, TCs quickly adapted. In contrast, both MCs and GL-INs showed diverse temporal response patterns, suggesting that GL-INs could contribute to the transformations MCs undergo at slow time scales. Our data suggest that sensory odor maps are transformed by TCs and MCs in different ways, forming two distinct and parallel information streams.
Comparison of gesture and conventional interaction techniques for interventional neuroradiology.
Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian
2017-09-01
Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control failed to exceed the clinical input approach. However, the Myo Gesture Control Armband showed potential for a simple image selection task. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the influence of task characteristics, such as task complexity, on performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.
NASA Astrophysics Data System (ADS)
Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.
1993-07-01
We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-to-function relationships. A thick-slice, high-temporal-resolution mode is used to follow a bolus contrast agent for blood flow evaluation and is fused with a high-spatial-resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high-resolution volume image. A text file containing these values along with each voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery, from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiology-based research findings to demonstrate the strengths of combining dynamic CT and HRCT, relative to other scanning modalities, to uniquely characterize normal pulmonary physiology and pathophysiology.
A Simulation Model Of A Picture Archival And Communication System
NASA Astrophysics Data System (ADS)
D'Silva, Vijay; Perros, Harry; Stockbridge, Chris
1988-06-01
A PACS architecture was simulated to quantify its performance. The model consisted of reading stations, acquisition nodes, communication links, a database management system, and a storage system consisting of magnetic and optical disks. Two levels of storage were simulated: a high-speed magnetic disk system for short-term storage, and optical disk jukeboxes for long-term storage. The communications link was a single bus via which image data were requested and delivered. Real input data to the simulation model were obtained from surveys of radiology procedures (Bowman Gray School of Medicine). From these, the following inputs were calculated: the size of short-term storage necessary; the amount of long-term storage required; the frequency of access of each store; and the distribution of the number of films requested per diagnosis. The performance measures obtained were the mean retrieval time for an image, mean queue lengths, and the utilization of each device. Parametric analysis was done for the bus speed, the packet size for the communications link, the record size on the magnetic disk, the compression ratio, the influx of new images, the DBMS time, and diagnosis think times. Plots give the optimum values for those values of input speed and device performance which are sufficient to achieve subsecond image retrieval times.
Origin and Ion Charge State Evolution of Solar Wind Transients during 4 - 7 August 2011
NASA Astrophysics Data System (ADS)
Rodkin, D.; Goryaev, F.; Pagano, P.; Gibb, G.; Slemzin, V.; Shugay, Y.; Veselovsky, I.; Mackay, D. H.
2017-07-01
We present a study of the complex event consisting of several solar wind transients detected by the Advanced Composition Explorer (ACE) on 4 - 7 August 2011, which caused a geomagnetic storm with Dst=-110 nT. The supposed coronal sources, three flares and coronal mass ejections (CMEs), occurred on 2 - 4 August 2011 in active region (AR) 11261. To investigate the solar origin and formation of these transients, we study the kinematic and thermodynamic properties of the expanding coronal structures using the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) EUV images and differential emission measure (DEM) diagnostics. The Helioseismic and Magnetic Imager (HMI) magnetic field maps were used as the input data for the 3D magnetohydrodynamic (MHD) model to describe the flux rope ejection (Pagano, Mackay, and Poedts, 2013b). We characterize the early phase of the flux rope ejection in the corona, where the usual three-component CME structure formed. The flux rope was ejected with a speed of about 200 km s^{-1} to the height of 0.25 R_{⊙}. The kinematics of the modeled CME front agrees well with the Solar Terrestrial Relations Observatory (STEREO) EUV measurements. Using the results of the plasma diagnostics and MHD modeling, we calculate the ion charge ratios of carbon and oxygen as well as the mean charge state of iron ions of the 2 August 2011 CME, taking into account the processes of heating, cooling, expansion, ionization, and recombination of the moving plasma in the corona up to the frozen-in region. We estimate a probable heating rate of the CME plasma in the low corona by matching the calculated ion composition parameters of the CME with those measured in situ for the solar wind transients. We also consider the similarities and discrepancies between the results of the MHD simulation and the observations.
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
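The DRPE core, before the Bayer sampling and photon-counting steps, amounts to two random phase screens around a Fourier transform; a numpy sketch is given below (the photon-counting of the amplitude is omitted).

```python
import numpy as np

def drpe_encrypt(img, seed=0):
    # Random phase in the input plane, a second random phase in the
    # Fourier plane; the result is stationary-white-noise-like.
    rng = np.random.default_rng(seed)
    p1 = np.exp(2j * np.pi * rng.random(img.shape))
    p2 = np.exp(2j * np.pi * rng.random(img.shape))
    return np.fft.ifft2(np.fft.fft2(img * p1) * p2), (p1, p2)

def drpe_decrypt(enc, keys):
    # Invert the Fourier-plane phase, transform back, drop the
    # input-plane phase by taking the modulus (img assumed nonnegative).
    p1, p2 = keys
    return np.abs(np.fft.ifft2(np.fft.fft2(enc) * np.conj(p2)))
```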
Fidelity between Gaussian mixed states with quantum state quadrature variances
NASA Astrophysics Data System (ADS)
Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao
2016-04-01
In this paper, starting from the original definition of fidelity for a pure state, we first give a well-defined expanded fidelity between two Gaussian mixed states. It is related to the variances of the output and input states in quantum information processing. It is convenient for quantifying quantum teleportation (quantum cloning) experiments, since the variances of the input (output) state are measurable. Furthermore, we also conclude that the fidelity for a pure input state is smaller than the fidelity for a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).
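For context, the pure-state definition the authors start from, and its standard generalization to mixed states (the Uhlmann fidelity), are the textbook expressions below; the paper's contribution is an expansion of the latter for Gaussian states in terms of measurable quadrature variances.

```latex
F(|\psi\rangle, |\phi\rangle) = \left| \langle \psi | \phi \rangle \right|^{2},
\qquad
F(\rho, \sigma) = \left( \operatorname{Tr} \sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}} \right)^{2}.
```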
Sacrococcygeal teratoma: a literature review.
Penny, Steven M
2012-01-01
To review the current and relevant literature pertaining to the origin, imaging, and treatment for the sacrococcygeal teratoma in order to obtain information beneficial for radiologic technologists. Both peer-reviewed articles and contemporary imaging textbooks were used in the research for this review. The material was analyzed further for practical and instructive components for imaging professionals. The inquiry regarding the origin, imaging, and treatment of the sacrococcygeal teratoma yielded important facts and clinically useful information that radiologic technologists can use. Because the sacrococcygeal teratoma is the most common congenital tumor found in newborns, all imaging professionals who may be asked to actively or indirectly care for a patient diagnosed with the condition should have a fundamental knowledge of the origin, imaging, and treatment of this potentially fatal tumor.
NASA Astrophysics Data System (ADS)
Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.
2012-03-01
For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, and a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC, and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69°/στ = 0.045-0.048, σθ = 2.79°/στ = 0.031-0.038, σθ = 2.34°/στ = 0.023-0.026, and σθ = 1.89°/στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images respectively. We showed that NC outperforms NN and LTB. For a small number of input images, the advantage of NC is more pronounced.
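Normalized convolution itself is compact: the signal times its certainty, and the certainty alone, are filtered with the same applicability function and then divided. A sketch over a regular (θ, τ) grid, with missing samples marked by zero certainty, is shown below; the anisotropic kernel plays the role of (σθ, στ), and wrapping in τ (cyclic cardiac phase) is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution(samples, certainty, sigma_theta, sigma_tau):
    # samples: values on a (theta, tau) grid; certainty: 1 where a 2D
    # image contributed a sample, 0 elsewhere. The same Gaussian
    # applicability filters both; the division renormalizes.
    num = gaussian_filter(samples * certainty, (sigma_theta, sigma_tau),
                          mode='wrap')
    den = gaussian_filter(certainty, (sigma_theta, sigma_tau), mode='wrap')
    return num / np.maximum(den, 1e-12)
```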
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
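A sketch of the multiscale LDA ensemble with product-rule fusion, using scikit-learn, is given below; the scale choices and preprocessing are placeholders, not the paper's exact pipeline.

```python
import numpy as np
from skimage.transform import resize
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def multiscale_lda_predict(train_imgs, train_labels, test_imgs,
                           scales=(64, 32, 16)):
    # One LDA per image scale; per-scale class posteriors are fused
    # with the product rule. Returns indices into clf.classes_.
    prob = np.ones((len(test_imgs), 2))
    for s in scales:
        Xtr = np.array([resize(im, (s, s)).ravel() for im in train_imgs])
        Xte = np.array([resize(im, (s, s)).ravel() for im in test_imgs])
        clf = LinearDiscriminantAnalysis().fit(Xtr, train_labels)
        prob *= clf.predict_proba(Xte)      # product-rule combination
    return prob.argmax(axis=1)
```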
Improved patch-based learning for image deblurring
NASA Astrophysics Data System (ADS)
Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng
2015-05-01
Most recent image deblurring methods use only the valid information found in the input image as the clue for restoring the blurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. The patch-based method not only uses the valid information of the input image itself, but also utilizes the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming to evaluate, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model components with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to remove the ringing artifacts produced by the traditional patch-based method. Extensive experiments are performed. Experimental results verify that our method can effectively reduce the execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.
Quaternion Based Thermal Condition Monitoring System
NASA Astrophysics Data System (ADS)
Wong, Wai Kit; Loo, Chu Kiong; Lim, Way Soong; Tan, Poi Ngee
In this paper, we propose a new and effective machine condition monitoring system using a log-polar mapper, a quaternion-based thermal image correlator, and a max-product fuzzy neural network classifier. Two classification characteristics, namely the peak-to-sidelobe ratio (PSR) and the real-to-complex ratio of the discrete quaternion correlation output (p-value), are applied in the proposed machine condition monitoring system. Large PSR and p-values are observed for a good match between the input thermal image and a particular reference image, while small PSR and p-values are observed for a poor match. In simulation, we also found that log-polar mapping helps solve the rotation and scaling invariance problems in quaternion-based thermal image correlation. Besides that, log-polar mapping offers a two-fold data compression capability, and it smooths the output correlation plane, allowing more reliable measurement of the PSR and p-values. Simulation results also show that the proposed system is an efficient machine condition monitoring system with an accuracy of more than 98%.
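The PSR side of this can be sketched directly on a (real-valued) correlation plane; the window sizes are illustrative, and the quaternion correlation itself is not reproduced here.

```python
import numpy as np

def peak_to_sidelobe_ratio(corr, excl=5, win=32):
    # PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe
    # region is a window around the peak with a small central zone excluded.
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    y0, x0 = max(py - win, 0), max(px - win, 0)
    region = corr[y0:py + win + 1, x0:px + win + 1].astype(float).copy()
    cy, cx = py - y0, px - x0
    region[max(cy - excl, 0):cy + excl + 1,
           max(cx - excl, 0):cx + excl + 1] = np.nan
    side = region[~np.isnan(region)]
    return (corr[py, px] - side.mean()) / side.std()
```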
Reversible watermarking for authentication of DICOM images.
Zain, J M; Baldwin, L P; Clarke, M
2004-01-01
We propose a watermarking scheme that can recover the original image from the watermarked one. The purpose is to verify the integrity and authenticity of DICOM images. We used 800×600×8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the whole image is embedded in the least significant bits of the RONI (Region of Non-Interest). If the image has not been altered, the watermark will be extracted and the original image will be recovered. The SHA-256 hash of the recovered image will be compared with the extracted watermark for authentication.
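A simplified variant of the embedding step is sketched below for an 8-bit image: the digest is computed with the embedding slots zeroed and then written into the first 256 RONI least-significant bits. The published scheme is additionally reversible (the replaced LSBs are recoverable), which this sketch does not implement.

```python
import hashlib
import numpy as np

def embed_hash_in_roni(img_u8, roni_mask):
    # img_u8: uint8 image; roni_mask: bool mask with >= 256 True pixels.
    work = img_u8.copy()
    ys, xs = np.nonzero(roni_mask)
    ys, xs = ys[:256], xs[:256]              # embedding slots
    work[ys, xs] &= 0xFE                     # clear slot LSBs, then hash
    digest = hashlib.sha256(work.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    work[ys, xs] |= bits
    return work
```

Verification mirrors this: extract the 256 LSBs, zero the slots, recompute the hash, and compare.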
Using pyramids to define local thresholds for blob detection.
Shneier, M
1983-03-01
A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
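A minimal version of the pyramid construction and the spot-to-region mapping might look like this (2x2 averaging and the spot value as the local threshold are simple choices consistent with, but not necessarily identical to, the paper's variants):

```python
import numpy as np

def build_pyramid(img, levels=4):
    # Each level halves resolution by 2x2 averaging.
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = (p.shape[0] // 2) * 2, (p.shape[1] // 2) * 2
        pyr.append(p[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

def blob_from_spot(img, pyr, level, spot_yx):
    # A spot at (y, x) in a low-resolution level corresponds to a
    # 2^level x 2^level block of the original image; the spot's value
    # serves as a cheap local threshold inside that block.
    y, x = spot_yx
    s = 2 ** level
    block = img[y * s:(y + 1) * s, x * s:(x + 1) * s]
    return block >= pyr[level][y, x]
```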
NASA Technical Reports Server (NTRS)
Park, Brian V. (Inventor)
1997-01-01
An immersive cyberspace system is presented which provides visual, audible, and vibrational inputs to a subject remaining in neutral immersion, and also provides for subject control input. The immersive cyberspace system includes a relaxation chair and a neutral immersion display hood. The relaxation chair supports a subject positioned thereupon, and places the subject in position which merges a neutral body position, the position a body naturally assumes in zero gravity, with a savasana yoga position. The display hood, which covers the subject's head, is configured to produce light images and sounds. An image projection subsystem provides either external or internal image projection. The display hood includes a projection screen moveably attached to an opaque shroud. A motion base supports the relaxation chair and produces vibrational inputs over a range of about 0-30 Hz. The motion base also produces limited translation and rotational movements of the relaxation chair. These limited translational and rotational movements, when properly coordinated with visual stimuli, constitute motion cues which create sensations of pitch, yaw, and roll movements. Vibration transducers produce vibrational inputs from about 20 Hz to about 150 Hz. An external computer, coupled to various components of the immersive cyberspace system, executes a software program and creates the cyberspace environment. One or more neutral hand posture controllers may be coupled to the external computer system and used to control various aspects of the cyberspace environment, or to enter data during the cyberspace experience.
Nguyen, T B; Cron, G O; Bezzina, K; Perdrizet, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Thornhill, R E; Zanette, B; Cameron, I G
2016-12-01
Tumor CBV is a prognostic and predictive marker for patients with gliomas. Tumor CBV can be measured noninvasively with different MR imaging techniques; however, it is not clear which of these techniques most closely reflects histologically measured tumor CBV. Our aim was to investigate the correlations between dynamic contrast-enhanced and DSC-MR imaging parameters and immunohistochemistry in patients with gliomas. Forty-three patients with a new diagnosis of glioma underwent a preoperative MR imaging examination with dynamic contrast-enhanced and DSC sequences. Unnormalized and normalized cerebral blood volume was obtained from DSC-MR imaging. Two sets of plasma volume and volume transfer constant maps were obtained from dynamic contrast-enhanced MR imaging. Plasma volume obtained from the phase-derived vascular input function and bookend T1 mapping (Vp_Φ) and volume transfer constant obtained from the phase-derived vascular input function and bookend T1 mapping (K^trans_Φ) were determined. Plasma volume obtained from the magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from the magnitude-derived vascular input function (K^trans_SI) were acquired, without T1 mapping. Using CD34 staining, we measured microvessel density and microvessel area within 3 representative areas of the resected tumor specimen. The Mann-Whitney U test was used to test for differences according to grade and degree of enhancement. The Spearman correlation was performed to determine the relationship between dynamic contrast-enhanced and DSC parameters and histopathologic measurements. Microvessel area, microvessel density, dynamic contrast-enhanced, and DSC-MR imaging parameters varied according to the grade and degree of enhancement (P < .05). A strong correlation was found between microvessel area and Vp_Φ and between microvessel area and unnormalized blood volume (r_s ≥ 0.61). A moderate correlation was found between microvessel area and normalized blood volume, microvessel area and Vp_SI, microvessel area and K^trans_Φ, microvessel area and K^trans_SI, microvessel density and Vp_Φ, microvessel density and unnormalized blood volume, and microvessel density and normalized blood volume (0.44 ≤ r_s ≤ 0.57). A weaker correlation was found between microvessel density and K^trans_Φ and between microvessel density and K^trans_SI (r_s ≤ 0.41). With dynamic contrast-enhanced MR imaging, use of a phase-derived vascular input function and bookend T1 mapping improves the correlation between immunohistochemistry and plasma volume, but not between immunohistochemistry and the volume transfer constant. With DSC-MR imaging, normalization of tumor CBV could decrease the correlation with microvessel area. © 2016 by American Journal of Neuroradiology.
Anomalous neuronal responses to fluctuated inputs
NASA Astrophysics Data System (ADS)
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to the fluctuated inputs where an irregularity of spike trains is inversely proportional to an input irregularity. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability in the resting state and the repetitive firing state, which indicated that the bistability was the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by additional findings that the anomalous responses were reproduced by mimicking the bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing. For such spike train decorrelation, irregular firing is key. Our results indicated that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain.
Fourier removal of stripe artifacts in IRAS images
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1987-01-01
By working in the Fourier plane, approximate removal of stripe artifacts in IRAS images can be effected. The image of interest is smoothed and subtracted from the original, giving the high-spatial-frequency part. This 'filtered' image is then clipped to remove point sources and then Fourier transformed. Subtracting the Fourier components contributing to the stripes in this image from the Fourier transform of the original and transforming back to the image plane yields substantial removal of the stripes.
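The pipeline can be sketched as follows, under the simplifying assumption that the stripes run horizontally (constant along x), so their energy sits in the kx = 0 column of the 2D FFT; real IRAS stripes follow the scan angles, so the masked components would be chosen along those directions instead.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def destripe(img, sigma=5.0, clip=3.0):
    # High-pass part: original minus smoothed; clip to suppress point
    # sources before estimating the stripe spectrum.
    hp = np.clip(img - gaussian_filter(img, sigma), -clip, clip)
    F_img = np.fft.fft2(img)
    F_hp = np.fft.fft2(hp)
    # Horizontal stripes (constant along x) live in the kx = 0 column;
    # subtract their estimated components from the original's transform,
    # leaving DC untouched.
    mask = np.zeros(img.shape, dtype=bool)
    mask[1:, 0] = True
    F_img[mask] -= F_hp[mask]
    return np.real(np.fft.ifft2(F_img))
```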
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
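A highly simplified, illustrative sketch of the two stages, estimating per-pulse range shifts by cross-correlating range profiles across slow time and undoing them with a linear phase ramp via the Fourier shift theorem, is given below. The variable names and constants are assumptions for illustration, not the patented processing chain.

```python
import numpy as np

def estimate_range_shifts(profiles):
    # Per-pulse range displacement (in samples) from the circular
    # cross-correlation peak of each slow-time profile against the first.
    ref_fft = np.fft.fft(profiles[0])
    shifts = []
    for p in profiles:
        xc = np.fft.ifft(np.fft.fft(p) * np.conj(ref_fft))
        k = int(np.argmax(np.abs(xc)))
        shifts.append(k if k <= len(p) // 2 else k - len(p))
    return np.array(shifts)

def correct_range_error(raw, shifts, fs):
    # Undo each pulse's range displacement with a linear phase ramp
    # (Fourier shift theorem); raw is pulses x range samples, fs is
    # the range sample rate.
    n = raw.shape[1]
    f = np.fft.fftfreq(n, d=1.0 / fs)
    ramp = np.exp(2j * np.pi * f[None, :] * (shifts[:, None] / fs))
    return np.fft.ifft(np.fft.fft(raw, axis=1) * ramp, axis=1)
```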
Achromatical Optical Correlator
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Liu, Hua-Kuang
1989-01-01
Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatical optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatical correlators will be useful for recognition of patterns; for example, in industrial inspection and search for selected features in aerial photographs.
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained so that, from an unprocessed input image, the quality of the output image stayed close to that of the ground-truth image. For the image denoising evaluation, noisy input images were used for the training. Using more than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
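A small residual convolutional autoencoder of the kind described, one pooling/upsampling pair with a skip connection so the network learns a correction added to the input, can be sketched in Keras; the layer widths are illustrative, not the paper's tuned configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_rcae(shape=(512, 512, 1), n_conv=3, width=32, k=3):
    # Residual convolutional autoencoder: one pooling/upsampling pair,
    # n_conv convolutions on each side, and a skip connection so the
    # network learns a correction added to the input image.
    inp = layers.Input(shape)
    x = inp
    for _ in range(n_conv):
        x = layers.Conv2D(width, k, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    for _ in range(n_conv):
        x = layers.Conv2D(width, k, padding='same', activation='relu')(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(1, k, padding='same')(x)
    return tf.keras.Model(inp, layers.Add()([inp, x]))
```

Training would pair noisy inputs with CLAHE-processed ground truth, e.g. model.compile(optimizer='adam', loss='mse').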
Li, Shengwen Calvin; Kabeer, Mustafa H
2018-02-26
The pediatric-origin cancer stem cell hypothesis holds great promise and potential for adult cancer treatment; however, the road to innovation is full of obstacles, as there are plenty of questions left unanswered. First, the key question is to characterize the nature of such stem cells (concept). Second, quantitative imaging of pediatric stem cells should be implemented (technology). Conceptually, pediatric stem cell origins of adult cancer are based on the notion that plasticity in early-life developmental programming evolves local environments toward cancer. Technologically, such imaging in children is lacking, as all imaging is designed for adult patients. We postulate that the need for quantitative imaging to measure space-time changes of plasticity in early-life developmental programming in children may trigger research and development of the imaging technology. Such quantitative imaging of the pediatric origin of adulthood cancer will help develop a spatiotemporal monitoring system to determine cancer initiation and progression. Clinical validation of this speculative hypothesis that cancer originates in a pediatric environment will help implement a wait-and-watch strategy for cancer treatment.
Hwang, Min Gu; Har, Dong Hwan
2017-11-01
This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and determines the original version of the mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence using mobile phones through differences in the delay times of sound input signals. © 2017 American Academy of Forensic Sciences.
Neural networks applied to discriminate botanical origin of honeys.
Anjos, Ofélia; Iglesias, Carla; Peres, Fátima; Martínez, Javier; García, Ángela; Taboada, Javier
2015-05-15
The aim of this work is to develop a tool based on neural networks to predict the botanical origin of honeys using physical and chemical parameters. The managed database consists of 49 honey samples of 2 different classes: monofloral (almond, holm oak, sweet chestnut, eucalyptus, orange, rosemary, lavender, strawberry tree, thyme, heather, sunflower) and multifloral. The moisture content, electrical conductivity, water activity, ash content, pH, free acidity, colorimetric coordinates in CIELAB space (L*, a*, b*) and total phenol content of the honey samples were evaluated. Those properties were considered as input variables of the predictive model. The neural network is optimised through several tests with different numbers of neurons in the hidden layer and also with different input variables. The low error rates (5%) allow us to conclude that the botanical origin of honey can be reliably and quickly determined from the colorimetric information and the electrical conductivity of honey. Copyright © 2014 Elsevier Ltd. All rights reserved.
Research on Image Encryption Based on DNA Sequence and Chaos Theory
NASA Astrophysics Data System (ADS)
Tian Zhang, Tian; Yan, Shan Jun; Gu, Cheng Yan; Ren, Ran; Liao, Kai Xin
2018-04-01
Nowadays encryption is a common technique to protect image data from unauthorized access. In recent years, many scientists have proposed various encryption algorithms based on DNA sequences, providing a new idea for the design of image encryption algorithms. Therefore, a new method of image encryption based on DNA computing technology is proposed in this paper, in which the original image is encrypted by DNA coding and 1-D logistic chaotic mapping. First, the algorithm uses two modules as the encryption key: the first module uses a real DNA sequence, and the second module is generated by one-dimensional logistic chaos mapping. Secondly, the algorithm uses DNA complementary rules to encode the original image, and uses the key and DNA computing technology to compute each pixel value of the original image, so as to realize the encryption of the whole image. Simulation results show that the algorithm has a good encryption effect and good security.
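A toy version of the pixel-level operation, with 2-bit DNA bases and a logistic-map keystream standing in for the two key modules, is shown below; the paper's specific complementary rule and its use of a real DNA sequence are not reproduced.

```python
import numpy as np

def logistic_key(n, x0=0.3173, mu=3.9999):
    # 1-D logistic map keystream, quantized to 2-bit DNA bases (0..3).
    xs = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1.0 - x0)
        xs[i] = x0
    return (xs * 4).astype(np.uint8) % 4

def dna_encrypt(img_u8, x0=0.3173):
    # Bytes -> 2-bit bases (A, C, G, T), base-wise addition mod 4 with
    # the chaotic key, bases -> bytes. Decryption subtracts the key.
    bits = np.unpackbits(img_u8.ravel()).reshape(-1, 2)
    bases = bits[:, 0] * 2 + bits[:, 1]
    key = logistic_key(len(bases), x0)
    enc = (bases + key) % 4
    out = np.stack([enc // 2, enc % 2], axis=1).astype(np.uint8)
    return np.packbits(out.ravel()).reshape(img_u8.shape)
```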
Sitnikova, Tatiana; Rosen, Bruce R.; Lord, Louis-David; West, W. Caroline
2014-01-01
Adaptive, original actions, which can succeed in multiple contextual situations, require understanding of what is relevant to a goal. Recognizing what is relevant may also help in predicting kinematics of observed, original actions. During action observation, comparisons between sensory input and expected action kinematics have been argued critical to accurate goal inference. Experimental studies with laboratory tasks, both in humans and nonhuman primates, demonstrated that the lateral prefrontal cortex (LPFC) can learn, hierarchically organize, and use goal-relevant information. To determine whether this LPFC capacity is generalizable to real-world cognition, we recorded functional magnetic resonance imaging (fMRI) data in the human brain during comprehension of original and usual object-directed actions embedded in video-depictions of real-life behaviors. We hypothesized that LPFC will contribute to forming goal-relevant representations necessary for kinematic predictions of original actions. Additionally, resting-state fMRI was employed to examine functional connectivity between the brain regions delineated in the video fMRI experiment. According to behavioral data, original videos could be understood by identifying elements relevant to real-life goals at different levels of abstraction. Patterns of enhanced activity in four regions in the left LPFC, evoked by original, relative to usual, video scenes, were consistent with previous neuroimaging findings on representing abstract and concrete stimuli dimensions relevant to laboratory goals. In the anterior left LPFC, the activity increased selectively when representations of broad classes of objects and actions, which could achieve the perceived overall behavioral goal, were likely to bias kinematic predictions of original actions. In contrast, in the more posterior regions, the activity increased even when concrete properties of the target object were more likely to bias the kinematic prediction. Functional connectivity was observed between contiguous regions along the rostro-caudal LPFC axis, but not between the regions that were not immediately adjacent. These findings generalize the representational hierarchy account of LPFC function to diverse core principles that can govern both production and comprehension of flexible real-life behavior. PMID:25224997
40 CFR 98.234 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... In addition, you must operate the optical gas imaging instrument to image the source types required... input or output, or volume of fuel used in paragraphs § 98.233(d), (e), (j), (k), (l), (m), (n), (x), (y...
Evaluation of a video image detection system : final report.
DOT National Transportation Integrated Search
1994-05-01
A video image detection system (VIDS) is an advanced wide-area traffic monitoring system : that processes input from a video camera. The Autoscope VIDS coupled with an information : management system was selected as the monitoring device because test...
Optical correlators for automated rendezvous and capture
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1991-01-01
The paper begins with a description of optical correlation. In this process, the propagation physics of coherent light is used to process images and extract information. The processed image is operated on as an area, rather than as a collection of points. An essentially instantaneous convolution is performed on that image to provide the sensory data. In this process, an image is sensed and encoded onto a coherent wavefront, and the propagation is arranged to create a bright spot where the image matches a model of the desired object. The brightness of the spot provides an indication of the degree of resemblance of the viewed image to the model, and the location of the bright spot provides pointing information. The process can be utilized for AR&C to achieve the capability to identify objects among known reference types, estimate the object's location and orientation, and interact with the control system. System characteristics (speed, robustness, accuracy, small form factors) are adequate to meet most requirements. The correlator exploits the fact that bosons, unlike fermions, pass through each other. Since the image source is input as an electronic data set, conventional imagers can be used. In systems where the image is input directly, the correlating element must be at the sensing location.
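The digital analogue of this operation is a matched-filter correlation: the scene spectrum is multiplied by the conjugate reference spectrum and transformed back, so the peak height measures resemblance and its position gives the pointing information. A minimal numpy sketch:

```python
import numpy as np

def correlate_matched(scene, ref):
    # Matched spatial filter: scene spectrum times conjugate reference
    # spectrum; the correlation peak marks the match and its location.
    S = np.fft.fft2(scene)
    R = np.fft.fft2(ref, s=scene.shape)
    corr = np.abs(np.fft.ifft2(S * np.conj(R)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak
```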
A Unified Framework for Street-View Panorama Stitching
Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei
2016-01-01
In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured by cameras mounted on a mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center, from scenes with depth differences with respect to the cameras to different extents, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignments. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm via matching extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
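The final blending step is a standard construction; a compact OpenCV sketch is shown below, assuming the two images are already warped and color-corrected, share the same shape (with dimensions divisible by 2^levels), and the seam mask is a float image in [0, 1] of the same shape.

```python
import cv2
import numpy as np

def laplacian_blend(a, b, mask, levels=4):
    # Blend Laplacian pyramids of two aligned images under a Gaussian
    # pyramid of the seam mask, then collapse the pyramid back.
    ga = [a.astype(np.float32)]
    gb = [b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    out = ga[-1] * gm[-1] + gb[-1] * (1 - gm[-1])   # coarsest level
    for i in range(levels - 1, -1, -1):
        size = ga[i].shape[1::-1]                   # (width, height)
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        out = cv2.pyrUp(out, dstsize=size) + la * gm[i] + lb * (1 - gm[i])
    return np.clip(out, 0, 255).astype(np.uint8)
```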
Hybrid active contour model for inhomogeneous image segmentation with background estimation
NASA Astrophysics Data System (ADS)
Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun
2018-03-01
This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.
Natural uranium and thorium isotopes in sediment cores off Malaysian ports
NASA Astrophysics Data System (ADS)
Yusoff, Abdul Hafidz; Sabuti, Asnor Azrin; Mohamed, Che Abd Rahim
2015-06-01
Sediment cores collected from three Malaysian marine ports, namely Kota Kinabalu, Labuan and Klang, were analyzed to determine the radioactivities of 234U, 238U, 230Th and 232Th and the total organic carbon (TOC) content. The objectives of this study were to determine the factors that control the activity of uranium isotopes and to identify the possible origin of uranium and thorium in these areas. The activities of 234U and 238U show a high positive correlation with TOC in the middle of the sediment core from Kota Kinabalu port. This result suggests that the activity of uranium at Kota Kinabalu port was influenced by organic carbon. The 234U/238U value at the upper layer of Kota Kinabalu port was ≥1.14, while the ratio at Labuan and Klang ports was ≤1.14. These results suggest a reduction process occurred at Kota Kinabalu port, where mobile U(VI) was converted to immobile U(IV) by organic carbon. Therefore, it can be concluded that the major input of uranium at Kota Kinabalu port is sorptive uptake of authigenic uranium from the water column, whereas the major inputs of uranium to Labuan and Klang ports are of detrital origin. The ratio of 230Th/232Th was used to estimate the origin of thorium. The low ratio values (< 1.5) at Labuan and Klang ports support the suggestion that thorium in both areas came from detrital input, while the high ratio (> 1.5) of 230Th/232Th at Kota Kinabalu port suggests an anthropogenic input of 230Th to this area. The source of 230Th is probably phosphate fertilizers used in the oil-palm cultivation in Kota Kinabalu, which is adjacent to the Kota Kinabalu port.
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system capable of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the currently captured image from the video camcorder (or from the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX) technology to create a link between them. For image processing, we developed three main groups of functions: (1) point processing; (2) filtering; and (3) 'others'. Point processing includes rotation, negation and mirroring. The filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. The 'others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system in C/C# on a TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It was demonstrated that our system is adequate for real-time image capturing. Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
Validation of a Low Dose Simulation Technique for Computed Tomography Images
Muenzel, Daniela; Koehler, Thomas; Brown, Kevin; Žabić, Stanislav; Fingerle, Alexander A.; Waldt, Simone; Bendik, Edgar; Zahel, Tina; Schneider, Armin; Dobritz, Martin; Rummeny, Ernst J.; Noël, Peter B.
2014-01-01
Purpose Evaluation of a new software tool for the generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. Materials and Methods Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisitions with a lower dose (simulated 10–80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. Results Image characteristics of simulated low-dose scans were similar to the originals. The mean overall discrepancy of image noise and CT values was −1.2% (range −9% to 3.2%) and −0.2% (range −8.2% to 3.2%), respectively, p>0.05. Confidence intervals of the discrepancies ranged between 0.9–10.2 HU (noise) and 1.9–13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. Conclusion Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities for staff education, protocol optimization and the introduction of new techniques. PMID:25247422
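For intuition only, the sketch below shows the general idea behind low-dose simulation: injecting extra noise so the image matches a lower target mAs. This is not the validated algorithm of the study (which operates on the acquisition data, not on reconstructed images); the simple image-domain Gaussian model and all names here are assumptions.

```python
# Deliberately simplified image-domain sketch: quantum noise variance scales
# roughly as 1/mAs, so the extra standard deviation needed to go from mAs_orig
# down to mAs_target is sigma_orig * sqrt(mAs_orig/mAs_target - 1).
import numpy as np

def simulate_low_dose(img_hu, mAs_orig, mAs_target, sigma_orig):
    """img_hu: CT image in HU; sigma_orig: measured noise SD of the original scan."""
    assert mAs_target < mAs_orig, "target dose must be lower than the original"
    extra_sigma = sigma_orig * np.sqrt(mAs_orig / mAs_target - 1.0)
    return img_hu + np.random.normal(0.0, extra_sigma, img_hu.shape)

phantom = np.random.normal(40.0, 10.0, (256, 256))       # stand-in 100 mAs image
low = simulate_low_dose(phantom, mAs_orig=100, mAs_target=20, sigma_orig=10.0)
```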
A convergent and essential interneuron pathway for Mauthner-cell-mediated escapes.
Lacoste, Alix M B; Schoppik, David; Robson, Drew N; Haesemeyer, Martin; Portugues, Ruben; Li, Jennifer M; Randlett, Owen; Wee, Caroline L; Engert, Florian; Schier, Alexander F
2015-06-01
The Mauthner cell (M-cell) is a command-like neuron in teleost fish whose firing in response to aversive stimuli is correlated with short-latency escapes [1-3]. M-cells have been proposed as evolutionary ancestors of startle response neurons of the mammalian reticular formation [4], and studies of this circuit have uncovered important principles in neurobiology that generalize to more complex vertebrate models [3]. The main excitatory input was thought to originate from multisensory afferents synapsing directly onto the M-cell dendrites [3]. Here, we describe an additional, convergent pathway that is essential for the M-cell-mediated startle behavior in larval zebrafish. It is composed of excitatory interneurons called spiral fiber neurons, which project to the M-cell axon hillock. By in vivo calcium imaging, we found that spiral fiber neurons are active in response to aversive stimuli capable of eliciting escapes. Like M-cell ablations, bilateral ablations of spiral fiber neurons largely eliminate short-latency escapes. Unilateral spiral fiber neuron ablations shift the directionality of escapes and indicate that spiral fiber neurons excite the M-cell in a lateralized manner. Their optogenetic activation increases the probability of short-latency escapes, supporting the notion that spiral fiber neurons help activate M-cell-mediated startle behavior. These results reveal that spiral fiber neurons are essential for the function of the M-cell in response to sensory cues and suggest that convergent excitatory inputs that differ in their input location and timing ensure reliable activation of the M-cell, a feedforward excitatory motif that may extend to other neural circuits. Copyright © 2015 Elsevier Ltd. All rights reserved.
Deng, Rongkang; Kao, Joseph P Y; Kanold, Patrick O
2017-05-09
GABAergic activity is important in neocortical development and plasticity. Because the maturation of GABAergic interneurons is regulated by neural activity, the source of excitatory inputs to GABAergic interneurons plays a key role in development. We show, by laser-scanning photostimulation, that layer 4 and layer 5 GABAergic interneurons in the auditory cortex in neonatal mice (
NASA Technical Reports Server (NTRS)
Helder, Dennis; Choi, Taeyoung; Rangaswamy, Manjunath
2005-01-01
The spatial characteristics of an imaging system cannot be expressed by a single number or simple statement. However, the Modulation Transfer Function (MTF) is one approach to measuring the spatial quality of an imaging system. Basically, the MTF is the normalized spatial frequency response of an imaging system. The frequency response of the system can be evaluated by applying an impulse input. The resulting impulse response is termed the Point Spread Function (PSF). This function is a measure of the amount of blurring present in the imaging system and is itself a useful measure of spatial quality. An underlying assumption is that the imaging system is linear and shift-invariant. The Fourier transform of the PSF is called the Optical Transfer Function (OTF), and the normalized magnitude of the OTF is the MTF. In addition to using an impulse input, a knife-edge input technique has also been used in this project. The sharp edge exercises an imaging system at all spatial frequencies. The profile of an edge response from an imaging system is called an Edge Spread Function (ESF). Differentiation of the ESF results in a one-dimensional version of the Point Spread Function (PSF). Finally, the MTF can be calculated through the Fourier transform of the PSF as stated previously. Every image includes noise to some degree, which makes MTF or PSF estimation more difficult. To avoid the noise effects, many MTF estimation approaches use smooth numerical models. Historically, Gaussian models and Fermi functions were applied to reduce the random noise in the output profiles. The pulse-input method was used to measure the MTF of the Landsat Thematic Mapper (TM) using 8th-order even functions over the San Mateo Bridge in San Francisco, California. Because the bridge width was smaller than the 30-meter ground sample distance (GSD) of the TM, the Nyquist frequency was located before the first zero-crossing point of the sinc function from the Fourier transformation of the bridge pulse. To avoid the zero-crossing points in the frequency domain from a pulse, the pulse width should be less than the width of two pixels (or 2 GSDs), but the short extent of the pulse results in a poor signal-to-noise ratio. Similarly, for a high-resolution satellite imaging system such as Quickbird, the input pulse width was critical because of the zero-crossing points and the noise present in the background area. It is important, therefore, that the width of the input pulse be appropriately sized. Finally, the MTF was calculated by taking the ratio between the Fourier transform of the output and the Fourier transform of the input. Regardless of whether the edge, pulse or impulse target method is used, the orientation of the targets is critical in order to obtain uniformly spaced sub-pixel data points. When the orientation is incorrect, sample data points tend to be located in clusters that result in poor reconstruction of the edge or pulse profiles. Thus, a compromise orientation must be selected so that all spectral bands can be accommodated. This report continues by outlining the objectives in Section 2, procedures followed in Section 3, descriptions of the field campaigns in Section 4, results in Section 5, and a brief summary in Section 6.
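A minimal sketch of the edge-method chain described above (ESF, derivative, LSF, Fourier magnitude, normalization) is given below. Real workflows first fit a smooth model such as a Fermi function to the ESF to suppress noise; that step is omitted here, and the Hann taper is an assumption, not part of the report's procedure.

```python
# ESF -> differentiate -> 1-D LSF -> |FFT| -> MTF normalized to DC.
import numpy as np

def mtf_from_esf(esf: np.ndarray, sample_spacing: float):
    lsf = np.gradient(esf, sample_spacing)        # 1-D line spread function
    lsf = lsf * np.hanning(lsf.size)              # taper to reduce spectral leakage
    otf = np.fft.rfft(lsf)                        # one-sided optical transfer function
    mtf = np.abs(otf) / np.abs(otf[0])            # normalize magnitude to DC
    freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing)
    return freqs, mtf

# Usage with a synthetic blurred edge (spacing in meters for a satellite sensor):
x = np.linspace(-8, 8, 128)
esf = 0.5 * (1 + np.tanh(x / 1.5))                # smooth stand-in edge profile
freqs, mtf = mtf_from_esf(esf, sample_spacing=x[1] - x[0])
```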
Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor
Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung
2018-01-01
Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113
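The final classification stage of such a pipeline can be sketched as below: deep and handcrafted feature vectors are fused and separated into real versus presentation-attack classes with an SVM. The feature extraction itself (CED localization, CNN features) is outside this snippet, and the arrays, dimensions and kernel choice are illustrative assumptions rather than the authors' configuration.

```python
# Feature-level fusion followed by SVM classification (scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_deep = rng.normal(size=(200, 128))   # stand-in CNN feature vectors
X_hand = rng.normal(size=(200, 59))    # stand-in handcrafted (texture) features
y = rng.integers(0, 2, 200)            # 0 = presentation attack, 1 = real iris

X = np.hstack([X_deep, X_hand])        # simple feature-level fusion
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```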
JIP: Java image processing on the Internet
NASA Astrophysics Data System (ADS)
Wang, Dongyan; Lin, Bo; Zhang, Jun
1998-12-01
In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and paid software use.
Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun
2018-05-01
To propose a simultaneous acquisition method for improved hepatic pharmacokinetics quantification accuracy (SAHA) for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage, high spatial-resolution liver dynamic contrast-enhanced images using a golden-angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide results closer to the true values and a lower root mean square error of the estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans provided fair image quality for both the 2D images used for arterial and portal venous input functions and the 3D whole-liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter
2016-06-01
An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.
Reconstruction of nonlinear wave propagation
Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie
2013-04-23
Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagations. Such features can include evanescent waves and peripheral waves, such that an image thus obtained is an inherently wide-angle, far-field form of microscopy.
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the image used as input. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment. The proposed method could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
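The optical principle lends itself to a short digital analogue: adding a slightly displaced negative replica to the positive image cancels flat regions and leaves edges, with the displacement direction selecting the enhanced orientation. This sketch is only an illustration of the idea, not the DMD-based optical setup.

```python
# img + shifted(255 - img) - 255 == img - shifted(img): a directional difference,
# near zero in flat areas and peaked at edges along the shift direction.
import numpy as np

def edge_enhance(img: np.ndarray, dx: int = 1, dy: int = 0) -> np.ndarray:
    neg = 255 - img.astype(int)                       # negative replica (8-bit convention)
    shifted = np.roll(neg, (dy, dx), axis=(0, 1))     # slight displacement
    return img.astype(int) + shifted - 255

gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
horizontal_edges = edge_enhance(gray, dx=0, dy=1)    # enhance edges across rows
```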
Modified-BRISQUE as no reference image quality assessment for structural MR images.
Chow, Li Sze; Rajagopal, Heshalini
2017-11-01
An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced by any new hardware or software in MRI. A highly competitive No-Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures the image quality by using the locally normalized luminance coefficients, which were used to calculate the image features. The modified-BRISQUE model trained a new regression model using MR image features and the Difference Mean Opinion Score (DMOS) from 775 MR images. Two types of benchmarks, objective and subjective assessments, were used as performance evaluators for both the original and modified BRISQUE models. There was a high correlation between the modified-BRISQUE and both benchmarks, higher than that of the original BRISQUE, with a significant percentage improvement in the correlation values. The modified-BRISQUE was statistically better than the original BRISQUE. The modified-BRISQUE model can accurately measure the image quality of MR images. It is a practical NR-IQA model for MR images that does not require reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
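The locally normalized luminance coefficients that BRISQUE-style models build their features from (often called MSCN coefficients) can be computed as below: each pixel is normalized by a local Gaussian-weighted mean and standard deviation. The regression stage of the modified model is omitted, and the window width and stabilizing constant are conventional values, not necessarily those of this paper.

```python
# MSCN coefficients: (I - mu) / (sigma + C) with Gaussian local statistics.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image: np.ndarray, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                      # local mean
    sq_mu = gaussian_filter(img * img, sigma)             # local second moment
    local_std = np.sqrt(np.maximum(sq_mu - mu * mu, 0.0)) # local standard deviation
    return (img - mu) / (local_std + c)
```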
Short-Term Memory Trace in Rapidly Adapting Synapses of Inferior Temporal Cortex
Sugase-Miyamoto, Yasuko; Liu, Zheng; Wiener, Matthew C.; Optican, Lance M.; Richmond, Barry J.
2008-01-01
Visual short-term memory tasks depend upon both the inferior temporal cortex (ITC) and the prefrontal cortex (PFC). Activity in some neurons persists after the first (sample) stimulus is shown. This delay-period activity has been proposed as an important mechanism for working memory. In ITC neurons, intervening (nonmatching) stimuli wipe out the delay-period activity; hence, the role of ITC in memory must depend upon a different mechanism. Here, we look for a possible mechanism by contrasting memory effects in two architectonically different parts of ITC: area TE and the perirhinal cortex. We found that a large proportion (80%) of stimulus-selective neurons in area TE of macaque ITCs exhibit a memory effect during the stimulus interval. During a sequential delayed matching-to-sample task (DMS), the noise in the neuronal response to the test image was correlated with the noise in the neuronal response to the sample image. Neurons in perirhinal cortex did not show this correlation. These results led us to hypothesize that area TE contributes to short-term memory by acting as a matched filter. When the sample image appears, each TE neuron captures a static copy of its inputs by rapidly adjusting its synaptic weights to match the strength of their individual inputs. Input signals from subsequent images are multiplied by those synaptic weights, thereby computing a measure of the correlation between the past and present inputs. The total activity in area TE is sufficient to quantify the similarity between the two images. This matched filter theory provides an explanation of what is remembered, where the trace is stored, and how comparison is done across time, all without requiring delay period activity. Simulations of a matched filter model match the experimental results, suggesting that area TE neurons store a synaptic memory trace during short-term visual memory. PMID:18464917
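The matched-filter hypothesis can be made concrete with a toy computation: a "neuron" copies its sample-period input into its synaptic weights, then responds to later inputs with the inner product, which measures the correlation between past and present stimuli. This is purely a conceptual sketch of the model, not the authors' simulation code; all quantities are synthetic.

```python
# Toy matched filter: weights = sample input; response = weights . test input.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(size=500)                     # afferent drive during the sample image
weights = sample.copy()                           # rapid synaptic copy of the sample input

match = sample + 0.3 * rng.normal(size=500)       # test image similar to the sample
nonmatch = rng.normal(size=500)                   # unrelated intervening image

print("match response   :", weights @ match)      # large: past resembles present
print("nonmatch response:", weights @ nonmatch)   # near zero
```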
NASA Astrophysics Data System (ADS)
Frommholz, D.; Linkiewicz, M.; Poznanska, A. M.
2016-06-01
This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that at the same time are being used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland/Germany comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows and good visual quality when being rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the roof, wall and ground surfaces found get intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are being copied into a compact texture atlas without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and conditionally grown, fused and filtered morphologically. The output polygons are vectorized and reintegrated into the previously reconstructed buildings by sparsely ray-tracing their vertices. Finally, the enhanced 3D models get stored as textured geometry for visualization and semantically annotated "LOD-2.5" CityGML objects for GIS applications.
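A from-scratch RANSAC plane fit of the kind used in the local regression step might look as follows. The iteration count and inlier tolerance are illustrative assumptions, not the paper's values, and the surrounding topology analysis is omitted.

```python
# Minimal RANSAC plane fit: repeatedly fit a plane to 3 random points and keep
# the model with the most inliers within a distance tolerance.
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 200, tol: float = 0.05, rng=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud; return (n, d, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                                  # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol        # point-to-plane distances
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```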
Tao, Qian; Milles, Julien; van Huls van Taxis, Carine; Lamb, Hildo J; Reiber, Johan H C; Zeppenfeld, Katja; van der Geest, Rob J
2012-01-01
Integration of preprocedural delayed enhanced magnetic resonance imaging (DE-MRI) with electroanatomical voltage mapping (EAVM) may provide additional high-resolution substrate information for catheter ablation of scar-related ventricular tachycardias (VT). Accurate and fast image integration of DE-MRI with EAVM is desirable for MR-guided ablation. Twenty-six VT patients with large transmural scar underwent catheter ablation and preprocedural DE-MRI. With different registration models and EAVM input, 3 image integration methods were evaluated and compared to the commercial registration module CartoMerge. The performance was evaluated both in terms of a distance measure that describes surface matching and a correlation measure that describes actual scar correspondence. Compared to CartoMerge, the method that uses the translation-and-rotation model and high-density EAVM input resulted in a registration error of 4.32 ± 0.69 mm as compared to 4.84 ± 1.07 mm (P < 0.05); the method that uses the translation model and high-density EAVM input resulted in a registration error of 4.60 ± 0.65 mm (P = NS); and the method that uses the translation model and a single anatomical landmark input resulted in a registration error of 6.58 ± 1.63 mm (P < 0.05). No significant difference in scar correlation was observed between all 3 methods and CartoMerge (P = NS). During VT ablation procedures, accurate integration of EAVM and DE-MRI can be achieved using a translation registration model and a single anatomical landmark. This model allows for image integration in minimal mapping time and is likely to reduce fluoroscopy time and increase procedure efficacy. © 2011 Wiley Periodicals, Inc.
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
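Once an input function is available, the Logan graphical analysis used to compute V(T) is itself a short calculation: after a cut-off time t*, the plot of the normalized tissue integral against the normalized input integral becomes linear with slope V(T). The sketch below assumes precomputed time-activity arrays and a chosen t*; it is a generic Logan implementation, not the authors' code.

```python
# Logan plot: y = int(C_t)/C_t vs x = int(C_p)/C_t; slope ~ V_T for t >= t*.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_vt(t, ct, cp, t_star):
    """t: frame mid-times (min); ct: tissue TAC; cp: metabolite-corrected plasma input."""
    int_ct = cumulative_trapezoid(ct, t, initial=0.0)   # running integral of tissue curve
    int_cp = cumulative_trapezoid(cp, t, initial=0.0)   # running integral of input function
    mask = t >= t_star                                  # linear portion of the plot
    x = int_cp[mask] / ct[mask]
    y = int_ct[mask] / ct[mask]
    slope, _intercept = np.polyfit(x, y, 1)             # slope is the distribution volume
    return slope
```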
NASA Astrophysics Data System (ADS)
Trusiak, M.; Patorski, K.; Tkaczyk, T.
2014-12-01
We propose a fast, simple and experimentally robust method for reconstructing background-rejected optically-sectioned microscopic images using a two-shot structured illumination approach. The innovative data demodulation technique requires two grid-illumination images mutually phase-shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames, an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculating a high-contrast optically-sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with the results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique, we need one frame less, and no stringent requirement on the exact phase shift between recorded frames is imposed. The HiLo algorithm outcome is strongly dependent on the set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η parameter value for optically-sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to efficiently demodulate the input pattern makes the proposed method well suited for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a medium-class PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification that extracts only the first two BIMFs with a fixed filter window size reduces the computing time to 0.11 s (8 frames/s).
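The core demodulation idea can be sketched compactly: subtract the two π-shifted grid images so the background cancels, then take a 2-D analytic-signal magnitude using the spiral phase (vortex) filter in the Fourier domain. The EFEMD denoising step is omitted here, and this is an illustrative reading of the method, not the authors' implementation.

```python
# Two-shot sectioning: d = I0 - I1; amplitude via 2-D spiral Hilbert transform.
import numpy as np

def two_shot_section(i0: np.ndarray, i1: np.ndarray) -> np.ndarray:
    d = i0.astype(float) - i1.astype(float)            # background cancels, grid term doubles
    ky, kx = np.meshgrid(np.fft.fftfreq(d.shape[0]),
                         np.fft.fftfreq(d.shape[1]), indexing="ij")
    spiral = np.exp(1j * np.arctan2(ky, kx))           # spiral phase function
    spiral[0, 0] = 0.0                                 # phase undefined at DC
    hs = np.fft.ifft2(np.fft.fft2(d) * spiral)         # 2-D spiral Hilbert transform
    return np.sqrt(d ** 2 + np.abs(hs) ** 2)           # demodulated sectioned image
```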
Characterization of random scattering media and related information retrieval
NASA Astrophysics Data System (ADS)
Wang, Zhenyu
There has been substantial interest in optical imaging in and through random media in applications as diverse as environmental sensing and tumor detection. The rich scatter environment also leads to multiple paths or channels, which may provide higher capacity for communication. Coherent light passing through random media produces an intensity speckle pattern when imaged, as a result of multiple scatter and the imaging optics. When polarized coherent light is used, the speckle pattern is sensitive to the polarization state, depending on the amount of scatter, and such measurements provide information about the random medium. This may form the basis for enhanced imaging of random media and provide information on the scatterers themselves. Second and third order correlations over laser scan frequency are shown to lead to the ensemble averaged temporal impulse response, with sensitivity to the polarization state in the more weakly scattering regime. A new intensity interferometer is introduced that provides information about two signals incident on a scattering medium. The two coherent beams, which are not necessarily overlapping, interfere in a scattering medium. A sinusoidal modulation in the second order intensity correlation with laser scan frequency is shown to be related to the relative delay of the two incident beams. An intensity spatial correlation over input position reveals that decorrelation occurs over a length comparable to the incident beam size. Such decorrelation is also related to the amount of scatter. Remarkably, with two beams incident at different angles, the intensity correlation over the scan position has a sinusoidal modulation that is related to the incidence angle difference between the two input beams. This spatial correlation over input position thus provides information about input wavevectors.
Image thumbnails that represent blur and noise.
Samadani, Ramin; Mauer, Timothy A; Berfanger, David M; Clark, James H
2010-02-01
The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise-generating component improves the results for noisy images, but degrades the results for textured images. The blur-generating component of the new thumbnails may always be used to advantage. The decision to use the noise-generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
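The rendering half of the blur-generating component can be approximated as follows: given a per-pixel blur estimate, blend between progressively smoothed copies of the standard thumbnail (a small scale-space). The blur-estimation and noise components are omitted, and the level spacing and blending scheme are assumptions for illustration, not the paper's algorithm.

```python
# Space-varying blur via linear interpolation between scale-space levels.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_aware_thumbnail(thumb, blur_map, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """thumb: (H, W) grayscale; blur_map: (H, W) float levels into `sigmas`."""
    stack = np.stack([gaussian_filter(thumb.astype(float), s) for s in sigmas])
    levels = np.clip(blur_map, 0, len(sigmas) - 1)
    lo = np.floor(levels).astype(int)                  # lower scale-space level
    hi = np.minimum(lo + 1, len(sigmas) - 1)           # upper scale-space level
    frac = levels - lo                                 # blend weight between levels
    rows, cols = np.indices(thumb.shape)
    return (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]
```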
Stergiadis, Sokratis; Bieber, Anna; Chatzidimitriou, Eleni; Franceschin, Enrica; Isensee, Anne; Rempelos, Leonidas; Baranski, Marcin; Maurer, Veronika; Cozzi, Giulio; Bapst, Beat; Butler, Gillian; Leifert, Carlo
2018-06-15
This study investigated the effect of, and interactions between, US Brown Swiss (BS) genetics and season on milk yield, basic composition and fatty acid profiles, from cows on low-input farms in Switzerland. Milk samples (n = 1,976) were collected from 1,220 crossbreed cows with differing proportions of BS, Braunvieh and Original Braunvieh genetics on 40 farms during winter-housing and summer-grazing. Cows with more BS genetics produced more milk in winter but not in summer, possibly because of underfeeding potentially high-yielding cows on low-input pasture-based diets. Cows with more Original Braunvieh genetics produced milk with more (i) nutritionally desirable eicosapentaenoic and docosapentaenoic acids, throughout the year, and (ii) vaccenic and α-linolenic acids, total omega-3 fatty acid concentrations and a higher omega-3/omega-6 ratio only during summer-grazing. This suggests that overall milk quality could be improved by re-focussing breeding strategies on cows' ability to respond to local dietary environments and seasonal dietary changes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Building Extraction from Remote Sensing Data Using Fully Convolutional Networks
NASA Astrophysics Data System (ADS)
Bittner, K.; Cui, S.; Reinartz, P.
2017-05-01
Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework made it possible to perform dense pixel-wise classification of input images. Based on these abilities we propose a methodology which automatically generates a full-resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using depth information is that it provides geometrical silhouettes and allows a better separation of buildings from the background, as well as invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) inputs and available ground-truth building masks as target outputs. Secondly, the generated predictions from the FCN are viewed as unary terms for a Fully Connected Conditional Random Field (FCRF), which enables us to create the final binary building mask. A series of experiments demonstrate that our methodology is able to extract accurate building footprints which are close to the buildings' original shapes to a high degree. The quantitative and qualitative analysis shows significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.
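A minimal FCN-style network for dense binary prediction from a single-channel nDSM patch is sketched below in PyTorch. The architecture, channel counts and patch size are illustrative assumptions, not the configuration used in the paper, and the FCRF post-processing is omitted.

```python
# Tiny encoder-decoder FCN: nDSM patch in, per-pixel building logits out.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # downsample by 2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # back to full resolution
            nn.Conv2d(16, 1, 1),                             # per-pixel building logit
        )

    def forward(self, ndsm):
        return self.decoder(self.encoder(ndsm))

logits = TinyFCN()(torch.randn(1, 1, 128, 128))  # unary terms for the CRF / sigmoid mask
```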
Image-based information, communication, and retrieval
NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.
1980-01-01
IBIS/VICAR system combines video image processing and information management. Flexible programs require user to supply only parameters specific to particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. Program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on IBM 360.
Integrated infrared and visible image sensors
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2000-01-01
Semiconductor imaging devices integrating an array of visible detectors and another array of infrared detectors into a single module to simultaneously detect both the visible and infrared radiation of an input image. The visible detectors and the infrared detectors may be formed either on two separate substrates or on the same substrate by interleaving visible and infrared detectors.
Menzel, Claudia; Kovács, Gyula; Amado, Catarina; Hayn-Leichsenring, Gregor U; Redies, Christoph
2018-05-06
In complex abstract art, image composition (i.e., the artist's deliberate arrangement of pictorial elements) is an important aesthetic feature. We investigated whether the human brain detects image composition in abstract artworks automatically (i.e., independently of the experimental task). To this aim, we studied whether a group of 20 original artworks elicited a visual mismatch negativity when contrasted with a group of 20 images that were composed of the same pictorial elements as the originals, but in shuffled arrangements, which destroy artistic composition. We used a passive oddball paradigm with parallel electroencephalogram recordings to investigate the detection of image type-specific properties. We observed significant deviant-standard differences for the shuffled and original images, respectively. Furthermore, for both types of images, differences in amplitudes correlated with the behavioral ratings of the images. In conclusion, we show that the human brain can detect composition-related image properties in visual artworks in an automatic fashion. Copyright © 2018 Elsevier B.V. All rights reserved.
A New Quantum Gray-Scale Image Encoding Scheme
NASA Astrophysics Data System (ADS)
Naseri, Mosayeb; Abdolmaleky, Mona; Parandin, Fariborz; Fatahi, Negin; Farouk, Ahmed; Nazari, Reza
2018-02-01
In this paper, a new quantum image encoding scheme is proposed. The proposed scheme mainly consists of four different encoding algorithms. The idea behind the scheme is a binary key generated randomly for each pixel of the original image. Afterwards, the employed encoding algorithm is selected according to the qubit pair of the generated randomized binary key. The security analysis of the proposed scheme proved its enhancement through both the randomization of the generated binary image key and the alteration of the gray-scale values of the image pixels using the qubits of the randomized binary key. The simulation of the proposed scheme assures that the final encoded image cannot be recognized visually. Moreover, the histogram of the encoded image is flatter than that of the original. The Shannon entropies of the final encoded images are significantly higher than that of the original, which indicates that an attacker cannot gain any information about the encoded images. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, IRAN
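The two evaluation metrics cited above are easy to reproduce classically: the gray-level histogram flatness and the Shannon entropy of an encoded image, where entropy near 8 bits/pixel for an 8-bit image indicates the cipher image leaks little information. This sketch only reproduces the metrics on a stand-in cipher image, not the quantum encoding itself.

```python
# Shannon entropy of an 8-bit image from its gray-level histogram.
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins (log2(0) undefined)
    return float(-(p * np.log2(p)).sum())

encoded = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in cipher image
print(f"entropy: {shannon_entropy(encoded):.3f} bits/pixel")     # ~8 for a flat histogram
```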
TH-A-BRF-11: Image Intensity Non-Uniformities Between MRI Simulation and Diagnostic MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, E
2014-06-15
Purpose: MRI simulation for MRI-based radiotherapy demands that patients be set up in treatment position, which frequently involves the use of alternative radiofrequency (RF) coil configurations to accommodate immobilized patients. However, alternative RF coil geometries may exacerbate image intensity non-uniformities (IINU) beyond those observed in diagnostic MRI, which may challenge image segmentation and registration accuracy as well as confound studies assessing radiotherapy response when MR simulation images are used as baselines for evaluation. The goal of this work was to determine whether differences in IINU exist between MR simulation and diagnostic MR images. Methods: ACR-MRI phantom images were acquired at 3T using a spin-echo sequence (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5 mm). MR simulation images were obtained by wrapping two flexible phased-array RF coils around the phantom. Diagnostic MR images were obtained by placing the phantom into a commercial phased-array head coil. Pre-scan normalization was enabled in both cases. Images were transferred offline and corrected for IINU using the MNI N3 algorithm. Coefficients of variation (CV = σ/μ) were calculated for each slice. Wilcoxon matched-pairs and Mann-Whitney tests compared CV values between original and N3 images and between MR simulation and diagnostic MR images. Results: Significant differences in CV were detected between original and N3 images in both the MRI simulation and diagnostic MRI groups (p=0.010, p=0.010). In addition, significant differences in CV were detected between original MR simulation images and both original and N3 diagnostic MR images (p=0.0256, p=0.0016). However, no significant differences in CV were detected between N3 MR simulation images and original or N3 diagnostic MR images, demonstrating the importance of correcting MR simulation images beyond pre-scan normalization prior to use in radiotherapy. Conclusions: Alternative RF coil configurations used in MRI simulation can result in significant IINU differences compared to diagnostic MR images. The MNI N3 algorithm reduced MR simulation IINU to levels observed in diagnostic MR images. Funding provided by Advancing a Healthier Wisconsin.
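The per-slice uniformity metric used in this comparison is a one-liner: the coefficient of variation CV = σ/μ over the phantom voxels of each slice. Masking out background voxels is assumed to have been done beforehand; this sketch is generic, not the study's analysis code.

```python
# Coefficient of variation per slice of a (n_slices, H, W) phantom volume.
import numpy as np

def slice_cv(volume: np.ndarray) -> np.ndarray:
    flat = volume.reshape(volume.shape[0], -1)       # flatten each slice
    return flat.std(axis=1) / flat.mean(axis=1)      # sigma / mu per slice
```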
ERIC Educational Resources Information Center
Silver, Steven S.
FMS/3 is a system for producing hard copy documentation at high speed from free format text and command input. The system was originally written in assembler language for a 12K IBM 360 model 20 using a high speed 1403 printer with the UCS-TN chain option (upper and lower case). Input was from an IBM 2560 Multi-function Card Machine. The model 20…
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting the glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.
Compact optical processor for Hough and frequency domain features
NASA Astrophysics Data System (ADS)
Ott, Peter
1996-11-01
Shape recognition is necessary in a broad band of applications such as traffic sign or work piece recognition. It requires not only neighborhood processing of the input image pixels but also global interconnection of them. The Hough transform (HT) performs such a global operation, and it is well suited to the preprocessing stage of a shape recognition system. Translation-invariant features can be easily calculated from the Hough domain. We have implemented on the computer a neural network shape recognition system which contains a HT, a feature extraction, and a classification layer. The advantage of this approach is that the total system can be optimized with well-known learning techniques and that it can exploit the parallelism of the algorithms. However, the HT is a time-consuming operation. Parallel, optical processing is therefore advantageous. Several systems have been proposed, based on space multiplexing with arrays of holograms and CGHs, on time multiplexing with acousto-optic processors, or on image rotation with incoherent and coherent astigmatic optical processors. We took up the last-mentioned approach because 2D array detectors are read out line by line, so a 2D detector can achieve the same speed and is easier to implement. Coherent processing can allow the implementation of filters in the frequency domain. Features based on wedge/ring, Gabor, or wavelet filters have been proven to show good discrimination capabilities for texture and shape recognition. The astigmatic lens system which is derived from the mathematical formulation of the HT is long and contains a non-standard, astigmatic element. By methods of lens transformations for coherent applications, we map the original design to a shorter lens with a smaller number of well-separated standard elements and with the same coherent system response. The final lens design still contains the frequency plane for filtering, and ray-tracing shows diffraction-limited performance. Image rotation can be done optically by a rotating prism. We realize it on a fast FLC-SLM of our lab as the input device. The filters can be implemented on the same type of SLM with 128 by 128 square pixels, resulting in a total length of the lens of less than 50 cm.
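For reference, the line Hough transform that the optical processor evaluates in parallel is shown below in a small digital implementation: each edge pixel votes for all (ρ, θ) lines passing through it, which is exactly the global interconnection of pixels mentioned above. The angular resolution is an illustrative choice.

```python
# Line Hough transform: accumulate votes over rho = x*cos(theta) + y*sin(theta).
import numpy as np

def hough_lines(edges: np.ndarray, n_theta: int = 180):
    """edges: binary edge map; returns (accumulator, theta values)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))                   # maximum possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                            # coordinates of edge pixels
    for theta_idx, theta in enumerate(thetas):
        rho = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int) + diag
        np.add.at(acc, (rho, theta_idx), 1)               # every pixel votes for this theta
    return acc, thetas
```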
Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B
2012-11-15
Quantitative PET studies of neuroreceptor tracers typically require that the arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [(11)C](R)-rolipram kinetic analysis, with the goal of reducing, and possibly eliminating, the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [(11)C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan-V(T) values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [(11)C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V(T) ratio 1.02±0.05; mean±SD) and Group 3 (V(T) ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V(T) ratio 1.07±0.04 and 0.99±0.04, respectively). Results obtained via PBIF were equivalent to those obtained via IDIF (V(T) ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). The retest variability of PBIF was equivalent to that obtained with the full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [(11)C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [(11)C](R)-rolipram binding as compared to controls (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to the full arterial input function for [(11)C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions. Published by Elsevier Inc.
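The invasive scaling step that performed best reduces to a single multiplicative factor: the population curve is rescaled so that it passes through the subject's one measured arterial sample. A minimal sketch, assuming arrays for the population curve and one sample (here at 40 min, matching the related study above); this is an illustration of the principle, not the authors' pipeline.

```python
# Scale a population-based input function through one measured blood sample.
import numpy as np

def scale_pbif(t, pbif, t_sample, measured_activity):
    """t: minutes; pbif: population curve; returns the individually scaled PBIF."""
    pbif_at_sample = np.interp(t_sample, t, pbif)        # population value at sample time
    return pbif * (measured_activity / pbif_at_sample)   # one global scale factor

t = np.linspace(0, 90, 181)
pbif = np.exp(-t / 20.0)                                 # stand-in population curve
scaled = scale_pbif(t, pbif, t_sample=40.0, measured_activity=0.10)
```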